Linux Kernel 2.2.14

So everyone and their uncle wrote in to tell us that Linux 2.2.14 has officially been released. If your uptime isn't too sacred to ya, it may be worth upgrading. You know where to get the good stuff if ya need it.
  • Freudian logic would suggest that guys with long uptimes are compensating for a certain "shortcoming" in other areas, if you know what I mean. So since you don't think a long uptime is a big deal, you probably have a long penis and don't need to compensate.

    Of course you probably know that and are trying to trick us into thinking you don't care about long uptimes, to make us think you have a long penis. What kind of excuse is that? Script kiddies? Puh-leeeze. You might as well have said that you don't want to ruin the environment by consuming the electricity needed for longer uptimes. Like anyone would believe that.

    So I'm calling your bluff. I bet you've got really long uptimes.

    And for the record, my uptime is just the right size. Not too big, not too small. So don't even bother asking.

  • by Anonymous Coward
    I'm new to linux. What's the purpose of upgrading the kernel? is that something you need to do a lot? if your system is running ok, do you still need to upgrade?

    I'm not actually planning on doing it, as I'm not proficient enough to recompile and replace a kernel yet, but I need to know how soon I have to try and take this step.

    thanks.

  • by Anonymous Coward
    DISCLAIMER: This is _not_ a troll or flame - just one person's view of the situation with Linux (and FreeBSD for that matter...)

    I recently switched from Linux to OpenBSD and NetBSD on my ix86 and Alpha machines after a year's evaluation of the free *BSDs on a couple of spare boxes; now I'm going to detail the reasons for the switch and elicit (hopefully constructive) feedback from /. readers.

    Mind you, I did not switch for any political reasons (i.e. - GPL/FSF vs. BSD/MIT/X) - I actually prefer the 'restriction' of always-available source-code+improvements of the GPL and am philosophically aligned with the FSF.

    Nor did I switch out of a preference for a 'Cathedral' development model over the 'Bazaar'.

    Though I'm quite happy with most of the developers and contributors, I switched because I am becoming increasingly dissatisfied with the general, overall _attitude_ and _focus_ of the Linux community and, moreover, the practical and pragmatic potential pitfalls that I see Linux moving towards.

    Here's a synopsis of my view:

    1. Preoccupation with Microsoft

    Why is the Linux community _so_ vocal about their dissatisfaction/hatred of Microsoft Corporation? Isn't this obsession somewhat absurd?

    If proprietary and monopolistic are so terrible, why the soft spot for (or at least the much milder criticism of) Apple, Be, Sun, and other Unix/hardware vendors? All of the above are really no better than MS and would *love* to be in Microsoft's place, reducing consumer choice and limiting their own liability.

    How is this a liability to the Linux community? How about the proposed khttpd? Four years ago, a kernel service like this would have been unthinkable and considered anathema to good Unix design; the post-Mindcraft attitude makes this permissible.

    Is 'one-upping' NT4+IIS by introducing real dangers all that necessary to win corporate/popular mindshare? Why is this happening?

    2. Lack of focus and the 'World-Domination' syndrome

    Lack of focus in the general Linux community probably stems from the 'Bazaar' development model. While not necessarily bad (it is indeed the most democratic and fair development model, much more so than the *BSDs'), now that Linux has become more popular, the community suffers from the 'World Domination' syndrome.

    Does _every_ device need to be supported under Linux? Even devices whose quality and performance are questionable, when similarly-priced, higher-quality devices are available?

    Personally, I use devices and only spec devices to others that I know are well-supported and whose vendors provide specs/development help to developers. For example, I only use Buslogic Multimaster and QLogic ISP-based SCSI controllers because they work, they don't time out, and their support does not vary (greatly) from one firmware revision to the next. The manufacturers have seen Linux as a viable market from the beginning and have provided specs/documentation to minimize the amount of reverse-engineering for developers. These are the vendors we should support, and perhaps the others will see free software as a viable market as well.

    However, the Linux community wants *every* device supported, seemingly again to 'one-up' the other OSes (*BSDs, but particularly MS) and 'win over' more users. And as a result, the quality of support for many devices ranges from excellent to severely abysmal...

    Can we decide on a range of devices to support (the level proportional to the amount of support from the vendors) and _really_ support those devices?

    3. Lastly, the details and fine-tuning code.

    For the most part, the kernel maintainers do an excellent job considering the volume and rapid rate of development on the kernel. Even so, is this rapid rate of development really conducive to a necessarily better kernel and/or OS? Also, are we ignoring potential pitfalls with the 'release early and often' methodology?

    The details (an admittedly minor but poignant example): disk device files and partitioning:

    • Is the /dev/[s,h]da entry really satisfactory? Considering that Linux is moving towards enterprise computing with multiple disk controllers as a standard, shouldn't we be moving towards a controller-disk-partition naming scheme?
    • Speaking of disk devices, shouldn't we drop the DOS disklabelling and go with BSD disklabels? They seem to make much more sense...
    • Smaller details like NFS have always seemed to be ignored; in my situation, NFS seems more broken with 2.2.x than it was in 2.0.x. And we're not even at NFS v3.0 yet. Again, if we're going to pursue the enterprise computing realm (on heterogeneous networks), things like this must be addressed sooner than support for the latest 3dfx game card.
    • Bugfixes in the Linux world are indeed rapid, but shouldn't we also consider how our fixes now can affect development in the future?

    Let's compare this to the *BSD world... addressing the issues above.

    1. The *BSDs are obviously not MS-friendly, but they don't seem to have many loud-mouths (e.g. ESR) disparaging the Gates Empire at every opportunity. Though beset by a lack of publicity (I believe they are waiting for the Linux hype to subside before pursuing the limelight), their users are growing, and increasingly include many ex-Linux users :(

    2., 3. The *BSD teams are exceptionally well-focused, and that shows in the overall quality of code and distributions. Certainly the development is far less democratic, but it seems the users truly benefit from the focus on 'getting it right the first time.'

    The hardware support (peripherals, not architectures, since NetBSD supports just about every arch) in the *BSDs is limited in comparison to Linux, but there is a marked difference - nearly all the supported hardware works well. For a new user (or a boss you're trying to impress), this is definitely preferable to the wide-inclusion/spotty-performance situation we have under Linux.

    Also, NFS works perfectly under *BSD. This is probably the number one reason why I switched.

    Although the IP stack under Linux is good, it's even better on *BSD.

    *BSD pays attention to the details. Bug fixes are fairly rapid (though not instantaneous as under Linux), but the maintainers always strive to make *correct* bug fixes - that is, fixes that won't become a potential problem six, ten, or twelve months from now...

    *BSDs use proper disklabels, partitioning and device file naming schemes from the start; no need to worry about managing multiple host adapters, etc.

    Sorry for the ranting... actually, I hope this will be considered constructive criticism.

    The Linux community _does_ have an unusually high proportion of talented programmers, but, because of some of the more _social_ issues above, may lose mindshare and advantage to other free OSes...

    ~AC

    Okay, I'll concede... Actually, I find that the Debian [debian.org] distribution, with its long release cycles, thorough and robust package management and exceptionally high-quality (in the Linux world) packages is the Linux distribution that approaches the quality of Net/OpenBSD. They obviously have extremely stringent standards (though not quite at *BSD level) and seem to attempt to 'get it right the first time'. Kudos to the Debian team!

  • by Anonymous Coward
    general rule (which doesn't only apply to kernels): if you don't need to upgrade, don't. reasons for upgrading might be:
    1. you have some hardware, and a driver for it was added/fixed in the new kernel
    2. there was some grievous security hole that was fixed (not the case with this release)
    3. there was some grievous file corruption bug that was fixed (again, not the case)
    4. for some reason, you upgrade a different piece of software, and it requires a newer kernel (far more common when upgrading from 2.0 to 2.2 than from 2.2.13 to 2.2.14, so it probably won't apply)
    5. you feel the need for whiz-bang features, in which case you're probably already running the unstable kernels :), and consequently you wouldn't be asking this question
    usually, reason #1 is the reason for upgrading. if you look at the changelogs for stable kernels, you'll find that the vast majority of changes are tiny (or occasionally not so tiny) fixes with obscure pieces of hardware that run on obscure systems (okay, maybe a slight exaggeration). anyway, unlike with the 2.3 kernels, you won't find any new whiz-bang features in the 2.2 kernels, so if you don't need to upgrade, don't!
  • NcFTP Software [ncftpd.com] makes a lot of great claims about the loads that their server can handle, but I have yet to run into a heavily hit server that is running one. It would make a much cooler selling point if they could say it survives being a mirror of stable Linux kernel updates.
  • by Kirth ( 183 )
    I immediately applied the 0.02c ext3 patch; patch had no problems (except two obvious rejects of 2 lines of code in the kdb patch, and a further 2 lines in the ext3 patch), but it failed to compile... I guess I'll have to wait until Stephen Tweedie fixes this. And of course, the missing ext3 is the sole reason that keeps me from trying out the 2.3.x kernels.

    Kirth
  • Ah, the great Slashdot Moderation Catch-22. Moderators only pay attention when a story first appears, using up their points early. After it gets over 100 posts, they only read the high-scoring ones other moderators have bumped up, sometimes bumping them up further. So you end up with a bunch of 4 or 5 point posts, and all the others remain at the bottom of the barrel. Only the truly brave dare go down there and read them.
  • Or, given that I've not actually contributed to the kernel (yet),

    Ah, let me tell you the joy one feels as a bona fide kernel contributor. I remember it like it was yesterday. Picture it: 1997 (or so), I guess something in the 2.1 line. A kernel is released, and some goofy driver that I used, a sound card or something, has a typo. Seizing the opportunity, I expertly craft a patch to handle this mishap, using all the resources at my disposal. Then, crossing my fingers, I wade through the compile to ensure everything works. 15 minutes later, it's compiled.

    Then, the ever so daunting reboot-and-test routine. I edit lilo.conf with a fury like I've never edited lilo.conf before. Run it, reboot, and closely watch the scrolling messages. Then, like the sun rising in the morning, I see it. Before my eyes are the startup lines for that piece of hardware, the IRQ, the I/O address. It was a beauty I had only seen before when I actually paid attention to the bootup. Finally, after all was booted up, I did the final test. Playing a sound file, or whatever the thing was. And it worked! "Eureka!!" I shouted from my desk. The neighbor's dog starts barking. "Yes, boy. It is true. I have fixed the kernel," I reassure the young pup.

    Hastily I booted up my PPP scripts. The modem fires up with its random assortment of buzzes and bings. Then I see those magical characters from my ISP: "Your IP is now: 123.45.67.89." (IP changed to protect the innocent.) Without skipping a beat, I fire up pine and type one heck of an email to the kernel list. "Um, there was a typo in the xxx driver of 2.1.xx. Patch below," (an approximation, my memory isn't that good).

    Anxiously I waited, day after day. "When is that crazy Finn gonna release a new kernel with my patch??" I asked myself. Each morning I awoke and checked the sunsite mirrors. With each passing day I only got more anxious to see my patch in all its glory. And then, one day, it happened. All my vigilant waiting had finally paid off. Linus released another revision of the kernel! I downloaded that patch like no other patch in the world, gunzipped it, and fired up less. Remembering which file it was I had so skillfully edited, I executed a search for it. Then a quiet peace fell upon the world. It was as though all the powers in the universe were converging to celebrate my patch. For there, upon that console screen, on that cold, wintry day (I'm guessing; it adds atmosphere to the story), I saw the modification over which I had toiled so diligently. I was now a Linux kernel contributor.

    That is my story, and that is all I have to say about that.
  • by demon ( 1039 )
    No. It works as another PCI bus (which it really is - AGP is just a modified PCI bus), so AGP cards work fine, you just can't use the GART (graphics address remapping table), which is used for 3D texturing using texture data from system RAM. Support for this is in the 2.3.x kernel, but it doesn't seem to be very well-supported as yet...
  • by demon ( 1039 )
    1) AGP has been in the kernel for quite some time.

    As I said in another post, AGP is not explicitly supported in the current stable kernels, it's just used as another PCI bus (which is largely how the system treats/sees it). 2.3.x includes developing support for the special features of AGP (texture data in system RAM and (maybe?) DMA support).
  • I've used patch with other srcs before and not had any problems, but for some reason I can't figure out how to get it to work with the Linux src. I have the linux-2.2.12 src and the files patch-2.2.14 and patch-2.2.13. What do I need to do?

    lrwxrwxrwx 1 root root 12 Jan 4 17:58 linux -> linux-2.2.12/
    drwxr-xr-x 18 root root 1024 Jan 4 17:40 linux-2.2.12/
    drwxr-xr-x 2 root root 12288 Nov 14 18:28 lost+found/
    -rw-rw-r-- 1 root root 3111595 Jan 4 17:37 patch-2.2.13
    -rw-rw-r-- 1 root root 7269094 Jan 4 17:37 patch-2.2.14
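    For what it's worth, stable-series patches are incremental: patch-2.2.13 takes a 2.2.12 tree to 2.2.13, and only then will patch-2.2.14 apply. Here's a toy sketch of the mechanics - the tree and patch files below are mocked up so the demo can actually run, but the `patch -p1 < ../patch-x.y.z` invocation from inside the source tree is the same as with the real kernel patches.

```shell
# Toy demo of incremental patching. Mock up three "source trees" and
# generate the two sequential patches with diff, then apply them in
# order, exactly as you would with the real patch-2.2.13/patch-2.2.14.
set -e
work=$(mktemp -d); cd "$work"
mkdir linux-2.2.12 && echo "VERSION = 2.2.12" > linux-2.2.12/Makefile
cp -r linux-2.2.12 linux-2.2.13 && echo "VERSION = 2.2.13" > linux-2.2.13/Makefile
cp -r linux-2.2.13 linux-2.2.14 && echo "VERSION = 2.2.14" > linux-2.2.14/Makefile
diff -ruN linux-2.2.12 linux-2.2.13 > patch-2.2.13 || true
diff -ruN linux-2.2.13 linux-2.2.14 > patch-2.2.14 || true
# Apply in order, with -p1, from inside the source tree:
cd linux-2.2.12
patch -p1 < ../patch-2.2.13
patch -p1 < ../patch-2.2.14
cat Makefile    # now reads: VERSION = 2.2.14
```

    Trying to apply patch-2.2.14 directly to a 2.2.12 tree is the usual mistake; patch will report failed hunks because the context lines don't match.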
  • yowch. better be careful there... ever filled /? It ain't pretty.
  • Uptime is an important thing at times. On servers which thousands of users access regularly, you had better not keep the machine down for long. Some admins who don't fully understand what they are doing are all too ready to install the newest software on busy servers, occasionally creating problems. Problems result in downtime, which users don't normally like.
    Sure, if you are running a single-user machine, upgrade by all means, but the only reasons to upgrade real servers are if the new kernel has security patches or possible improvements in the drivers for your specific hardware. Otherwise keep the uptime high; the users appreciate it.
  • Why do you want to make a monolithic kernel (minus the scsi controller)?
    Building a modular kernel isn't that hard, just a matter of putting " make modules ; make modules_install " after your " make dep ; make clean ; make zImage ; ".
    There are several cons to building a monolithic kernel these days. In my experience, the first problem is that, for me, the kernel is simply too big to be monolithic (even with make bzImage).
    That's why I build everything I can as modules.
    They're simple (IMO) and smart. If set up right (usually the case) they are only loaded when a program needs them and unloaded shortly after I turn off the program (my bttv modules do that anyway). That ensures that it only uses the RAM it needs and no more.
    One question though, that SCSI controller isn't the one you hook up your scsi harddrive to (I'm assuming your harddrives are SCSI only..) is it? in which case you'll HAVE TO build support for it in the kernel itself (no modules here).
  • Yeah, you can tell this is just soooo accurate.
    Their #1 uptime system is a HPUX box that's been up for over 100 years.
  • it's a figure of speech meaning essentially: infinity + 1.
    i've seen the expression formulated as "everyone and their [dog|uncle|brother]".

    -l
  • you covered all the reasons for/against upgrading the kernel very well.

    i just want to point out security holes

    with the rule "not unless you need to fix a bug, or gain a feature," you might decide not to upgrade because of, for example, a tulip driver error, since you are not experiencing trouble even when using a tulip card.
    BUT if this tulip bug is a security hole, you SHOULD update even when usual usage is OK. crackers do not do the usual things to systems :)

    disclaimer: take it as advice to newbies.

  • I figured security fixes were lumped in with bug fixes

    exactly. i just "explained" a special case of bug fix to the newbies (i hope).

  • Any suggestions on what I could do to:
    ...
    2) formulate a useful bug report?

    take a look at /usr/src/linux/REPORTING-BUGS (2.2.X).

  • To be honest, you shouldn't worry, as the uptime for Linux boxes in general is as high as that of most other platforms already.
    It is not uptime alone that sells an OS to other people. If you can upgrade your kernel to take advantage of new functionality, then that is more likely to make Linux appear robust, usable, and so on.
  • Nice. People at kernel.org must be really happy now when you've shown THAT link instead of the mirrors-link [kernel.org]. Of course, it might not be mirrored everywhere yet..
  • by Sesse ( 5616 )
    Not only IDE corruption -- filesystem corruption in general. Flushing all blocks would also mean flushing all _dirty_ blocks.

    In addition, I've got some TCP/IP problems I hope 2.2.14 will fix, like returning fd 0 on accept() and generally breaking under high loads.

    /* Steinar */
  • To be fair, AGP DMA transfers aren't supported by Linux, AFAIK, since Intel won't let go of the specs.

    Also note that there are two (or more) AGP specifications that would have to be supported: Intel's, VIA's, and whoever else puts out an AGP chipset.

    jf

  • I've used ATI MACH64 FB driver before (back in 2.1.13X I think). Do you have a different chipset?
  • I figured security fixes were lumped in with bug fixes, but I get your point. :)
    ----
  • VERY VERY difficult, if not impossible ("hot-swappable kernels"). This was discussed some time last year; check the kernel archives. The HURD, being a microkernel, would be a more likely candidate for this.

    my $.02

    Steve
  • My department moved to our new building about a year ago. In that transition I had to take our "first Linux production server" down to move it. I had a 112-day uptime.

    It was plugged into a UPS and I was so tempted to move it without shutting down. Unfortunately the good side of me took over and I shut down the machine. I'd be at about 250 days or so of uptime after the move if it wasn't for the great-popcorn-burning-and-setting-off-the-fire-suppression-system-power-outage of 1999. (Of course, my fellow admins and I were late to the party, as we were at lunch. The modem that is hooked up to the machine that is supposed to page us in the event of a power outage wasn't on the UPS. We maybe would have been back 5 minutes earlier. It wouldn't have really helped us, but it was a bit of egg on our faces.)

    I'm at 71 days on that machine, so I'm happy. I really wanted to shoot for a year....maybe I'll get it this time.
  • Yes, S/390s are very cool - I used to be sysadmin for a UNIX on Amdahl (IBM mainframe clone). It was so fast at I/O that grepping a big file (producing no output) was instantaneous - I thought the grep command wasn't working but it was just going blindingly fast, on a mid-1980s vintage mainframe at that...

    As for Linux on VAX, see http://www.mssl.ucl.ac.uk/~atp/linux-vax/ - there is a port in progress, though right now NetBSD seems a better choice for everyday use.
  • Of course, somebody is hoping to port Linux to the AS/400 - see http://users.snip.net/~gbooker/as400.htm for details.

    Given that the AS/400 has such an unusual architecture, it seems like a very difficult port, far more so than the S/390, but you never know...
  • PPPoE is in the 2.3 series, so I imagine it will be in the 2.4 series. Right now, there are kernel patches to get it to work, which seem to give the least overhead, and there are a few userspace apps. You can find almost everything that's available here. [sympaticousers.org] (The one by David Skoll is apparently the best userspace app.)
  • Wow, I must be some kind of kernel hacking God (kidding) because I have been using AGP with the stable kernel for about a year. Me thinkest thou art mistaken, my friend.
  • "... but I don't know how to make it so that I am using ONLY the latest module..."

    When selecting what to include in or exclude from the kernel, choose "m" to include items as modules.

    After you do "make zImage" or "make zlilo" or "make vmlinux" or whatever your choice for building the monolithic part of the kernel, issue "make modules" to build the modules, and then "make modules_install" to install all of the modules. If you don't already have this version of the kernel/modules installed, you've got nothing to worry about here--2.2.14's modules will all be in /lib/modules/2.2.14; when you boot/run with 2.2.14, the only modules that get loaded will be the ones from that directory.
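    The version-isolation point can be seen in miniature with a toy sketch - the paths below are created under a scratch directory rather than the real /lib, and the `running=` variable stands in for what `uname -r` would report after rebooting:

```shell
# Each kernel version keeps its modules in its own /lib/modules/<version>
# tree, so a running 2.2.14 kernel only ever sees 2.2.14's modules.
set -e
root=$(mktemp -d)
mkdir -p "$root/lib/modules/2.2.13" "$root/lib/modules/2.2.14"
touch "$root/lib/modules/2.2.13/old_driver.o"
touch "$root/lib/modules/2.2.14/new_driver.o"
running=2.2.14    # stand-in for `uname -r` after booting the new kernel
# Only the directory matching the running kernel is ever consulted:
ls "$root/lib/modules/$running"
```

    This is why two installed kernel versions can coexist safely: neither one's module loader will ever look in the other's directory.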
  • Dude, you wouldn't happen to have the ext3 instlal instructions?
    would you?
    i lost mine, and linux.org.uk seems down
  • Which "all those smaller upgrades" are you talking about?

    The only upgrades that cause me to bring a system down are kernel upgrades and hardware upgrades. I can do all other upgrades without rebooting the machine.

    If my kernel supports my hardware, and there's not some major bug in it, what's the point of upgrading it? If my hardware is capable of doing the job, what's the point of upgrading it?

    If it works, don't fix it...
  • by BJH ( 11355 )

    Not particularly, asshole. I was correcting his acronym error, not indicating that USB support wasn't available yet.

    BTW, USB support is included in the standard 2.2 kernel. Look under linux/drivers/usb.

    PS: Go crawl back under your rock and die, you stupid piece of shit.
  • Theoretically it may be possible. It's not possible from a practical point of view.

    Think about what you'd have to code if an internal structure changes (this happens a lot!). It would add too much code just going from one version to the next. I think the effort of implementing such a thing doesn't compare to the few minutes of downtime that you'd have.

    If your uptime is really that sacred, then don't upgrade. Apparently your system is already stable enough that a reboot for a kernel upgrade is considered harmful. This implies that the current version of the kernel already suits your needs in terms of stability. Then there is no reason to upgrade anyway.

    If security or stability is an issue, then you should take the time to reboot. Consider the time you'd lose because someone cracks your system, or because of unnecessary instability, and you'll know why.

    Regards,

    Marco.
  • Well, the new kernel might just contain some security related fix, in which case it is recommended that you upgrade... in my experience, I've always upgraded kernels shortly after release, just so I can have the newest one, just in case. Have yet to see a bug in any of the kernels, though... I've been working with Linux from 2.0.29 up to now.
  • I know that some of the S/390s come in massively parallel configurations - like 12. Well, OK, it's massively parallel for me. Does the kernel have a limit on the number of processors? I thought that it was four per box.
  • Thanks. I just hope that everyone realizes that not everyone is a well established junkie yet - I knew where to get them but maybe some folks did not.
  • A few weeks ago I went into work to find the local paper with a front page article stating the utility company of this small town was going to upgrade the substations

    You know, our local power company needed to "upgrade" our area again a few weeks ago. Never mind that they'd done an all-night (1am to 6am) outage less than 6 months before; this time they decided it would be more convenient to shut power down at 5am on a Saturday! Allowing for up to 12 hours to do the upgrade, of course! Surprise surprise, they started an hour late and barely got the power back up before 6pm... when it was already getting dark.

    There's not a whole lot you can do at home on a Saturday with no freaking power. Needless to say, no one in the neighborhood is happy with them right now. God forbid they might schedule an outage of this length for either a weekday when people will be at work, or better yet, during the night again when the impact to people will be minimal.

  • by Anonymous Coward
    No, in this case it makes sense. It's more of a motherboard chipset thing than a video card thing. AGP lets you transfer large amounts of texture memory (or something like that) over the AGP bus, using main memory as texture memory. You can't do that without changing the kernel (or writing a module). It's not a graphics library issue.
  • by Gleef ( 86 )
    As far as I can tell, AGP support has been in the kernel (as part of the PCI driver) since at least version 2.0.34, dated 1998-07-04. So it's been in there for at least a year and a half. Whatever problems you're having with your webcam, it should have nothing to do with "missing" AGP support.

    ----
  • The original poster was complaining about AGP support because he wanted to use his Webcam. Digital cameras give simple 2D images, since when does this require kernel-level GART support?

    ----
  • by whoop ( 194 )
    For all of you that say AGP cards work in the kernel already, that's only half true. The Utah GLX [openprojects.net] has a kernel module that lets you use all the nifty fast memory transfers over the AGP interface. Sure the kernels can run AGP video cards, but this makes them much faster (or so I understand). The patch was already merged into the 2.3 kernels, but not the 2.2 ones. You can get it from the project's CVS server.
  • by whoop ( 194 )
    Was the error something about an "event" variable? They changed that in the fs code to like global_event.

    I made up a patch [mtco.com] based on 2.2.14pre18. It applies and compiles cleanly with this final 2.2.14. I took out some of the kdb stuff, because well, I don't care about debugging it. :) I guess the usual caveats when not dealing with the real author of something like this apply. But I've been up for almost three hours now with it (and several weeks in the 14pre kernels), and all filesystems are working just peachy.
  • by whoop ( 194 )
    mount /dev/whatever /mnt
    dd if=/dev/zero of=/mnt/journal.dat
    ls -i /mnt/journal.dat
    umount /mnt
    mount /dev/whatever /mnt -t ext3 -o journal=

    The -o journal= is only needed the first time. Afterwards just edit /etc/fstab and put an appropriate entry for the mount point and set it to ext3. Hmm, I forget now what the kernel parameter was to do this for the root partition. Ah well, I guess you'll just have to wait for their site to come back, or find a mirror. Maybe Google has it cached...
  • It is not a difficult task to patch/configure/compile/replace a kernel, but the sequence of tasks is rather exacting.

    What you state is indeed true, _however_... Most newbies, in the US at least, run RedHat. RH, like the majority of distros, does not ship stock kernel sources. This can cause some parts of a patch to fail. Not necessarily fatally, but certainly enough to be disquieting to a newbie. I would suggest either downloading the full sources _or_ waiting for RH to release an updated kernel rpm, the former being the preferred method. Patches are a good thing once you have pristine sources to run them on.
  • Supermount isn't integrated into 2.2.14, however I have the patch mirrored here:

    http://www.fargocity.com/~ccondit/supermount-2.2.14-1.patch [fargocity.com]

    This has been modified from the original supermount patch to patch cleanly on 2.2.14-final (md.c failed before).
  • The power outages seem to win...

    A few weeks ago I went into work to find the local paper with a front-page article stating the utility company of this small town was going to upgrade the substations. I had a spare car battery hooked to the UPS and thought it would ride through. Uh huh... looking at the logs, it looked like the battery was half an hour shy of the four hours of no power.
  • They were a big problem around 2.2.10, were claimed to be fixed in 2.2.11, but happened for me and a few others as late as 2.2.12. Hopefully they're all ironed out by now.

    --
  • I saw on Kernel Traffic that there were some freaky filesystem corruption problems in the 2.2.x series... and I think I may have experienced this. Anyone know if this has been nailed down yet?
    ----
  • Checklist posted earlier was for 2.3.x -> 2.4

    This article was about the 2.2.14 release. :)
    ----
  • What's the purpose of upgrading the kernel?

    To either fix bugs, or to gain features

    is that something you need to do a lot?

    not unless you need to fix a bug, or gain a feature. :)

    if your system is running ok, do you still need to upgrade?

    See above.

    I'm not actually planning on doing it, as I'm not proficient enough to recompile and replace a kernel yet, but I need to know how soon I have to try and take this step.
    You should look at the kernel HOWTO at www.linuxdoc.org/HOWTO [linuxdoc.org] - it's a bit daunting at first, but it's really not that difficult. The main reason to recompile your kernel is to tailor it exactly to your needs, removing all the cruft that doesn't apply to your system. Plus, as someone said, you'll never be a real Linux user until you do. :)
    ----

  • I'm always a bit afraid to send in a bug report, because I'm not sure I know enough to put together a good one, and I don't want to increase the static on the list...

    I did get a kernel oops that seemed ext2 related, and sent it to the ext2 maintaner, but got no reply.

    Any suggestions on what I could do to:

    1) be sure this really was a bug, and
    2) formulate a useful bug report?

    This last time, I got a whole slew of:

    Jan 1 10:44:05 Lager kernel: EXT2-fs error (device ide0(3,7)): ext2_add_entry: bad entry in directory #11873: rec_len is smaller than minimal - offset=0, inode=11873, rec_len=6, name_len=1

    entries in the log, and there were in fact several problems when I fsck'd. (This was with 2.2.13) Also, interesting (but probably unrelated) datapoint... it was the postfix files/directories that were damaged each time...
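    Incidentally, that "rec_len is smaller than minimal" complaint is simple arithmetic over the ext2 on-disk directory entry: an 8-byte fixed header (inode, rec_len, name_len, file_type) followed by the name, rounded up to a 4-byte boundary. A sketch of that rule (derived from the ext2 format description, not pasted from the kernel source):

```shell
# Minimum legal rec_len for a directory entry with a given name_len:
# 8-byte header + name, rounded up to a multiple of 4.
min_rec_len() { echo $(( (8 + $1 + 3) & ~3 )); }

min_rec_len 1    # prints 12
```

    So for the logged entry, name_len=1 requires rec_len of at least 12, but the on-disk entry claimed rec_len=6 - which is why the kernel flags the directory as corrupt.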
    ----
  • Always send to linux-kernel@vger.rutgers.edu

    Ok, ok, I get it! :) I promise to do that next time. As I said, though, I wanted to make sure that I didn't bother the list with something unimportant, and it's hard for me to tell sometimes...

    Compiling 2.2.14 as we speak....
    ----

  • Let me get this straight; you think somebody will replace your kernel with whatever they want, reboot, and the uptime will be your tipoff?

    And you think this hypothetical hacker will be savvy enough to code holes into the kernel (as opposed to into the utilities, such as login, which authenticate users), but won't be savvy enough to fake the uptime?

    Uhm, sure, yeah, that'll happen. You'll be warned of it by the monkeys flying out of my butt.

    Yes, in that extremely bizarre, contrived case, it would be a security hole. However, if we're assuming the attacker can replace the kernel with one of his choosing, *EVERY* Unix-like OS has that hole. They have the hole now.

    If your system is that owned, you shouldn't be trusting what any piece of software on it tells you, not even the BIOS if you're flash upgradeable.
  • Hmmm. That's bad. However, that description isn't a very good bug report. I suggest you mail a careful description of what system you have and what your problems are: compile-time problems, missing/non-working drivers, file corruption, hangs/oopses/panics, etc.

    We do all we can to fix bugs, but we can't fix them unless people report them (OK, we sometimes do when we stumble over silly code by accident, when overhauling code, or when we have little else to do, but those cases are not in the majority).

    linux-kernel@vger.rutgers.edu is the place where you report your problems. Good luck!

  • by BJH ( 11355 )

    AGP? Methinks you mean USB...

    Boy, your face must be red right about now ;)
  • Wouldn't it be possible to make a VM that boots the new kernel, then copies itself into the older kernel's memory space?

    Yes, I guess that would work. The main problem would be getting hold of the other kernel's memory - you wouldn't have to do any copying at all, but you have to get the memory from somewhere. Presumably you'd be running under VMware, or better, an open-source VM OS, and there would be some kind of API for sharing memory among the various client OSes. You'd use that to recover the memory from the halted OS. Then figure out how to hot-swap your VM OS and you're done. :-)
  • Wouldn't it be possible to make a VM that boots the new kernel, then copies itself into the older kernel's memory space? That way, supposing everything went smoothly, no reboot would be needed? Just curious if that's a possibility or not. It'd have to be handled in the kernel, so as to allow itself to be overwritten... memory protection seems like it would be a hassle for anything but software inside the kernel.
  • but some stuff is still broken (for my set-up, anyhow).

    I don't think that checklist posted earlier was completed.

    *sigh* i guess that'll be worked out in the 'ac' patches...

  • List of Kernel Mirrors [kernel.org].

    KernelNotes.org [kernelnotes.org] has changelists and things but hasn't been updated for 2.2.14 yet.

  • It's good for a home system, average for a server. It's almost painful to have to reboot a box with over 110 days of uptime just to put it on a UPS...
  • Measure uptime in terms of processor cycles, so that having a faster processor that lets you reboot faster is less of an advantage...
  • AGP has been supported in the stable kernel for many moons now, my friend - why don't you check again?
  • I agree that you should never perform a necessary upgrade just to preserve an uptime value, but I strongly disagree with your claim that a long uptime is meaningless.

    A long uptime on an active system is proof that the system doesn't have significant resource leaks. The same logic applies to systems that don't require frequent disk "defragmentation", etc.

    Resource leaks, by themselves, aren't dangerous beyond the fact that they force you to reboot the system to recover the lost resources... but they are excellent indicators of the overall quality of the software. In my experience, programs with significant resource leaks have *always* had an unusually large number of other bugs... and when someone has tried to eliminate the leaks just to shut up a noisy boss, they've ended up fixing a large number of unrelated bugs.

    Note I did not say that they "found" those bugs - many of the bugs were due to wild pointers that simply disappeared once the programmer took care to properly manage their resources. IMHO, the second most powerful bug-finding strategy, after fixing all warnings issued by the compiler, is elimination of resource leaks.
  • 1 44 days, 04:36:04 | Linux 2.2.9 Sat Aug 21 18:58:10 1999
    -> 2 39 days, 20:21:28 | Linux 2.2.13 Thu Nov 25 19:50:22 1999

    So, you see, I am a mere 5 days from beating my 2.2.9 record (at which point I rebooted to use a upgraded kernel), when they go and release 2.2.14. Those bastards.

    Ah well, since devfs for 2.2.14 isn't out yet (AFAIK), it'll be a while before I go and upgrade :-)
    (Linus PLEASE put Devfs in 2.4)
    ---
  • Up that number by five or so. (We've got at least that many /. readers here.) I've got two of them thirty feet from me. I won't get to play with them as intimately as I'd like to (I'm the 'low-end' guy, and we're only holding them for resale), but they won't leave without a 'Runs Linux' sticker over the shrink wrap.

    I really wish someone would port Linux to one of the older-rev AS/400s... I've had a few chances to snag one off of a scrap deal, but w/ no OS they're useless to me..
  • I've got a couple of boxes with MediaGX processors that I bought for $50 apiece... not as good a deal as I'd originally thought, considering how f*cked up they are. I know Alan Cox has done some work with MediaGX processors to deal with some of the problems. One of the Linux Grrls [linuxgrrls.org] gave me a pretty specific explanation of the problem (Thanks Kira :) Now I'm wondering if there will be better support in the 2.4 kernel.

    The 2.2 kernel boots and runs fine until I try to do something radical like startx. 2.3, however, detects that it's a MediaGX at boot-up, but locks up after 'checking the hlt instruction.'

    numb
  • Wow, I can't believe that Tom Christiansen actually posted stuff about this.

    If you'll follow the link that he gave to Computer, you'll see that Dr. Daniel Cooke is listed as one of the guys who wrote the article interviewing Ken. As I said in a previous post, Dr. Cooke received permission from Computer and Bell Labs to show SPUUG (South Plains Unix User Group; Texas Tech's little group that I coordinate) an unreleased video of him interviewing Ken in person. It contains a lot of material different from the printed version.

    Also, Dr. Cooke mentioned to me several times that he received QUITE a bit of flame from the Linux community when that stuff that Ken said was printed, even though he was just the one who wrote it up for Computer.

    Again, I can't believe Tom actually posted about this stuff. Wow. Let me just say that backward: woW.

    -Will the Chill
  • Sure :)
    Course not, I just typed it because I thought you guys wouldn't know what Holland is if I didn't mention tulips and wooden shoes :)
    If you ever visit the Netherlands, don't be surprised to see 0.000001% of the population with wooden shoes and 500 other types of flowers in the flower shops :)
    I've never really tasted flowers, but some people say they are great. I personally prefer steak with french fries ;-)
    Thank you for your reply, it now is perfectly clear to me that Americans (or wherever you're from) just believe anything a foreigner says ;-)
  • One very important thing in 2.2.14 is telephony support.

    This could be an area where Linux could really shine. Telephony is all about reliability and can be very price-sensitive, especially when you are trying to put together systems for small companies.

    I should know - I work for a company producing exactly this kind of system, unfortunately on NT :-(

    For more details see this article [linuxtelephony.org] on LinuxTelephony.
  • The way I see the Linux kernel, as long as your video card works well for you, the kernel doesn't need to mess with it much - leave that to the graphics libraries. That's why you don't see much of anything about graphics in the kernel configuration. About the only thing the kernel might have to do with AGP is provide access to the hardware for other programs, but isn't that why there are graphics libraries? I'm no kernel or Linux expert though, so this is just what I think, more than likely not the real facts.
  • What is sad is that sometimes, as our gurus get older, the shine on their accomplishments grows dull with their new, wrong-headed ideas.

    Look at Bobby Fischer's anti-semitism, or Einstein's belief in a static universe, etc.

    Someday Linus will be 50 or 60 and he'll say that the hot new idea is a piece of crap.

    It seems to me that people are first anonymous, then we find out about them from some great accomplishment(s), and then they become yesterday's news when the environment that created them changes.

    If his comments about Linux being worse as a firewall than Windows are actually attributable to him, then it is obvious that, at least on this topic, he is a walking, breathing anachronism.

  • When does AGP get put into the stable kernel? And is the unstable one any good? I WANT MY WEBCAM =)
  • by hensley ( 275 ) on Tuesday January 04, 2000 @01:39PM (#1405253) Homepage
    Two ways:

    cd into /usr/src/linux

    patch -p1 < /path/to/patchfile (once for each patch)

    or, easier and automated:

    cd into /usr/src (or wherever your patches are)

    run /usr/src/linux/scripts/patch-kernel (applies all patches in the current directory)
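    For anyone wondering what the -p1 actually does: it strips the first path component from the filenames recorded in the diff, which is why kernel patches are applied from inside the source tree. A toy demonstration with made-up files (not a real kernel patch - just the same mechanics):

```shell
# Build a tiny "source tree", a unified diff, and apply it with patch -p1.
set -e
mkdir -p a b tree
echo "version 2.2.13" > a/VERSION
echo "version 2.2.14" > b/VERSION
# diff exits 1 when the files differ, so keep set -e from aborting here.
diff -u a/VERSION b/VERSION > update.patch || true
cp a/VERSION tree/VERSION
# -p1 turns "a/VERSION" in the patch header into plain "VERSION", the same
# way "linux/Makefile" in a kernel patch becomes "Makefile" inside the tree.
(cd tree && patch -p1 < ../update.patch)
cat tree/VERSION    # now reads "version 2.2.14"
```

    The scripts/patch-kernel helper mentioned above essentially automates this same call for each patch file it finds, in sequence.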
  • by strredwolf ( 532 ) on Tuesday January 04, 2000 @11:25AM (#1405254) Homepage Journal
    us.kernel.org doesn't have it, but tux.org does. It's a 1.3 meg patch.

    ---
    Another non-functioning site was "uncertainty.microsoft.com." The purpose of that site was not known. -- MSNBC 10-26-1999 on MS crack
  • I've said it before, and I'll say it again; what we need in order to put a stop to this whole stupid argument for all time is a writeable /proc/uptime.

    Let people fudge their damn uptime and all the BS will stop.
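    For the record, the number everyone brags about comes straight out of /proc/uptime: two counters, seconds since boot and cumulative idle time. A rough sketch of how a tool like uptime(1) turns the first field into days and hours (assumes a Linux-style /proc):

```shell
# Read seconds-since-boot from /proc/uptime and format it the way
# uptime(1) does: "up N days, HH:MM".
awk '{ secs = $1
       printf "up %d days, %02d:%02d\n",
              secs / 86400, (secs % 86400) / 3600, (secs % 3600) / 60
     }' /proc/uptime
```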
  • by tao ( 10867 ) on Tuesday January 04, 2000 @11:31AM (#1405256) Homepage
    I'd wager that *IF* AGP gets into v2.2.xx, it'll be in a fairly late one. The code isn't fully finished in v2.3.xx, if I'm not all wrong.

    When it comes to the experimental patches, everything depends on the hardware you have. I've been running v2.3.xx kernels on my trusty IBM PS/2 9556slc2 (SCSI) without any trouble whatsoever (apart from having to patch the ibmmca.c SCSI driver in an ugly way); others report lots of trouble.

    v2.3 is good, no doubt about that; lots of new interesting stuff, but it isn't feature-complete. Lots of stuff that works in v2.2 is broken in v2.3, such as some of the filesystems (but as long as ext2 and FAT work, I'm happy), and the pre-patches are sometimes hard to get to compile.

    Finally, more often than not, at least some platform is broken. Sparc seems to have had the most problems, but those were fixed up just a while ago, if I'm not all wrong.

    If you have important data on your disk or need 24/7 uptime, use v2.2.14. If you don't fall into either of those categories and have hardware not supported in v2.2.xx, try out v2.3; it'll be a nice experience, and we need all the bug-testers we can find.

    Good luck!

  • by tao ( 10867 ) on Tuesday January 04, 2000 @11:37AM (#1405257) Homepage
    There has been a lot of fixing of wicked corruption going on for v2.2.14; most of the cases were very special, but some might have affected larger groups of users. I suggest you upgrade to v2.2.14; it's been tested for a loooong time now, and it seems really stable.

    What you should remember is that if you suspect file corruption, please send a bug report with a careful description (hardware, kernel version, etc.) to linux-kernel@vger.rutgers.edu. More often than not, such reports can be of great help to us when we try to find bugs. This of course applies to other kernel-related bugs too (hangs, etc.)

  • by tao ( 10867 ) on Tuesday January 04, 2000 @12:12PM (#1405258) Homepage
    Well, if this bug repeats itself with v2.2.14, you should submit another report. However, even if you send a report, always carbon-copy LKML (the Linux Kernel Mailing List); not every bug that seems to relate to one thing actually does, and sometimes the maintainer is away, misses one or two messages in his (her?) inbox, or whatever.

    Remember that the stream of mail to most kernel hackers is huge, so if you just send them personal mail, they may simply procmail it away. You never know for sure. Always send to linux-kernel@vger.rutgers.edu as well (my, this must be the 3rd or 4th time I've written that address here on Slashdot today...). Others may be experiencing the same problems.

    Remember, without bug-reports we don't know about the bugs...

  • by Rayban ( 13436 ) on Tuesday January 04, 2000 @11:27AM (#1405259) Homepage
    The changelist will be appearing here [kernelnotes.org] at some point in the future.

    Hopefully soon :)
  • The kernel is so out of date that any random script kiddie can grab an exploit or buffer overflow from bugtraq and root the system, obviously not a Good Thing if your computer is running any sort of critical task.

    I think you missed something important. Remote and local exploits come from userland programs - bind, pop3d, etc. The kernel might have DoS problems, but AFAIK there are no remote root exploits for the Linux kernel itself.

  • by Leos Bitto ( 37008 ) on Tuesday January 04, 2000 @03:36PM (#1405261)
    Yes, NFS is better in 2.2.14. That's why I am running 2.2.14pre14 on our production boxes - I rely on NFS a lot.
  • by spinkham ( 56603 ) on Tuesday January 04, 2000 @11:33AM (#1405262)
    From what I have read, 2.2.13 fixed all that pretty well.
    I haven't followed up on that recently though, so I may be wrong..
  • by Inoshiro ( 71693 ) on Tuesday January 04, 2000 @11:58AM (#1405263) Homepage
    2.2.13 had odd IDE corruption issues. I like not having to worry about my IDE drive corrupting itself...

    On another note, they also made the memory buffer hash table more efficient. I also like speed :-)
    ---
  • by Will the Chill ( 78436 ) on Tuesday January 04, 2000 @11:39AM (#1405264) Homepage
    It's called respect. I have a very high amount of respect for Rob, and others like him. The very simple fact that many geeks today aren't able to find suitable role models in their everyday lives lends this argument even more credibility. I will accept, to a certain extent, Rob's posts as pretty authoritative. I've read /. for years, and can honestly say that I agree w/ pretty much everything the guy posts. Is it so bad that I happen to share roughly the same opinions w/ someone who is substantially more noteworthy than myself? It's not always about being a follower, you know...

    -Will the Chill
  • by billh ( 85947 ) on Tuesday January 04, 2000 @11:52AM (#1405265)
    Uptime is just that: a measure of how much time has elapsed since the last reboot of the system. It does not measure any of the following things:

    -Superiority of an operating system

    -Ability to administer a computer

    -Programming skill

    -"Eliteness/coolness"

    Let us take this point by point:

    Superiority - You are correct. I've had Windows NT and even 95 boxes up for months at a time.

    Administration ability - depends on the circumstances. I have a colocated web server that I have been working on quite heavily since I installed it, and I haven't been within 30 miles of it since it was turned on 50 days ago. Uptime is currently at 50 days.

    Programming skill - has nothing to do with uptime

    Eliteness/coolness - while not quite the same thing, I am very close to closing a business deal with someone that I have been trying to get to sign on with me for months. In the end, it was the uptime that mattered. Or, more specifically, the fact that the machine didn't flinch during a live load test (real content, real users, no simulations) with this person present. The uptime is like a victory -- you can point to it frequently, and then show someone your logs to prove that your machine can do what you say.

    Uptime == bragging rights in some circumstances.

  • by Wizard of OS ( 111213 ) on Tuesday January 04, 2000 @11:22AM (#1405266)
    Ok, I admit, I submitted. It could be the fact that I'm from Holland (nope, not Michigan, just that little country somewhere in Europe where they wear wooden shoes and eat tulips ;) and I don't understand the expression.
    AFAIK my uncle doesn't even know where the power switch of his son's computer is, so I don't think he submitted a post about a new kernel ;)
  • by RainBrot ( 128651 ) on Tuesday January 04, 2000 @03:27PM (#1405267)
    How many Linux kernel bugs have there been that allowed users to gain root access? How many were fixed between 2.2.13 and 2.2.14?

    Some high-availability (am I using the right term there?) systems actually have uptime requirements (such as "we can only be offline for ten minutes every month") that make it risky to upgrade with every new kernel. Particularly since new kernels can introduce new bugs.

    My point is that it can be irresponsible to upgrade without knowing what the upgrade does, just as it can be irresponsible to not upgrade.

    All that aside, not everyone is running mission critical servers. Some people use their computers for fun, and long uptimes can be a source of amusement.

    I personally have two systems with long uptimes, and I will not be upgrading them. They're non-critical systems, and not worth messing with. Besides, I like to see how long it's been since the last power failure. :)
  • by Millennium ( 2451 ) on Tuesday January 04, 2000 @01:06PM (#1405268)
    I remember Alan saying at one point that he was considering adding the current ext3 sources into the kernel. Anyone know if he's done this yet, or will that be going into the 2.3 tree?
  • by JoeBuck ( 7947 ) on Tuesday January 04, 2000 @01:40PM (#1405269) Homepage
    For machines behind firewalls that are performing some task without any problems, it's best in many cases to just leave 'em alone and let them rack up the uptime.

    On the other hand, for a visible machine with a static IP address, hosting web pages or other advertised services, you have to keep ahead of the script kiddies. But not all machines are in that category, far from it.

  • by tao ( 10867 ) on Tuesday January 04, 2000 @12:03PM (#1405270) Homepage
    Something that obviously passed many by is that v2.2.14 introduces a new platform: IBM's mainframe series of computers, the S/370 and S/390. It's highly improbable that more than maybe ten or fifteen readers of Slashdot have even seen one (I have; we got an offer to get one, but had no IPI-3 disks for it, and no OS; at that time the port didn't exist yet...), but they are basically very different and cool machines.

    Have a look at IBM's homepage and search around for some information on them. They have BANDWIDTH.

    This is at least cool, if nothing else. Now if only someone could port Linux to VAX, things would be chilling.

  • by delld ( 29350 ) on Tuesday January 04, 2000 @11:59AM (#1405271)
    Now that Win2000 is supposedly coming out, and it supposedly needs fewer reboots, Linux uptime counters are going to have much more competition. Therefore, I call for hot-swappable kernels! I do not want to stop what I am doing just to upgrade (or downgrade) my operating system! I want an uptime measured in decades!
  • by coyote-san ( 38515 ) on Tuesday January 04, 2000 @03:15PM (#1405272)
    Old kernels are still important, for several reasons:

    1) they are well tested
    2) the C library for that kernel is well tested
    3) the programs for that library are well tested

    The importance of this can't be overemphasized. There are a lot of situations where it's much more important to work with a known quantity than to get the ultimate bit of performance or flexibility.

    It's worth noting that one of the most damning complaints against Microsoft as an "enterprise class" OS & application suite is the fact that they have repeatedly demonstrated a cavalier attitude towards making big changes in a way that forces users to upgrade everything to fix a single bug in the kernel (e.g., Win95->Win98) or application (e.g., Office file formats).

    That's why Linux, and all real enterprise-ready OSes, allow fairly independent maintenance paths for all major versions of the kernels/libraries/applications. It's a bit more work for the developer, but it's critical when you're talking about systems which *must* remain up. (E.g., if a hospital's systems go down due to an unexpected bug in an upgraded OS, patients may die. If an airline's systems go down due to an unexpected bug, it can lose millions of dollars in lost bookings and contractual penalties for delays.)
  • by Eimi Metamorphoumai ( 18738 ) on Tuesday January 04, 2000 @11:57AM (#1405273) Homepage
    Rob said so. If Rob told you to jump off a bridge would you do it?

    I wouldn't be able to get anywhere near it. It would be /.'d to capacity. A total of maybe a foot difference between the height of the bridge and the pile of geeks next to it.

  • by mwillis ( 21215 ) on Tuesday January 04, 2000 @12:08PM (#1405274) Homepage
    FYI - if you want the changelog for 2.2.14, just look at the changelogs for the last 2.2.14pre kernels. Linux Today has a copy here:
    [linuxtoday.com]
    http://linuxtoday.com/story.php3?sn=14481

    It is a fairly long list of things. The S/390 port is there. Some nice-sounding bugfixes are there, so I'll probably recompile tonight. Also, supposedly it should now compile fine with gcc 2.95.
  • I disagree with you; my experience is that I only need to reboot if something goes terribly wrong or if I want to upgrade a `core' part of the system. Therefore one can say that operating systems with a rather low average uptime either are upgraded a lot or crash a lot. I think the latter still has the greater influence.

    Of course, not all systems run under the same conditions: Windows computers are probably more often turned off at night than VMS systems, SunOS is usually used on high-end hardware while Linux often runs on crappy hardware, and OpenBSD systems probably have better admins than Linux systems (no offense, but most Unix newbies tend to use Linux, not *BSD). But still I dare say that uptime is a really good measure of the stability of an operating system.

    Apart from that I agree with the fact that one should not fail to upgrade because one wants to get the highest uptime possible. On the other hand, people shouldn't upgrade when there's no need to; if there are no new features/fixes in the new kernel which apply to your system, don't upgrade :)

    Check http://www.uptimes.net [uptimes.net] for a list of uptimes per OS. There are about 500 hosts in the list, so it ought to give a rather clear view of the situation.

  • by Tom Christiansen ( 54829 ) <tchrist@perl.com> on Tuesday January 04, 2000 @12:53PM (#1405276) Homepage
    Isn't he the one who says that Linux is a piece of shit? Sounds like a great Slashdot role model to me!
    Ken *invented* most of what you know as Unix and C. (It's fun to watch him and Dennis both disavow ownership and point at each other. :-) Without Ken, we wouldn't have Unix, and we probably wouldn't have C. And we most certainly wouldn't have Linux. If Ken said this, then I'm completely certain that he could have backed it up. But I don't recall having read anything by him that referred to Linux so scatologically. Please don't spread gossip and rumor, allowing idle speculation to blossom into bitter invective against a man the likes of whose genius you seldom meet in one lifetime. Always get the exact quote and context.

    [...time passes...]

    Alright, here you go. Read this, which I got from IEEE Computer Magazine [computer.org]:

    Computer: In a sense, Linux is following in this tradition. Any thoughts on this phenomenon?

    Thompson: I view Linux as something that's not Microsoft - a backlash against Microsoft, no more and no less. I don't think it will be very successful in the long run. I've looked at the source and there are pieces that are good and pieces that are not. A whole bunch of random people have contributed to this source, and the quality varies drastically.

    My experience and some of my friends' experience is that Linux is quite unreliable. Microsoft is really unreliable but Linux is worse. In a non-PC environment, it just won't hold up. If you're using it on a single box, that's one thing. But if you want to use Linux in firewalls, gateways, embedded systems, and so on, it has a long way to go.

    Delving deeper, we have this article [linuxtoday.com] by Eric Raymond in Linux Today, in which he clarifies what Ken said, as follows:
    The best news, I guess, is that Ken says he didn't intend to write off Linux itself as simply an anti-Microsoft backlash; what he was trying to say was that he believes the recent popularity of Linux in the press is an anything-but-Microsoft phenomenon. He adds ``i very much appreciate the chance to look at available code when i am faced with the task of interfacing to some nightmare piece of hardware'' and that ``i think the open software movement (and linux in particular) is laudable.''

    Ken further adds ``i dont see eye-to-eye with microsoft's business practices.'' His original language was rather stronger and more entertaining, but he asked me not to quote that in order to avoid giving Lucent's lawyers heart failure.

    The bad news is that Ken still thinks Linux is flaky. I offered to have VA Linux Labs ship him a machine so he could see what a properly tuned modern Linux looks like, but he said he couldn't accept. He adds ``i do believe that in a race, it is naive to think linux has a hope of making a dent against microsoft starting from way behind with a fraction of the resources and amateur labor. (i feel the same about unix.)''

    I cited all the case studies and trend curves and statistics you'd expect me to. He didn't respond directly to those, but I hope I at least gave him some things to think about.

    Ken did finish by saying ``i must say the linux community is a lot nicer than the unix community. a negative comment on unix would warrent death threats. with linux, it is like stirring up a nest of butterflies.'' (Hm. Butterfly T-shirts, anyone?)

    The really bad news, of course, is that Ken was wrong about the volatile and irrational reaction by the members of the Linux community against those who cast aspersions on the current state of apotheosis of Linux--or of the FSF, for that matter. This kind of thing most certainly does happen, as all here can doubtless attest. So much for the good old days.
  • by Signail11 ( 123143 ) on Tuesday January 04, 2000 @11:29AM (#1405277)
    "If your uptime isn't to sacred to ya, it may be worth upgrading."

    Uptime should *never* be sacred to any computer user, in the sense that preserving a high uptime should not preclude one from installing a necessary software or hardware upgrade. What is important is that an operating system has the ability to run stably for extended periods of time such that the use of the computer is not impaired. I've known quite a few users who claim ridiculously high uptimes (i.e. > 1 year). At that point the kernel is so out of date that any random script kiddie can grab an exploit or buffer overflow from Bugtraq and root the system - obviously not a Good Thing if your computer is running any sort of critical task.

    Uptime is just that: a measure of how much time has elapsed since the last reboot of the system. It does not measure any of the following things:
    -Superiority of an operating system
    -Ability to administer a computer
    -Programming skill
    -"Eliteness/coolness", whatever that is

...there can be no public or private virtue unless the foundation of action is the practice of truth. - George Jacob Holyoake
