Linux Software

Linux 2.4.16 Released 317

tekniklr writes: "They just released Kernel 2.4.16. Download it here, and you can read the changelog here. This hopefully fixes the bug in 2.4.15 that corrupted filesystems on unmount." Update: 11/26 14:14 GMT by T : p.s. Don't forget to look in the mirrors.
  • Linking (Score:5, Informative)

    by jeriqo ( 530691 ) <jeriqo@unisson.org> on Monday November 26, 2001 @09:08AM (#2613008)
    Current bandwidth utilization 96.75 Mbit/s

    Out of 100 Mbps...

    Linking directly to the .tar.gz from the slashdot homepage was not a good idea, timothy.

    You should have pointed to the mirrors [kernel.org], instead:
    • They should've linked to the .tar.bz2 ;)
      • Re:Linking (Score:2, Informative)

        by peloy ( 26438 )
        Actually they should be linking to the patch (patch-2.4.16.bz2) rather than to the full tarball.
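
        For anyone applying the incremental patch by hand, the usual sequence is roughly as follows (the directory name depends on where you unpacked the 2.4.15 tree):

            cd linux                                # the unpacked 2.4.15 source tree
            bzcat ../patch-2.4.16.bz2 | patch -p1   # apply the incremental patch, then rebuild as usual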
    • Re:Linking (Score:3, Interesting)

      by Draoi ( 99421 )
      ..... considering that the patch [kernel.org] is less than 6KB. This has to be a record for the smallest kernel release increment yet! (How many people out there are opting to d/l the whole 26MB package? 8-b )

      Pete C
      • Re:Linking (Score:5, Funny)

        by jeriqo ( 530691 ) <jeriqo@unisson.org> on Monday November 26, 2001 @09:55AM (#2613185)
        Considering that the people who downloaded the 2.4.15 kernel got their filesystems corrupted, I guess they will have to re-download the whole 26MB package :)

        -J
      • Re:Linking (Score:2, Informative)

        by psamuels ( 64397 )
        ..... considering that the patch is less than 6KB.

        Pedantically speaking, the patch is 17330 bytes long. It compresses to under 6KB.

        This has to be a record for the smallest kernel release increment yet!

        Actually that would be 1.0.9 [kernel.org], at 2678 uncompressed bytes.

        (Not counting pre-1.0 releases, or -pre* releases, or 2.3.0 or 2.5.0 which are just version number changes.)
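
        If you want to check the numbers yourself (assuming you have the patch downloaded locally):

            ls -l patch-2.4.16.bz2            # compressed size on disk
            bzcat patch-2.4.16.bz2 | wc -c    # uncompressed size in bytes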

    • How about making the main site HTTP instead of FTP? Then check the Referer header. If it's Slashdot.org, or fark.com, or K5, whatever, redirect the user to the mirrors page.

      Hell, if the referrer is anything except the mirrors page, refer the user to the mirrors page.
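
      Something along these lines with Apache's mod_rewrite would do it; a rough sketch only, and the path and mirror URL here are just illustrative:

          RewriteEngine on
          # requests for kernel tarballs that didn't come from the mirrors page...
          RewriteCond %{HTTP_REFERER} !mirrors [NC]
          # ...get bounced to the mirrors page instead of the file itself
          RewriteRule ^/pub/linux/kernel/.*\.tar\.(gz|bz2)$ http://www.kernel.org/mirrors/ [R,L]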
  • I am thankful... (Score:2, Insightful)

    by Blob Pet ( 86206 )
    for those who are brave enough to immediately try out fresh kernels that may break one's system so I don't have to - and for those responsible for putting the fix out so quickly.
  • by scorcherer ( 325559 ) on Monday November 26, 2001 @09:16AM (#2613037) Homepage
    2^4 = 16
    • 2^4 = 16

      And this one is an even number; even ones are supposed to be stable.

      2.2.x --> stable
      2.3.x --> development
      2.4.odd --> seems to have unexpected bugs.
      2.4.even --> might be stable. Who knows?

      2.5.0 --> unstable! It had to be. Now everyone who said that 2.5.0 would be the last 2.odd stable one will be proven wrong.

      Didn't this have to do with the odd and even numbers of the Star Trek movies? 8-) Or don't you think this is funny after downloading 2.4.15 just a few hours ago and syncing/fscking like hell now?
  • by Griim ( 8798 ) on Monday November 26, 2001 @09:23AM (#2613066) Homepage
    I've been following all the kernel releases, and their bugs. I was just curious, what is the best way to tell which kernel is currently the most stable, without jumping immediately to the latest release? Obviously there is no way of knowing if it is, without it being out there for at least a couple of weeks.

    I was hoping that kernel.org or somewhere would list what is currently the most stable. I know that roughly 2.4.5 through 2.4.11 or so suffered from some sort of swapping/memory leak, I can't remember which. This is just from loosely following what has been posted to Slashdot in the past few weeks.

    Is there any resource tracking for this? What is the most stable of the latest kernels?
    • > What is the most stable of the latest kernels?

      IMO the most stable kernel release is 2.2.20. Some people say that 2.4 is still testing, not stable.
    • by Snowfox ( 34467 ) <snowfox@snowfox.net> on Monday November 26, 2001 @09:57AM (#2613194) Homepage
      I've been following all the kernel releases, and their bugs. I was just curious, what is the best way to tell which kernel is currently the most stable, without jumping immediately to the latest release? Obviously there is no way of knowing if it is, without it being out there for at least a couple of weeks.

      First of all, unless you've got some very specific requirements only satisfied by a 2.4 series kernel, if you're worried about stability then you should be running a 2.2 series kernel.

      That said, if you must track 2.4, then you're best off tracking the changelogs and only upgrading when you see a fix for a problem likely to affect you. If the problem is minor, consider giving the new version a little time. There are enough version whores and neozealots out there that other people will gladly rush out and do the mine stomping for you.

    • The very best way to keep up on what is happening with the kernel is to read the Linux Kernel Mailing List. Here is info from the LKML:
      To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org
      More majordomo info at http://vger.kernel.org/majordomo-info.html
      Please read the FAQ at http://www.tux.org/lkml/
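
      Subscribing works the same way, for example (assuming your machine can send mail):

          # assumes a working local mail setup
          echo "subscribe linux-kernel" | mail majordomo@vger.kernel.org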
      • "Best" is an awfully loaded term.

        For most people, all that this will do is cause them a flood of email about minutiae for which they have no context.

        I would rate Kernel Traffic [zork.net] much higher on the "best way to keep up with what's happening in the kernel" scale for anybody who's not actually contributing code to the kernel. Even experienced C coders, if they aren't ass-deep in the kernel for other reasons.
    • by ajs ( 35943 ) <ajs@ajs.com> on Monday November 26, 2001 @10:28AM (#2613366) Homepage Journal
      What's the best way to tell which kernel is best? Run it for about 2 months on a wide variety of hardware, with a wide variety of software loads. Record incidents and map those against known problems, apply available patches for those that will impact you the most. Re-test.

      Then again, your distribution vendor already does this, so why would you be grabbing the latest development release (don't let the term "stable" fool you; that refers to interfaces, not field performance)? Red Hat is now up to 2.4.9 [redhat.com]. I know that there's a lot of work going on in the VM world, and it seems to have been sorted out, but as you are noticing, there are other things in the kernel besides VM. If you want a kernel whose performance characteristics are known, and whose primary bugs have been addressed, you have to sacrifice bleeding-edge fixes.

      Not an easy pill for the "I want my tarball now!" world of Open Source, is it? Look on the bright side, 2.4.9 updates from Red Hat on 11/2 beats the heck out of the too-little-too-late geological updates from any closed-source proprietary OS vendor. Q/A is hard work and cannot happen in zero-time.

      • What's the best way to tell which kernel is best? Run it for about 2 months on a wide variety of hardware, with a wide variety of software loads. Record incidents and map those against known problems, apply available patches for those that will impact you the most. Re-test.

        Sound advice, and I'm certainly glad to know that the big distributors of Linux do testing like this.

        In the long haul, however, I'd feel more comfortable if there were something open, free and distributed that accomplished the same thing. Just in case any of those good testers at RH, SuSE, Mandrake, Caldera, Debian ... move on from testing things on really weird old hardware combinations, like the kinds you might find in schools or in the third world, for example.

        Something like a database with motherboard, chipset, CPU, peripherals, kernel version alongside uptime and perhaps some rudimentary performance figures. Each user could contribute an entry to the database so that a very rapid feedback mechanism would be available to kernel hackers due to the size of the user base reporting in a methodical way.

        A more organized system would sure beat the anecdotal empirical approach of

        "Try this patch. - Works for me. - Wait, doesn't work for me!"
        • In the long haul, however, I'd feel more comfortable if there were something open, free and distributed that accomplished the same thing. Just in case any of those good testers at RH, SuSE, Mandrake, Caldera, Debian [...]
          Umm... since when is Debian not open, free and distributed? Did they get bought by IBM or something?!

          Sure, what you propose would be a great adjunct to what Debian does now (perhaps they already do; I'm not much of a Debian guy 'cause I've never really had the time).

          • Umm... since when is Debian not open, free and distributed?

            Sorry. Quite right - since never has Debian been closed.

            I didn't mean to imply they were closed.

            I only included them in an incomplete list of known distributors of Linux and GNU that might do some testing as part of their release process.

            My main point, obscured by my poor capacity for expressing coherent ideas, was to advocate the establishment of a formal open database that provides functional, benchmark and performance information about different flavors of the kernel in combination with different flavors of hardware.

            Along the same lines, a deliberately heterogeneous Beowulf cluster might be useful for testing kernel versions to see the impact of proposed patches and changes.

            Too often I hear kernel developers lapse into arguments about VM schemes, etc. where the arguments cannot be resolved because they depend upon actual empirical data that the developers do not yet have!

            Yes, there might be benefits for a new scheme under some circumstances and drawbacks for the same scheme under other circumstances. But we won't know until testing on various hardware with typical application suite combinations if the ratio of advantageous/disadvantageous is 95/5 or 5/95.

            <operativeword>Imagine</operativeword> data something along the lines of:

            A 386 with 12 MB with two ISA Ethernet cards had its NAT performance improve by 10% from 2.4.3 to 2.4.4

            A dual PIII running Apache with KDE user apps starting and stopping had 15% decreased throughput after 2 hours of uptime after applying the foobar VM patch to 2.4.8

            You get the idea.


    • I have a machine at work running 2.4.5 that has been running rock solid for 157 days. Granted, it's just a workstation that runs GIMP and other handy programs. However, it does run xaos while I'm not using it (so it has been at 100% CPU for most of that time)! It is still very responsive after all this time. PPro 200s are great machines...
    • I found this [ramdown.com] web site, which shows the bugs that are in the 2.4.x kernels.
    • this is probably too late to do any good, but....
      the stats at the Linux Counter [li.org] show that the most popular 2.4 kernel is 2.4.12 (144 boxes), closely followed by 2.4.9 (126), 2.4.13 (116) and 2.4.14 (110).
      The average uptime of 2.4.0 boxes is higher than for anything else in the 2.4 series (46.5 days), but this is very much a reflection of the days since release, too!
  • Now I'll have to wait for patch-2.4.16-to-2.5.0.bz2! Maybe next week...

  • Although I like to be as "leading edge" as everyone else, I've held back on migrating to the 2.4 kernel because of the sorts of things that have been happening to this release.

    Although the 2.4 kernel seems to be overall a major step forward from the 2.2 kernel, there have been too many major changes with too little testing to make it a 'stable' kernel yet. It was only a couple of 'mod levels' ago that the VM was entirely rewritten to fix a performance problem that the original 2.4 VM (rewritten from 2.2) introduced. And, the 2.4 kernel (finally having been pronounced 'stable' by the kernel team) is discovered to have a major file corruption problem (now, apparently fixed in the +1 mod).

    Not to disparage the kernel team (who I think have done a wonderful job in giving us the next generation kernel), but I think I'll wait until this 'stable' kernel stabilizes a little more.

    • And this is why the 2.4 kernel has had such a rough time. Please see the post on the linux kernel mailing list that has message id Pine.LNX.4.33.0111251946400.9764-100000@penguin.transmeta.com

      In it, Linus clearly states what the problem is with any major release... the people you really want to test it won't test it while it's in development.

      The number of people testing a development release is sadly too small to catch some of the problems. The same is true, to a lesser degree, of -pre releases relative to the final releases.

      At any rate, if you really want to help, set up a test box and test the development releases and provide useful feedback; then the time to a really stable release will decrease.
  • by mosch ( 204 )
    Okay, 2.4.15 was supposed to be "enterprise quality and bug-free", but it couldn't unmount filesystems without destroying them and now we're all supposed to go upgrade to the latest poorly tested kernel? What the fuck?

    It's time to admit that most people don't need the newest kernel, and should just run whatever their favorite distro has properly tested. Unless you enjoy pain and you have no data of consequence, chasing kernel versions is a losing proposition.

    • ...but it couldn't unmount filesystems without destroying them...

      Ok, that's just plain false. The worst that could happen is you get a few stale lock files left over (sometimes a bunch) that are undeletable, and can only be erased with a fsck. Not a huge deal. The problem is that when you unmounted a filesystem, if there was data that still needed to be synced, it could get garbled. No big deal. Do an fsck, and everything is restored. No offense, but did you even bother to read about the bug?
    • OK, I'm pretty much a newbie here. I have been on UNIX for a few years, but only a few weeks ago got a root password to my own box. I've been following Linux for some time, though, because I'm a free software idealist.

      Now, I've got Red Hat 7.2 on my machine, running the 2.4.7-10 kernel that came with the distro. All my partitions are ext3, and that's why I need a pretty recent kernel. Since ext3 was accepted by Linus in his tree, I figured I should upgrade, and indeed, I rushed to upgrade to 2.5.0 (cool, eh!) the minute it was released. Well, my filesystems came down apparently undamaged.

      So, when you're saying

      It's time to admit that most people don't need the newest kernel, and should just run whatever their favorite distro has properly tested.

      ...you're saying that I shouldn't pick up the latest version to get Linus tree, but run the 2.4.7-10 kernel that came with RH7.2? It does pretty much everything I need, I must admit. USB support isn't compiled in by default, nor is frame-buffer-devices (?), but then, I have 4 USB ports but no USB gadgets, and I'm just writing a thesis on this box right now, so I don't need any fancy graphics.

      I'm happy for any advice I can get! :-)

        Well, my filesystems came down apparently undamaged.

        Are you sure? Did you force a fsck? Unless you force a fsck, fsck won't notice there's anything wrong when you reboot.
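
        If not, something like this will do it next time around (the device name is just an example):

            touch /forcefsck     # forces a check on the next boot, if your init scripts honour this file
            e2fsck -f /dev/hda1  # or force a check of a given (unmounted) partition by hand; example device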

  • We're awaiting x>y (in 2.4.x and 2.2.y).
  • I looked at the 2.4.15 and 2.4.16 changelogs but I cannot understand if they fixed whatever problem happens with the disk cache. I often find myself with 2^32-1 KB of RAM devoted to cache, which has some interesting results.... If you just ignore it, it goes away after a bit, so it's probably a counter somewhere which underflows.
    But it's certainly fun, have you ever seen bubblemon turn pink? Or blood-red? :)
    • OK, your problem is easy. That happens with the _new_ VM introduced in 2.4.10, with the _old_ ext3 patch. So I am assuming you used 2.4.10 - 2.4.14, and you applied the ext3 patch. This is a harmless reporting bug. When the ext3 patch was merged into the mainstream it was fixed. Use 2.4.16; it's probably the most stable 2.4 release ever[1] (except for /possibly/ a Red Hat kernel).

      [1] 2.4.15 would have been the most stable/robust kernel except for that inode bug. Looking at the changelog for 2.4.16 one can see that the only real change was the inode bug fix, and one can make a safe prediction that 2.4.16 will turn out to be the most stable kernel in the 2.4 series so far.
  • preemptable patch (Score:3, Informative)

    by MartinG ( 52587 ) on Monday November 26, 2001 @09:29AM (#2613095) Homepage Journal
    For those interested, the preemptable patch [kernel.org] against 2.4.16-pre1 also applies cleanly to 2.4.16 final.
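
    Roughly, with the patch saved next to your source tree (the patch file name below is only an example):

        cd linux-2.4.16
        patch -p1 --dry-run < ../preempt-kernel-2.4.16.patch   # file name is illustrative; check that it applies cleanly
        patch -p1 < ../preempt-kernel-2.4.16.patch             # then apply it for real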
  • Seems that there's always a bug in every new kernel release lately, and either it's so major that it warrants switching to a previous kernel lest I suffer catastrophic effects, or it's minor but still something that affects me (such as ntfs or emu10k support).

    I somehow missed the 2.4.15 announcement so fortunately I wasn't hit by any problems (I also missed the 2.4.13 release, dunno how), but even though I normally pop in the newest kernel upon release I'm pondering waiting this one out.
  • Most people should wait a day or so to grab the latest kernel. As I'm finding (most of the US mirrors at least), 2.4.16 hasn't been mirrored to many of the mirrors yet :-)
    • Most people should wait a day or so to grab the latest kernel. As I'm finding (most of the US mirrors at least), 2.4.16 hasn't been mirrored to many of the mirrors yet :-)

      I dunno; the Canadian mirror had 2.4.16 at 10AM (EST)

  • Why is it that every time this happens we /. the hell out of all the home and mirror sites? It seems to me that the first thing to do would be to get it out on as many distributed file sharing systems as possible. I'm getting this onto all the file sharing programs that I use as soon as the DL finishes. It would be nice if everyone did the same.
  • Re: (Score:2, Interesting)

    Comment removed based on user account deletion
    • Very simple:
      Compile ext3 into your kernel (make sure it's not a module, if you want to use it for your root file system).
      Do a "tune2fs -j /dev/hdaX".
      Reboot.

      That's it.
      The help for the kernel option tells you which version of the e2fsprogs you'll need (at least 1.20?).
    • by Draoi ( 99421 ) <<moc.cam> <ta> <thcoiard>> on Monday November 26, 2001 @10:13AM (#2613269)
      You need to get the latest e2fsprogs (1.22) and the latest util-linux (2.11). Don't install the login utils if you're installing from a source tarball instead of an rpm.

      When done, type "tune2fs -j /dev/hdwhatever". Done! A journal will be created automatically. Remember to only run this on a clean ext2 partition (make sure you're not running 2.4.15! :) ). If you're going to convert over the boot volume, make sure ext3 is built into the kernel and not a module. You shouldn't have to set any particular LILO flags (I didn't, & I'm typing this on ext3/2.4.16pre1). Update your /etc/fstab to show the new filesystem type.

      Not sure about the Slackware stuff, but I doubt if there are any config file changes.

      Andrew Morton's EXT3 page [zip.com.au] has all the details.
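
      For example, the updated /etc/fstab entry for a converted partition might look like this (device and mount point are just illustrative):

          # example entry; adjust device, mount point and options to taste
          /dev/hda2   /home   ext3   defaults   1 2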
  • Okay, isn't the convention supposed to be that even-numbered middle-dot releases (2.2, 2.4...) are stable, with the experimental stuff in odd-numbered ones (2.1, 2.3...)? While 2.4 in general has many nice things about it, the whole thing feels too much like a "2.3" series for my taste. This umount error is just one more example.

    I note that 2.4.x broke my system badly -- it decided (as supplied with both Mandrake 8.1 and RedHat 7.2) that my ATAPI CD-RW was a DMA device, regardless of what I told the BIOS. With ide-scsi loaded over it, mounting caused kernel panic. An extremely helpful person on comp.os.linux.development.system helped me debug it with hdparm. But even building a custom 2.4.13 kernel didn't "solve" the problem (meaning that I have to leave hdparm in place and not use devfs). The kernel README is way, way out of date too. I'd expect this kind of stuff on an odd-numbered series. Perhaps even-numbered kernels need a bit more of a testing stage before release.

    Wouldn't it be strange if 2.5 became the more stable one? At this rate, it could happen.
  • User mode linux? (Score:3, Interesting)

    by whovian ( 107062 ) on Monday November 26, 2001 @10:10AM (#2613258)
    Having just joined the x86 camp, I wondered whether running 2.4.15 within User Mode Linux [sourceforge.net] would have been helpful in this case. For that matter, how large is the actual user-base for UML?
  • by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Monday November 26, 2001 @10:13AM (#2613267) Homepage Journal
    This is ONLY a suggestion, not a flame. But could you please make better use of that -pre qualifier? Don't be in such a rush to make releases. Sure, the essence of Free Software is "release early, release often", but that's what the -pre stage is for.


    Kick back, relax, take it easy, and run some automated burn-in tests for the kernel. Releasing code doesn't need to be a strain, or rushed. Remember, you're not doing it for "them". There is no "them", except in Sci-Fi, or paranoid extremist literature. Rushing is a self-inflicted injury. If you need to do self-harm, use a rubber razor-blade or something.


    Many of the major shifts in the kernel have been the Right Thing To Do(tm), but those are the times you need to relax -MORE-, not less. Anyone with a penguin as a mascot understands cool. Cool is good. Cool is exactly what that penguin needs. Cool is what YOU need. You can't run at top gear, indefinitely, and expect to be even close to 100% of your ability.


    As I recall, we went through something in excess of 120 pre-releases for one early kernel, and other early kernels often went through 20-30 pre-releases. (Oh, for the days of using a-z for the pre-release number! Sometimes the kernel fell off the end of z, and I think that was part of the incentive to switch to numbers.)


    When Alan Cox maintained his series, he would often get into the tens, I suspect much for the same reason. A kernel is a complex thing, and the interactions can be hideously obscure. It takes a lot of testing and validating to work even just the worst of these glitches out.


    If we reach 2.5.0-pre100, with the understanding that 2.5.1 will be solid enough to do new work, without forever struggling to figure out if a bug is in new code or a cold kipper from 2.4.x, nobody is going to complain. Well, nobody with any sense. The rest we can secretly smuggle into Afghanistan, where nobody'll care what they think.


    I'd rather see 2.5.1 for Thanksgiving -NEXT- year, than be unable to do any serious development work for it. A solid foundation and a late, but perfect structure, is a billion times better than a sky-scraper made from twigs and built on straw, even if the sky-scraper is built on time.


    You, like anybody, are undoubtedly feeling all sorts of pressures - from work, to the family, to the economy, etc. Many of those pressures are bogus. Worrying about job security won't give Transmeta a greater profit. If it itches, scratch it (just be careful what you scratch in public), and if it doesn't, forget about it. You don't need to go creating problems. We have a Government to do that for us.


    None of what I've written is new to you. Little, if any, is probably new to anybody. But it's all stuff we need to hear, from time to time. And when I see someone who is no idiot repeatedly making some very basic coding errors over a relatively short time, I think it's not unreasonable to conclude that there's a guy who is burning himself out in the hamster wheel of life, and that that guy might benefit from kicking back & kicking the wheel over. Sometimes we go the furthest by making the least effort.

    • You know what? As Linus posted to LKML[1], it doesn't matter if there are a million pre releases; as long as it's a pre release, most people won't download it and run it on their hardware and workloads. Not to mention the fact that Linus doesn't like to maintain kernels and turns them over to other maintainers (Alan and Marcelo) for maintenance.

      Hence, bugs don't turn up until after real releases are made.

      Anyone who goes out and runs a shiny new kernel on a mission critical machine when it was released 20 minutes ago is just asking for trouble. These kernels simply haven't had the QA needed to be considered stable until a number of days after they're released.

      If you want a QA tested kernel, go to RedHat, Suse or any of the other Linux distributions, shell out whatever they charge for bundling it up and use their kernel. When that kernel breaks, go whine to the distribution maintainers. (I've done this personally with RedHat, and found them to be very responsive to bug reports.)

      It's either that, or fix it yourself; it's that simple. What, you want something for nothing? That's not how free software works.

      Whining about the problem will not fix it. Going out and fixing it yourself, will.

      1. See posts about Linus and maintaining stable kernels here [theaimsgroup.com] and here [theaimsgroup.com].
      • What -I- do to test pre-releases is to run them. Hard. On mission-critical systems, when I can.


        Am I stupid? No. There is no better test of a kernel than a real situation. There never will be. Real Life will always throw up situations that can never be anticipated in the laboratory.


        What else do I do? I compile patches. Pre-releases, new releases, ANY releases. I bundle them together, release them on Sourceforge, and watch the counters fly. You say that nobody would run a pre-release? 400-800 people regularly say otherwise, whenever I upload a new FOLK patch. That is as "pre-" as you can get, yet hundreds of people actually use it!


        I have used Linux since 0.1, the BSDs since William Jolitz first ported the Berkeley tapes to the Intel, and I can tell you this from first-hand experience -- the BSD releases are damn-near rock solid, BECAUSE the people behind them insist on extensive pre-release cycles. HOWEVER, Linux overtook the BSDs within 2 years of coming out, because Linux development was open.


        What I am asking for is to re-merge the two approaches. It's as simple as that. Re-merge? Yes! As I said in the letter, early Linux kernels went through tens, sometimes hundreds, of development iterations before a release was made.


        "Nobody uses pre-release versions"? Methinks you and he have forgotten that ftp.funet.fi was saturated, every pre-release that was made.


        Sure, Linus can't QA a complete kernel. I wasn't asking him to. I don't even believe in the entire QA philosophy. Stochastic testing is comparable to throwing darts at a map, in an effort to find gold. You =MIGHT= be lucky, but the odds are that you will miss the bloody obvious many times more.


        To really test a kernel requires exhaustive testing of EVERY function call, under EVERY possible entry condition & state, OR a formal proof, neither of which is terribly practical, whether you're an individual or a distribution manufacturer. Red Hat may be rich, compared to Joe Average, but they still can't afford the 10,000 Ph.D mathematicians they'd need to check a kernel rigorously, in any realistic time-frame.


        So, how do you achieve a decent quality? Easy! You run the program in much more compact cycles. By compacting the software life-cycle, and running many many more iterations, you can produce (in much less time, and for much less money) a quality comparable to having a few gigantic life-cycles of enormous cost.


        Linus knows this. He isn't an idiot. If he has to change the versioning, so that there isn't a "pre-" label, but rather a sub-sub version, to get people to run the kernel, then that's what he should do. There is NO excuse for umount() bugs in a 2.ANYTHING kernel. Development, pre, or otherwise. That kind of bug should not exist, even in the darkest imagination, beyond version 0.1.

    • Actually, I think the new maintainer did the right thing this time. He had -pre1 sitting there for about a week letting people hammer at it, and people didn't have any major problems with it, so he released it (with a slight tweak to the 8139too driver to make it compile with gcc 3.0.2).
      • Yes, I'd agree with that. A week between sub-versions is probably about right. I'd honestly prefer a sub-sub version number, with one release of that every couple of days or so, no matter how small the changes. Gives people some exercise, and would get people to think about Linux as a rapidly evolving product, rather than a dead weasel that was not so much greased as run over by a 10 tonne truck.
      • He had -pre1 sitting there for about a week letting people hammer at it, and people didn't have any major problems with it, so he released it

        Hammer it, as in trying the most stressful things load-wise (CPU, storage, video etc.). The lesson to be learnt here is that there are other things that must be tested - like the very rarely occurring reboot.

        OK, in the real world there are a lot of Linux machines that don't run crazy uptimes - like dual or multi-boot machines, with people booting windows to play games or to use m$ office. Give them a confidence boost - that they can use a "stable" kernel from the 2.[even] series without having to reinstall Linux. :)
  • by BlueUnderwear ( 73957 ) on Monday November 26, 2001 @10:17AM (#2613295)
    Ok, say you are running 2.4.15 now. You are compiling 2.4.16. And now you want to reboot with the new kernel. But reboot implies unmount, which might trigger the bug! So what would be the safest way to jump off that 2.4.15 timebomb? Is this a situation where just pushing the reset button would be safer than a clean shutdown?

    Would remounting the filesystems read-only help? Or would that also trigger the bug?

    And, if your filesystems are reiserfs, do you need to worry too, or does this only affect the traditional filesystems?

    • init 1, sync, then hit the reset button. Boot with your rescue floppy (you have one, right?) and force a fsck on your partitions. Note: the /forcefsck trick will NOT work with reiserfsck. You must run reiserfsck manually.
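
      From a rescue floppy that might look roughly like this (device names are just examples):

          e2fsck -f /dev/hda1           # force a check even if the filesystem is marked clean; example device
          reiserfsck --check /dev/hda3  # reiserfs partitions have to be checked by hand; example device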

      Rich
    • Before you reboot, do the following:

      touch /forcefsck

      If you have the 'magic sysrq' option enabled, you can use the key combination 'Alt-PrintScreen-S' (from the console, of course...) to sync the filesystems. Do this a couple of times (with a 1-2 second interval), followed by a couple of 'Alt-PrintScreen-U' (unmounts and remounts read-only all filesystems). When all filesystems have been remounted R/O, use 'Alt-PrintScreen-B' (on some systems only the left Alt key works for this combo) to reboot the box. The presence of the /forcefsck file should force a fsck on the next boot (it does this through a check in the rc.sysinit script; grep for fsck in this script to see whether your rc.sysinit uses this file or some other mechanism, if you're not sure).

      If you DON'T have the magic_sysrq option enabled, you can sync(1) a couple of times before rebooting to lower the chance of there being dirty inodes on umount.

      The most important bit is the one about forcing a fsck on the next boot, no matter what filesystems you use. This bug affects all filesystems, including ext3 and reiserfs and others.
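
      If you're not sure whether magic SysRq is active, on a kernel built with CONFIG_MAGIC_SYSRQ you can switch it on at runtime:

          echo 1 > /proc/sys/kernel/sysrq   # enable the magic SysRq key on a running 2.4 kernel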
    • by sfe_software ( 220870 ) on Monday November 26, 2001 @11:21AM (#2613649) Homepage
      From my understanding the bug affects all filesystem types.

      I patched my kernel to 2.4.16-pre1 yesterday in light of this bug, and here's what I did:

      1) Compile kernel using my normal procedure
      2) Switch to single user mode ('init 1')
      3) 'sync' and 'umount' each partition (except /)
      4) sync
      5) shutdown -r -F now

      No corruption, no problems (I'm on ext3 so the forced check wasn't even noticeable).

      You might be tempted to remount / read-only first, but if you do, first create '/forcefsck', which is exactly what the -F flag on 'shutdown' would do, but of course only if / was writable.
    • by aussersterne ( 212916 ) on Monday November 26, 2001 @11:38AM (#2613743) Homepage
      0) Make sure you have compiled and installed a patched kernel.

      1) "shutdown now" or "init 1" as root to go single-user.

      2) sync

      3) umount all non-busy filesystems (usually only root is busy for most people).

      4) sync

      5) mount -n -o remount,ro /
      (so now the root filesystem is read-only -- this step *is* important).

      6) e2fsck -f /dev/partition
      (once for each partition, starting with root [/] device, substitute e2fsck with reiserfsck, etc., as necessary -- force a check on each filesystem)

      7) sync, hit reset

      8) make sure not to ever boot into the buggy kernel again!
  • 2.4.16 and ALSA (Score:5, Informative)

    by pwagland ( 472537 ) on Monday November 26, 2001 @11:13AM (#2613607) Journal
    Well, this was posted for 2.4.15, but it is also relevant for 2.4.16:
    While we are talking about incompatible kernel patches, please be aware that ALSA 0.5.12 does not work under 2.4.15. You need to get the CVS version, as described here [geocrawler.com]. ALSA 0.5.12 compiles, but does not work.
  • I'm running 2.4.15. I haven't rebooted since the boot that brought me into 2.4.15, and my disks haven't been unmounted (except for the FAT32 drives; I have a script that unmounts them and remounts RO before running vmware, but this bug shouldn't affect those drives). Since I'll hit this bug when I reboot, I think I'll compile the 2.4.16 kernel but sit on a reboot until a couple of weeks go by to make sure there are no problems....

    You guys beta test and let me know, OK? :P

    -Legion

    • Just take a look at the 2.4.16 changelog. There really weren't that many changes to the kernel, and this bug is a fairly troublesome one. I would only sit on 2.4.15 if I had a UPS and I touched the /forcefsck file in root (you should do that now, anyway).

      There really is no reason NOT to install the new kernel. You probably haven't racked up much uptime anyway, and it's not as if uptime on 2.4.15 is worth bragging about anyway.

      Personally, I upgraded when 2.4.16-pre1 came out. I also converted many of my partitions to ext3 (finally). I've been waiting for ext3 to be merged in with stable for a very long time!

      Another improvement that wasn't detailed because of the famous "...merge with Alan..." messages in the ChangeLog was that most of LVM is up to date in the stable kernel now. LVM has been at the 1.0.1rc4 release for some time now, and not having to patch my kernel is pretty nice (although, the LVM crew made creating patches quite simple). If you haven't checked out LVM [sistina.com] yet, do so. It's quite sweet!

  • Perhaps we've got some Klingon Programmers [bazza.com] working on the kernel now.
    8) "What is this talk of 'release'? Klingons do not make software 'releases'. Our software 'escapes' leaving a bloody trail of designers and quality assurance people in it's wake."

"The following is not for the weak of heart or Fundamentalists." -- Dave Barry

Working...