Linux Software

Kernel 2.4.11 Released

stygian writes: "Linux 2.4.11...need I say more?" Of course you do. You need to point people to the mirrors and changelog, at a minimum.
This discussion has been archived. No new comments can be posted.

  • Are there any issues known when upgrading older kernel versions?

    Got an old Redhat box with 2.2.16 (IIRC) and would like to bring it up to the latest stable release. Any chance of that being done easily?

    I've done incremental updates before but never major overhauls.

    Moose.

    "That is a valid question, now how about a valid answer." (I forget who)
    • not an uber geek but I'll give it a try.

      Check the README in the kernel source directory for the list of required software for the 2.4.x series.

      From the kernel version you are using, I'd expect you'll be upgrading a whole lotta stuff.

    • Check out Documentation/Changes. You'll probably need to upgrade binutils, modutils, e2fsprogs, and PPP (if you're running PPP). The file has pointers to applicable versions.

      If you're comfortable compiling a kernel, it shouldn't be any trouble.

    • You could always try running the upgrade from a RedHat 7.1 disk. It has worked well for me in upgrading 6.2 boxes. It is also a hell of a lot easier than upgrading all the individual packages. IIRC RH 7.1 ships with the 2.4.2 kernel; an upgrade to 2.4.x from that is a snap. Of the boxes I've upgraded, some have new, custom kernels and some are still running the stock RH kernel, which seems pretty solid. I did do an upgrade on one of the systems manually (not quite manually, lotta RPMs involved, some compiling) and it took at least 5 times as long as simply running the upgrade from a current CD.
    • Not THAT easily.. you'd most likely need to upgrade critical system tools and utilities like binutils, util-linux, modutils, maybe even gcc. For kernel 2.4.10, these are the needed versions of those and other packages:

      gcc - 2.95.3
      make - 3.77
      binutils - 2.9.1.0.25
      util-linux - 2.10o
      modutils - 2.4.2
      e2fsprogs - 1.19
      ppp - 2.4.0

      The Changes file is more complete, though; read it to learn about the other changes you might need to make. Oh, and the recommended version of glibc for kernels in the 2.4 series is 2.2.x, so you might want to upgrade that as well, though it isn't required.
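
      Comparing installed tool versions against the minimums in the list above means comparing dotted version strings field by field, not lexically ("2.9.1.0.25" sorts after "2.10o" as plain text but is older numerically in the first field that differs). Here is a minimal sketch of such a comparison; it ignores letter suffixes like the "o" in "2.10o" for simplicity, and the sample versions in main() are just illustrations:

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* Compare two dotted version strings numerically, field by field.
       * Returns <0, 0, >0 like strcmp. Letter suffixes (e.g. "2.10o")
       * are skipped rather than compared. */
      static int vercmp(const char *a, const char *b)
      {
          while (*a || *b) {
              long na = strtol(a, (char **)&a, 10);
              long nb = strtol(b, (char **)&b, 10);
              if (na != nb)
                  return na < nb ? -1 : 1;
              while (*a && *a != '.') a++;   /* skip non-numeric suffix */
              while (*b && *b != '.') b++;
              if (*a == '.') a++;
              if (*b == '.') b++;
          }
          return 0;
      }

      int main(void)
      {
          /* e.g. an installed egcs 2.91.66 is older than the required gcc 2.95.3 */
          assert(vercmp("2.91.66", "2.95.3") < 0);
          assert(vercmp("2.95.3", "2.95.3") == 0);
          assert(vercmp("2.9.1.0.25", "2.10") < 0);
          printf("version checks pass\n");
          return 0;
      }
      ```

      This is roughly the check Documentation/Changes expects you to do by eye against the output of `gcc --version`, `ld -v`, and friends.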
  • changelog (Score:3, Informative)

    by metalhed77 ( 250273 ) <andrewvcNO@SPAMgmail.com> on Tuesday October 09, 2001 @09:45PM (#2409011) Homepage
    here to whore and to reduce stress on the servers!

    final:
    - Jeff Garzik: net driver updates
    - me: symlink attach fix
    - Greg KH: USB update
    - Rui Sousa: emu10k driver update

    pre6:
    - various: fix some module exports uncovered by stricter error checking
    - Urban Widmark: make smbfs use same error define names as samba and win32
    - Greg KH: USB update
    - Tom Rini: MPC8xx ppc update
    - Matthew Wilcox: rd.c page cache flushing fix
    - Richard Gooch: devfs race fix: rwsem for symlinks
    - Björn Wesen: Cris arch update
    - Nikita Danilov: reiserfs cleanup
    - Tim Waugh: parport update
    - Peter Rival: update alpha SMP bootup to match wait_init_idle fixes
    - Trond Myklebust: lockd/grace period fix

    pre5:
    - Keith Owens: module exporting error checking
    - Greg KH: USB update
    - Paul Mackerras: clean up wait_init_idle(), ppc prefetch macros
    - Jan Kara: quota fixes
    - Abraham vd Merwe: agpgart support for Intel 830M
    - Jakub Jelinek: ELF loader cleanups
    - Al Viro: more cleanups
    - David Miller: sparc64 fix, netfilter fixes
    - me: tweak resurrected oom handling

    pre4:
    - Al Viro: separate out superblocks and FS namespaces: fs/super.c fathers fs/namespace.c
    - David Woodhouse: large MTD and JFFS[2] update
    - Marcelo Tosatti: resurrect oom handling
    - Hugh Dickins: add_to_swap_cache racefix cleanup
    - Jean Tourrilhes: IrDA update
    - Martin Bligh: support clustered logical APIC for >8 CPU x86 boxes
    - Richard Henderson: alpha update

    pre3:
    - Al Viro: superblock cleanups, partition handling fixes and cleanups
    - Ben Collins: firewire update
    - Jeff Garzik: network driver updates
    - Urban Widmark: smbfs updates
    - Kai Mäkisara: SCSI tape driver update
    - various: embarrassing lack of error checking in ELF loader
    - Neil Brown: md formatting cleanup.

    pre2:
    - me/Al Viro: fix bdget() oops with block device modules that don't clean up after they exit
    - Alan Cox: continued merging (drivers, license tags)
    - David Miller: sparc update, network fixes
    - Christoph Hellwig: work around broken drivers that add a gendisk more than once
    - Jakub Jelinek: handle more ELF loading special cases
    - Trond Myklebust: NFS client and lockd reclaimer cleanups/fixes
    - Greg KH: USB updates
    - Mikael Pettersson: separate out local APIC / IO-APIC config options

    pre1:
    - Chris Mason: fix ppp race conditions
    - me: buffers-in-pagecache coherency, buffer.c cleanups
    - Al Viro: block device cleanups/fixes
    - Anton Altaparmakov: NTFS 1.1.20 update
    - Andrea Arcangeli: VM tweaks

  • by cymen ( 8178 ) <cymenvig @ g m a i l . com> on Tuesday October 09, 2001 @09:55PM (#2409046) Homepage
    The Preemptible Kernel patches [tech9.net] can result in a desktop that reacts/feels faster... I'm running it here with 2.4.10 on an Inspiron 4000 laptop and I'd have to say I'm impressed - everything feels a bit zippier. The only problem I've had is that there seems to be some loop that it has optimized that blasts bits around the memory bus at high speed with a rhythmic pattern - in short, if I'm in a really quiet room, the high-pitched buses are a bit noisy... Maybe my hearing is too good!

    Anyway - doesn't look like much changed since pre-6 so the pre-6 patches should work but if you want to be sure you can wait until rml releases the 2.4.11 final patch. I'd recommend checking it out if you have the time...
    • These sound real good. Is there a reason that these patches are not the default behavior? Is there a downside to having a preemptible kernel? Everyone that runs these patches says they are awesome.

      I'm assuming that it's not in 2.4 because it probably changes a lot of things and needs to be done in 2.5.
    • The PE kernel work looks pretty good, but it's still got some kinks to work out in order to guarantee sub-5ms latencies. In a recent email to alsa-devel, Takashi Iwai posted the following tests with alsa and low-latency [alsa-project.org] versus preemptible kernel [alsa-project.org] patches. In summary, getting better, but not quite there yet.

      I definitely agree with you though, the PE people's work is exciting, and much less of a hack than the low-latency patches. Way to go hackers!

    • by Adam J. Richter ( 17693 ) on Wednesday October 10, 2001 @12:28AM (#2409394)

      For things like playing buffered video and sound, where you just need to get the CPU every few milliseconds, I would think that the system call code paths are not so long that you really need a preemptible kernel. I would expect that it would be enough to just change the time quantum from 1/100th of a second to, say, 1/5000th, by replacing the "#define HZ 100" in include/asm/param.h with "#define HZ 5000". I have not tried this, but this sort of thing has been discussed on the linux-kernel mailing list. One person there reported that doing this caused his Palm cradle to stop syncing, so be warned that this seems to trip at least one bug.
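
      The arithmetic behind that suggestion is simple: the timer tick period is the reciprocal of HZ, so raising HZ shrinks the tick proportionally. A toy calculation (just the math, not a kernel patch):

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Timer tick period in microseconds for a given HZ value.
       * Changing "#define HZ 100" to a larger number shrinks the
       * tick (and thus the scheduling granularity) proportionally. */
      static long tick_usec(long hz)
      {
          return 1000000L / hz;
      }

      int main(void)
      {
          assert(tick_usec(100)  == 10000);  /* stock x86 HZ=100: 10 ms/tick */
          assert(tick_usec(1024) == 976);    /* some archs: roughly 1 ms/tick */
          assert(tick_usec(5000) == 200);    /* proposed HZ=5000: 0.2 ms/tick */
          printf("HZ=100 tick is %ld us\n", tick_usec(100));
          return 0;
      }
      ```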

      As someone who has only looked through the preemptible system call patch and never tried it, my impression is that while it may be great, I expect its design to change a bit. Right now, under this patch, you build the kernel with basically a fixed number of fake CPUs that make your computer look like it has more CPUs than it does. The kernel being preempted causes the old kernel's state to become associated with one of these fake CPUs, and then the preempting context takes over a real CPU. [I'm really not doing justice to the code in this oversimplified and possibly misinformed description.]

      In the future, I would hope that the need for a fixed number of fake CPUs would disappear, and that the "old fashioned" way of doing context switching would also disappear when the preemptible kernel option is selected. In other words, that would be the only way context switching would normally occur, rather than having two ways of doing the same thing.

      I have always regarded the potential for a preemptible kernel as the biggest side benefit of the move to SMP in Linux 2.0, and I'm glad to see people turning it into a reality. However, maintaining the option of building a non-preemptible kernel may be worthwhile, at least for uniprocessors, because the preemptible kernel code relies on running a multiprocessing kernel (even on a uniprocessor), which has a slight performance cost in setting and releasing all those locks that never once experience contention.

      • I would expect that it would be enough to just change the time quantum from 1/100th of a second to, say, 1/5000th, by replacing the "#define HZ 100" in include/asm/param.h with "#define HZ 5000".
        What are you talking about? The reason you get skips in sound and such is because the kernel hogs the CPU for a long time, using spinlocks (kernel 2.4) or by disabling IRQs and then doing a bunch of processing (older kernels). It's particularly bad during I/O storms, and thus the bad VM lately has caused people to complain about audio dropouts. Changing HZ is not going to do anything but make the kernel less efficient. Note that the current default is 1024 for some archs, which corresponds to 1ms. Everyone sees latencies longer than 1ms on a regular basis, even with the low-latency/preempt patches.

        --Bob

        • If you're talking about spinlocks and you're running on a single CPU machine (even with an SMP kernel), the kernel never blocks on a spinlock, because there is never spinlock contention (except for a kernel locking bug, where the kernel will lock up hard at that point). The overhead of checking the spinlocks is also very small (nanoseconds on that single CPU system, especially since there is no cache snooping). So, the delays that are long enough to deplete sound buffers are going to occur because the granularity of time slices between processes is too long, not because of lock contention.

          With HZ=100, the timer tick is 1/100th of a second (ten milliseconds), and any process running at a CPU priority of nice 0 (the standard), nice -1, nice -2 or nice -3 will get five ticks (see the definition of TICK_SCALE in kernel/sched.c), so each time slice will be 50ms, which begins to approach the buffer size of sound cards when you have a few runnable processes, and is already much longer than video frame rates.
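
          The 50ms figure falls straight out of that arithmetic: one tick is 1000/HZ milliseconds, and a nice-0 slice is five of them (taking the five-ticks-per-slice figure quoted above as given). A quick check of the numbers:

          ```c
          #include <assert.h>
          #include <stdio.h>

          /* Time slice in milliseconds for a process granted a given
           * number of timer ticks, assuming the 5-ticks-per-slice
           * figure quoted above for nice 0 in the 2.4 scheduler. */
          static long slice_ms(long hz, long ticks_per_slice)
          {
              return ticks_per_slice * 1000L / hz;
          }

          int main(void)
          {
              assert(slice_ms(100, 5)  == 50);  /* HZ=100: the 50 ms slice above */
              assert(slice_ms(1000, 5) == 5);   /* raising HZ tenfold shrinks it */
              printf("HZ=100, 5 ticks -> %ld ms slice\n", slice_ms(100, 5));
              return 0;
          }
          ```

          With a handful of runnable processes, several 50ms slices back to back easily outlast a sound card's buffer, which is the parent's point.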

      • I think you've got some of the details wrong. First, changing the hertz timer wouldn't help anything. When people say that Linux is not preemptible, it means that when a process is running in kernel mode (as the result of a system call), the scheduler will not preempt it. It runs until it voluntarily blocks. Even if the scheduler is called more often, all that would happen is that it would allow the process to continue running more often. The result of this is that the maximum scheduling latency is dependent on the length of the system call paths. Long paths (such as disk access calls) cause spikes in latency. What the preempt patches do is change SMP spinlocks into preemption locks. Each time a spinlock is taken, a preemption count is incremented. When it is released, the preemption count is decremented. Whenever the preemption count is zero, a context switch is allowed to happen. There is a good article here. [linuxdevices.com]
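
        The count-based locking described above can be sketched in a few lines. This is a toy single-threaded model of the idea, not the real kernel code (which is per-CPU and involves memory barriers):

        ```c
        #include <assert.h>
        #include <stdio.h>

        /* Toy model of preemption locks: taking a "spinlock" bumps a
         * count, releasing it drops the count, and the scheduler may
         * only preempt when the count is zero. Nesting works because
         * the count only returns to zero when every lock is released. */
        static int preempt_count = 0;

        static void toy_spin_lock(void)      { preempt_count++; }
        static void toy_spin_unlock(void)    { preempt_count--; }
        static int  preemption_allowed(void) { return preempt_count == 0; }

        int main(void)
        {
            assert(preemption_allowed());     /* idle: preemption is fine   */
            toy_spin_lock();                  /* enter a critical section   */
            assert(!preemption_allowed());
            toy_spin_lock();                  /* nested lock: count nests   */
            toy_spin_unlock();
            assert(!preemption_allowed());    /* outer lock still held      */
            toy_spin_unlock();
            assert(preemption_allowed());     /* count back to zero         */
            printf("preempt-count model ok\n");
            return 0;
        }
        ```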
    • by DGolden ( 17848 ) on Wednesday October 10, 2001 @05:41AM (#2409980) Homepage Journal
      One thing to note, and I find myself saying this again and again, is that one of the simplest performance tweaks you can do is to negative-renice the X server. It's even mentioned somewhere in the X manual, and makes a hell of a difference.

      This means that the GUI then pre-empts background tasks, like on Windoze, and other systems intended for desktop use. Of course you don't want to do that on a server machine, but only Microsoft are stupid enough to do it by default even on their "server" OSes.

      I'd like to see "workstation" installs do it automatically, but there's a few small notes:

      (a) if you renice it too low, it also ends up pre-empting audio tasks too much, and audio could conceivably skip when you move windows about. Shouldn't happen on today's reasonably fast computers. Easily fixed by careful tuning, perhaps including renicing important audio tasks too if your computer's really slow.

      (b) If you're using the xfs font server, it needs tuning too - if it's starved of cpu time, then you might actually make text-heavy parts of the gui slower, not faster. I really wish distros would stop using xfs, since truetype support is now built into the X server, and server-side font support is being phased out thanks to XRender and Xft anyway.
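
      Under the hood, renicing is just the setpriority() call. A minimal sketch of what "renice -10" on the X server amounts to, with the caveat that lowering a nice value (raising priority) needs root, so this demo only adjusts its own process upward; the X-server PID mentioned in the comment is hypothetical here:

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <sys/resource.h>

      int main(void)
      {
          /* 0 as the "who" argument means the calling process. */
          int before = getpriority(PRIO_PROCESS, 0);

          /* For the X server you would run, as root, the equivalent of:
           *     setpriority(PRIO_PROCESS, x_server_pid, -10);
           * (x_server_pid is a placeholder; find it with pidof X.) */

          /* Raising our own nice value needs no privilege: */
          assert(setpriority(PRIO_PROCESS, 0, before + 1) == 0);
          assert(getpriority(PRIO_PROCESS, 0) == before + 1);
          printf("nice went from %d to %d\n", before, before + 1);
          return 0;
      }
      ```

      The per-task tuning mentioned in (a) is the same call with different PIDs and values.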

  • This one is pretty sparse. WHAT changes were made with the emu10k driver? Did they change the bug that kills init on boot when you try to detect the game port? Did they update the way it reports, so that xmixer can control more things again? (What's with that, anyway? 2.4.2, I could control all sorts of stuff with xmixer. 2.4.10, I pretty much only have control over the volume.)
  • VM Changes (Score:4, Interesting)

    by Goonie ( 8651 ) <robert DOT merkel AT benambra DOT org> on Tuesday October 09, 2001 @10:01PM (#2409065) Homepage
    Back in 2.4.10, Linus made a fairly radical change in the virtual memory system - a rather unusual one for a stable kernel. While a lot of people are rather unhappy about it (notably Alan Cox and Rik van Riel, the maintainer of the existing VM system), from all public accounts so far it seems that the new VM system works considerably better than the old one.

    So, is that the case? Have there been any stability problems? Is the performance better (not that it really matters to a workstation user, but...)?

    • Re:VM Changes (Score:5, Informative)

      by Ian Schmidt ( 6899 ) on Tuesday October 09, 2001 @10:17PM (#2409107)
      Performance under my normal working set (KDE 2.2 w/default theme + Mozilla nightly version + the CRiSP text editor + KMail + XMMS + GAIM + several xterms, with occasional compiles and runs of very large apps like Wine and XMame) is substantially better (faster, smoother, way less swapping) on 2.4.10 vs. 2.4.9. I should note I'm running 512 MB RAM and 640 MB of swap on 2 partitions, and the system barely ever goes to swap now (with the previous VM, just starting up that environment got me into swap and it quickly maxxed out the swap from there).
      So while I do appreciate Alan Cox's caution, the new VM works substantially better for me and I say "Go Andrea and Al!"

    • Re:VM Changes (Score:4, Informative)

      by TheGratefulNet ( 143330 ) on Tuesday October 09, 2001 @11:00PM (#2409170)
      I'm not sure I agree it works better.

      I ran all the 2.4.x's, both at home and at work. I am a software developer (not kernel, though) and so I beat on my systems pretty heavily. both systems run dualhead X and my work system additionally runs hardware (dac960) raid. cpu is a k7 tbird, in the ghz range.

      anyway, 2.4.9 was ok for me. I tried 2.4.10 and both my systems (home and work) locked up within days. hard tight lockup.

      I brought both back to 2.4.9, and so far, so good (less than a week running, though; it was only a week ago I went to .10 and had those problems).

      I, too, worry about 3k line commits to so-called 'stable' trees to radically change an algorithm or model. can't say for sure if .10 was really a dog for me, but my systems usually run for months and months before being rebooted (usually due to my swapping of pci cards and such, necessitating a shutdown to do the board swap). so it does seem unusual for me to have a modern linux kernel freeze on _both_ of my hard-working linux boxes. hmm..

      • Re:VM Changes (Score:3, Informative)

        by garcia ( 6573 )
        I noticed exactly the opposite. w/2.4.9 I was experiencing almost daily lockups (hard ones, fsck became my friend). Today was my first lockup w/2.4.10 since I installed it. I was running a bunch of crap in X, compiling a kernel and upgrading to the latest and greatest Debian.

        Machine went down hard as hell when I tried to logout of X.

        I am currently compiling 2.4.11 so we shall see how that goes.

        YMMV. Best of luck to you all. :)
    • Re:VM Changes (Score:5, Interesting)

      by CraigParticle ( 523952 ) on Tuesday October 09, 2001 @11:45PM (#2409283) Homepage
      It shouldn't surprise anyone that 2.4.10 VM performs better than 2.4.9. Even in terms of the "traditional" 2.4 VM from Rik, the Linus and Alan trees deviated starting around kernel 2.4.7. There were numerous complaints about the Linus tree missing important patches, and having contradicting patches applied. It ended up quite a mess, and VM performance reflects this. Alan's tree was much more conservative in this regard.

      If you compare 2.4.11 to anything, please compare it to the latest -ac kernels from Alan, where the traditional 2.4 VM is actually working very well. There's NO sense in comparing 2.4.11 to 2.4.9; the VM in 2.4.9 and its kin was just plain broken.

      Side note: In Rik's VM, please remember to not just look at swap used as a gauge of whether you're swapping or not. All anonymous pages are mapped to swap, so the space is simply allocated. You can create a huge image in GIMP [gimp.org] and lots of swap will be allocated, but without a drop of disk I/O! Use vmstat and look at the 'si' and 'so' columns to see if you're actually writing pages to swap. Or look in /proc/meminfo and subtract "SwapCached" from the amount of swap you think you're using. That's the amount of *written* swap you're using (a better comparison to 2.4.10). This needs to be made sensible in 2.5, if this VM is to be resurrected.
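
      The "subtract SwapCached" accounting above can be done mechanically from /proc/meminfo: written swap is (SwapTotal - SwapFree) - SwapCached. A small sketch, parsing a canned meminfo-style snippet (the field values are made up for illustration; on a live box you would read the real file):

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Pull the kB value of a named field out of /proc/meminfo-style text. */
      static long field_kb(const char *buf, const char *name)
      {
          const char *p = strstr(buf, name);
          assert(p != NULL);
          return strtol(p + strlen(name) + 1, NULL, 10); /* skip "Name:" */
      }

      int main(void)
      {
          const char *sample =
              "SwapCached:     20480 kB\n"
              "SwapTotal:     655360 kB\n"
              "SwapFree:      573440 kB\n";

          long used    = field_kb(sample, "SwapTotal") - field_kb(sample, "SwapFree");
          long written = used - field_kb(sample, "SwapCached");

          assert(used == 81920);      /* 80 MB of swap allocated...        */
          assert(written == 61440);   /* ...but only 60 MB actually written */
          printf("allocated %ld kB, written %ld kB\n", used, written);
          return 0;
      }
      ```

      That written figure is the number comparable to what 2.4.10's VM reports as swap in use.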

      Andrea's work has cleaned up the handling of inactive pages (which could have been done under the old system), and the new "classzone" approach and VM balancing aren't documented anywhere outside the code itself. In addition, there are very normal loads where it performs badly compared to the -ac tree. Here is a test suite [arizona.edu] that tests different aspects of aging and swapping, and the results as provided to linux-kernel [helsinki.fi]. 2.4.10 (patched with Andrea's VM tweaks) swapped more pages, took longer, and had to swap more pages back in when the tests completed (i.e. it could have chosen better pages to swap out). It also caused XMMS to skip mp3 playback throughout the tests, whereas -ac didn't.

      Nothing's perfect of course; a process that randomly walks through pages performs better in 2.4.10 [helsinki.fi] since it's more streamlined and not trying to be as "intelligent" about page handling. Rik's code could no doubt be improved here.

      That's the great thing about open source: let the best idea win! No doubt in 2.5 we'll see these two VM schemes hash it out in much more complete form (i.e. lose the remaining kernel 2.2-isms, maybe add physical page mapping, almost certainly swapfs -- mostly for Rik's scheme; I'm not sure what the next steps for Andrea's VM should be).

  • ext3 (Score:2, Interesting)

    PLEASE for god's sake, merge ext3 into the kernel. it's nice and stable, and i'm sick of patching.
  • by Shane ( 3950 ) on Tuesday October 09, 2001 @10:36PM (#2409110) Homepage
    My personal feeling on the new VM is that it was the right decision. The VM problems have been going on for months. When people would report a problem, Rik would pretty much say: I don't have time to work on so and so... feel free to pay me or convince my employer to fund the work. Which is fine, that is his choice... But if I were Linus, this would make me more open to looking at alternative approaches even if the short-term risks were moderate.

    It is also interesting to note that Rik's VM core has had, say, 15 kernel releases (unstable + stable) to become stable and meet the expectations that Rik sold the kernel hackers on in the first place; judging from the reports on l-k, it is just now becoming stable enough for most workloads.

    The new 2.4.10+ VM had a couple of minor to moderate problems for _SOME_ workloads but overall has received very good reports, as far as I can tell, for being so new. 2.4.11 is bound to be even better.

    Some people are complaining about the inclusion of major VM modifications in the stable tree. I believe the truth is that most people who use Linux in production do not roll their own kernels. They use the vendor-supplied kernels. Redhat, for example, will be releasing a 2.2.7-11-AC kernel which uses Rik's VM; it is what they have been testing for months and thus is what they will end up shipping. So the fact that Linus made this change in the "Stable" tree makes very little difference to me from a stability standpoint, and I think it will prove to be a very good call in the short/medium/long run.

    That's my 2 cents anyways.

    • VM in 2.4.10 is absolutely broken. The LKML is rife with reports of hangs, strange behavior, evil performance, etc., under heavy loads.

      Pretty much fixed in 2.4.10-ac10-eatcache. Almost as fixed in 2.4.11, but more work definitely needs to be done before a company like RedHat will be willing to ship one of these kernels with the new VM code.
    • by KidSock ( 150684 ) on Tuesday October 09, 2001 @11:59PM (#2409302)
      The problem with Rik's VM was Rik. He has been an arrogant piss ant for as long as I've been watching the list. He obviously ain't no dummy and I have no problem with working with people like that but I think Linus was itchn' to get that monkey off his back. They were applying all sorts of desperate patches ("tuning") and falling all over each other in the process. They just don't know why his VM goes off into la la land under high loads. What do you do about that? Stable or not Andrea totally rewrote the VM in like 5 weeks. Sometimes rewriting something from scratch like that is just the Right Thing to do. Linus saw that on the surface it worked better than Rik's and took it as a blessing. Sure 2.4.10 was bleeding before it left the gate and immediately needed triage (anyone running 2.4.10 should get this release patch folks) but so far it's not been a disaster like some people have been warning about. In fact most people claim it's quite a bit better than Rik's. If you've been using 2.4 without luck, try this one folks.
    • I believe the truth is that most people that use Linux in production do not roll their own kernels

      I don't think you're right there at all. Companies are more likely to tweak the default installation, recompile the kernel for a known set of hardware, and then roll out a "company standard", using for instance RedHat's kickstart scripts.

      Using the stock kernel is made very difficult, at least for RedHat users. RedHat's ongoing refusal to support reiserfs while installing (only recently while upgrading), and its shipping (at least with 7.1, from memory) of a reiserfs module that was significantly slower due to debugging being left on, make kernel recompilation necessary.

      I can understand their reservations, but faster fsck times aren't the only reason to move away from ext2.
    • When people would report a problem, Rik would pretty much say: I don't have time to work on so and so.. feel free to pay me or convince my employeer to fund the work.


      From what I understood, Rik was making changes/fixes but Linus was not applying them. Alan Cox was saying he was tired of resubmitting the same VM changes to Linus. I only lightly read the kernel mailing list, but if this is accurate, then it is really Linus's fault for the behavior of the old VM. From what I understand, the VM in ac kernels is not bad either and it is based on Rik's VM work.

      • Being someone who reads all the posts from the core kernel hackers (at least those that are public), I feel pretty confident in saying that for the longest time Rik was too busy to fix bugs. Again, he has this right... Once other people started writing VM code (I think it started once pushonce was being tested by Daniel Phillips), Rik has been churning out code at the rate he was in the pre-2.4 release days (back when he was bidding to get his code included). So I would not be surprised in the least if, once pushonce was included, Rik's patches were ignored... the reason for which should be obvious: Linus decided to take another direction with the VM, and Rik's patches were incompatible with that direction.
    • The new Andrea VM is *much* smoother and more reliable for me in my standard desktop "working set". My machine has 512 MB RAM and 640 MB swap. I run KDE 2.2 and normally have Mozilla, KMail, the CRiSP editor, XMMS, GAIM, and a sprinkling of xterms open and doing stuff. I update and compile several large projects frequently including Wine and XMame.

      Prior to 2.4.10, this resulted in the machine gradually filling all swap and then becoming very slow. With the 2.4.10+ VM my system rarely if ever touches swap, and when it does it often eventually comes back out of swap when necessary. It's overall much faster and smoother, and my HD runs less. I haven't tried any of the late-model -ac kernels where Rik actually started fixing his problems (spurred on no doubt by Linus giving up on him) - they may also run well, I don't know.

      What I do know is 2.4.10 and .11 are among the smoothest kernels I've run since back in 2.2 (as Alan points out, Andrea was ultimately responsible for smoothing out 2.2's VM as well).

      One caveat with 2.4.11: starting with 2.4.11pre5 it plays very poorly with USB MS Intellimice. I have to unplug mine while booting 2.4.11 or else I get a continuous scroll of errors and no further boot progress (plugging it back in later resulted in normal operation including in X, but I'm still wary of the updated USB drivers).
      • Hmm, I don't know if that's your problem, but your swap/RAM distribution has problems. Rik has pretty much flat out said that your swap needs to be at least twice as large as your RAM or the VM stops working right. Either way, though, 2.4.10 does run much nicer on my end too.
    • I believe the truth is that most people that use Linux in production do not roll their own kernels. They use the vendor supplied kernels. Redhat for example will be releasing a 2.2.7-11-AC kernel which uses Rik's VM, it is what they have been testing for months and thus is what they will end up shipping.

      Anyone running a production setup worth their salt will be running their own kernel base, tuned for their own environment. The vendor kernels are a compromise, trying to please everyone, with every service you could ever imagine compiled in (and hence every bug/exploit included). Production boxes doing serious work are more likely to have a kernel set built for the purpose.

      Vendor kernels are far more likely to be used by people who are not that bothered about kernels and stability.

      FWIW my production boxes run a heavily patched 2.2.19.

  • Gotta love SDSL. I hope to have 2.4.11 running on my workstation tonight and on the servers at work tomorrow.

    Anyone happen to know if there's a RH 7.X-friendly .rpm available for those who are too timid to compile and install their own kernel? Several folks at my office will only install .rpm kernels. It would be nice to get 2.4.11 going at work as soon as possible. I only know a small amount of rpm voodoo, so I suppose I'll give it a shot if one isn't already available.

    Thanks in advance!
    • On your servers? (Score:2, Insightful)

      by Bake ( 2609 )
      Stable branch or not.
      You really should NOT run production servers (the ones at work anyway) on the latest and greatest kernels.

      Who knows what data-corrupting bugs are in a new kernel? I recall a few years back when a kernel was released that corrupted data over time. (Albeit that was in the testing branch, 2.1.44, but it's a matter of principle.)

      At least set it up on test servers first before launching on production servers.

      Do yourself (and us) a favour, try before you buy.
  • IrDA (Score:3, Insightful)

    by SilentChris ( 452960 ) on Tuesday October 09, 2001 @11:39PM (#2409270) Homepage
    Oh boy, IrDA has been updated. *groan*

    NTFS, NTFS, NTFS boys. In a year or two most systems out there will have it in XP, and Linux will be catching up to support it. We can make a run for a majority of the NTFS 5.0 changes now, so at least people will be able to access their drives.

    • NTFS has been supported since at least 2.2.x.
      • Is that so? I have 4-5 NTFS partitions and my fresh install didn't understand any of them (couldn't mount them at all). I'm a Linux newbie, but I have some friends who practically live the OS, and their understanding is that NTFS support is "not good". "There, but not good."

  • I think that was one of the things causing some MM problems under heavy loads. Have they gotten rid of this yet? I think it was gone just after 2.4.10. But, I don't like the sound of "resurrect oom handling" in the 2.4.11 changelog.

    -Chris
  • by labradore ( 26729 )
    I don't think that anyone takes kernel versioning as seriously as they used to. I thought that stable kernels were not supposed to include any really new core features, but mostly just bug fixes and perhaps new drivers, etc.

    Rik's VM should have either shown up in the 2.3 tree and been stabilized there before entering 2.4, or the 2.5 tree should have been opened with it. I guess since 2.4 had to be pushed out the door (and I'm glad it was) there was no time for his VM to mature inside 2.3. But would it be worthwhile to let those ideas stagnate? So much really new activity has been going on since 2.4 that perhaps it would be too hard to manage 2.4 and 2.5 kernels with lots of active development going on in both simultaneously.

    It seems to me to be a hard management decision to make. The 2.4 series needed a lot of fixes and at the same time there has been a lot of new stuff floating around. Would introducing 2.5 a few versions ago have slowed development on 2.4 and increased overall patch-management headaches? I suppose the answer is yes, but I don't have an idea of how badly it would slow things down.

    I do think, however, that it is wonderful to have both Linus and Alan Cox around and maintaining diverging credible trees. They can both gain perspective watching the other's code grow and break. When the two trees do finally merge again we (hopefully) will have the best characteristics of both.

    • I lost faith in Linux versioning and tree management a looooong time ago. I pretty much stick with distribution kernels these days. There are several things wrong with the current process, which could be fixed.

      There needs to be OVERLAP of development kernels. For example, when 2.3 turned into 2.4-test, the 2.5 branch should have IMMEDIATELY shown up. That way, there is always a place for those who are good at doing new stuff and a place for those fixing what's there. This also greatly improves turnaround time. Also, Linus sucks at maintenance. He's good at development, but not at stabilizing and maintaining. Alan Cox is wonderful in that area. The _instant_ 2.3 became 2.4-test, the reins should have been handed to Alan Cox, to be released as 2.4.0 whenever Alan said it was ready. That way, Linus can spend his time dreaming up wonderful things and Alan can make it all work.

      Anyway, I'd post this to LKML, but I don't have time to be a kernel hacker myself.
  • No trolling intended. Just a plain question.

    To an outsider, it would look like the answer is Yes:

    • The AC and LT source trees are diverging, on issues sometimes technical and sometimes 'political'. Will there ever be a full merge?
    • The 2.5 tree is not coming out, and 2.4 is merging huge and 'revolutionary' patches.

    It seems to me that the model which worked so well for the 2.2/2.3 series is not working anymore. In true bazaar fashion, a new model is already trying to define itself, and the AC and LT trees may be part of it. Maybe it is just time to admit it and try to define the new model a bit more clearly, if possible.

  • by Erik Hensema ( 12898 ) on Wednesday October 10, 2001 @04:24AM (#2409858) Homepage

    During the stable life of 2.4.x it became more or less clear to me that the current model of development for the Linux kernel doesn't work very well.

    Changes that were too experimental for a stable kernel but too important to be deferred to an experimental kernel were included in 2.4.x all the time (the VM changes in 2.4.10 being the best example).

    This makes me wonder: isn't it possible to improve the scheme of x.even.y = stable and x.odd.y = unstable? Even as we speak the -ac series provides an experimental kernel within the stable series. Maybe we could enhance this model into something more official.

    I'm not sure about the actual form yet. I was thinking about something in the line of three kernels:

    • Stable: users should be able to rely on this blindly. This kernel works. Each and every release.
    • Testing: this kernel should evolve into the next stable kernel. More ambitious than the current -pre kernels; longer running development and more testing. Yet, nothing really radically new.
    • Experimental: playground for hackers. New features are introduced here.
    The 'Testing' branch is new. I imagine these kernels being released every month or so, at about the rate the stable kernel is released now. As soon as the Testing kernel proves something works and is stable, it's up for inclusion in the stable kernel.

    Stable kernels should IMHO be lower-paced. Maybe a major release every four to six months or so. The VM is allowed to change radically, but only after having been tested extensively in the Testing series. Of course, simple bugfixes should be allowed in. This would give us a stable kernel every month; it just wouldn't be a terribly interesting one, which is as it should be.

    The Experimental kernels are as experimental as the current x.odd.y series.

    • What you describe is like the current Debian branch model, which seems to work very well.

      If stability is your highest priority, you stick to the stable release, which pretty much guarantees uptime. So it's good for important servers. It can get out of date quickly, though, depending on your needs.

      For desktops and less mission-critical servers, the testing release suits. Get the (almost) latest software, and retain a good level of stability.

      Developers and masochists get to play with the unstable release.

      Could be a good idea.
  • I won't run 2.4.11 (Score:4, Interesting)

    by Stonehead ( 87327 ) on Wednesday October 10, 2001 @04:54AM (#2409910)
    Just like the majority of you readers, I am not a kernel developer. But I like to know what I'm running. My conclusion is that if you want a stable kernel, ignore Linus' tree and use the Alan Cox tree. To put it bluntly, 2.4.10+ really is 2.5, and you should only run it if you are prepared for some weird behaviour.
    Now, am I a troll? I hope not. I did get my info from Kernel Traffic [zork.net], which I've been reading for months. It is a very good, clear, and understandable digest of all the important things that happen on the linux-kernel mailing list. If you use Slashdot as your only information portal about the kernel, you are *braindead*.
    Ok, now my point - it is the VM subsystem. By now you should know that 2.4.x, until recently 2.4.10, used the VM code by Rik van Riel. That code has taken some time to develop, but you definitely can't blame Rik as the cause for all 2.4 stability problems, as well as the eternal delay of 2.5. But according to the l-k list, Linus himself made several errors in including Rik's patches, which indeed caused 2.4.7 and up to be unstable! Ok, now stop and think about this. Linus has an enormous responsibility. He didn't realize where the fault was, but he did perceive that the stable kernels were NOT stable. He knew that Andrea Arcangeli was still working on his own VM (that work improved Rik's VM too in 2.3. Not having a monopoly really does improve invention!) Then Linus made the big step: even in a *stable* series, he took over Andrea's VM and threw out Rik's one. This is really an important decision, and I applaud it!
    The only thing Linus should not have done is labeling this thing 2.4.10. It really is 2.5. [lwn.net] For the general public, that kernel was definitely anything but a stable kernel. Luckily a lot of problems have been solved since (2.4.11 is a hell of a lot better than 2.4.10), and I consider Andrea Arcangeli a really good coder, but actually I trust Alan Cox the most. He commented that Linus' recent kernels trashed several boxes of his overnight. Alan really sees the -ac tree as the stable one currently. I run 2.4.9-ac18 too, with the kernel preemption patch as mentioned earlier, on a P2-233 under quite some load, and it doesn't show any strange behaviour. (The kernel preemption patch doesn't really do much here: I still get skips when I record an mp3 from my soundcard and switch desktops in the meantime. But I shouldn't expect wonders :))
    One last thing: Rik van Riel's VM has improved *too*. Alan Cox catches up with his patches very speedily. No more big bugs; Rik even added some optimizations in 2.4.9-ac16. I can't see that of course, but overall the system is a lot more responsive than 2.4.3-pre6, my last kernel before this one.
    So my advice: use the ac-series [kernelnewbies.org] of the kernel. Linus has made some wise decisions. I think he should start 2.5 and leave 2.4 to Alan, before people go sulking about 2.4.10 versus the always-stable reputation of the Linux kernel.
  • Looking at the impact on 7.2... the big changes in the VM say something about the older VM that will no doubt be packaged with Red Hat. I hope they can get any issues with it nailed down, because their .2 series has always been rock-solid stable. Ah well, there is always .3
  • I'm still happily running 2.4.3 on everything, it still works as well as it did when I installed it.

    As always, what compelling reason is there to upgrade? It's not like other OSes, where you have to unless you want major security or stability issues. And I have yet to find one app that has a kernel requirement.

    Add to that the fact that RedHat 7.1 is a major pain in the arse to upgrade without the blessed Red Hat RPM packages. (Hey, at least I got work to run Linux, and it had to be Red Hat for the support and the fact that the CEO holds some RHAT stock.)

    It'd be great if someone could come up with a decent way to install a current kernel on RH 7.1 without breaking everything that runs on startup (kudzu and all the other fodder) and without waiting for Red Hat to put one together and bless it.

    Other than that one issue, there is no reason for a corporate or regular user to upgrade the kernel.
    • by tao ( 10867 )

      XFree 4.1 requires a v2.4.10 or v2.4.11 kernel to use DRI/DRM. On the other hand, XFree 4.0 doesn't work with v2.4.10/v2.4.11.

      Other than that, the need for upgrading is mostly if you experience problems or have new hardware.

      AFAIK you can use make rpm to build an RPM of your kernel nowadays (new in v2.4.(some number > 3)). For Debian, the counterpart is make-kpkg, which has existed for ages.
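
      As a rough sketch of those two routes (assuming an already-configured 2.4 source tree in /usr/src/linux; exact paths and the `--revision` value are illustrative, not prescribed by either tool):

      ```shell
      # RPM route: the kernel's own "make rpm" target wraps the build
      # into an installable kernel RPM. Configure first.
      cd /usr/src/linux
      make menuconfig
      make rpm            # RPM lands under the RPM build tree, e.g. /usr/src/redhat/RPMS/

      # Debian route: make-kpkg (from the kernel-package package) builds a .deb.
      make-kpkg clean
      make-kpkg --revision=custom.1 kernel_image
      # then install it with dpkg:
      #   dpkg -i ../kernel-image-<version>_custom.1_<arch>.deb
      ```

      Either way you end up with a package your package manager knows about, instead of hand-copied files under /boot.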

      • by Eil ( 82413 )
        XFree 4.1 requires a v2.4.10 or v2.4.11 kernel to use DRI/DRM.


        There's always the possibility that I could be missing something here, but... either I'm highly insane or you are very wrong. According to my XFree86 log, I'm running version 4.1.0 (released on June 2, 2001).

        Wouldn't this mean that XFree 4.1 was released before there even was a 2.4.10 kernel? My X setup is the same one that came with Slackware 8.0, which ships with Linux kernel version 2.4.5. I've been playing Quake3 and Unreal Tournament on this setup for months now, DRI and all.
  • Big Endian Reiser? (Score:3, Interesting)

    by hey! ( 33014 ) on Wednesday October 10, 2001 @08:36AM (#2410254) Homepage Journal
    Does anyone know whether the "Reiserfs cleanup" noted in the changelog includes big-endian support?

    The base reiserfs code ONLY supports little-endian architectures (shame!). I recently put one of my PPC-based servers on the AC tree to get big-endian reiserfs support, but I've heard the AC tree patches have file fragmentation problems. I'm a little nervous about going live with this thing because of the reported VM problems and a potentially flaky reiserfs.

  • Anyone else out there find that after compiling 2.4.11 and then recompiling the nvidia kernel module, X wouldn't work? I did... I tried older versions of the nvidia drivers as well.

    Oh well... it's back to 2.4.10 for me.
  • Let's hope what Linus said about other operating systems is not true:

    5. What do you think of the FreeBSD 5 kernel and WindowsXP's new features from a clearly technical point of view?

    Linus Torvalds: I don't actually follow other operating systems much. I don't compete - I just worry about making Linux better than itself, not others. And quite frankly, I don't see anything very interesting on a technical level in either.

    I think if this is true, Linus is being extremely stupid in this regard. Many operating systems have had serious design flaws that permanently hampered their development. Paying attention to other similar systems is a very important part of system development -- it keeps you from making the same mistakes others have made.
