
Linux 2.6.37 Released

diegocg writes "Version 2.6.37 of the Linux kernel has been released. This version includes SMP scalability improvements for Ext4 and XFS, the removal of the Big Kernel Lock, support for per-cgroup IO throttling, a network block device built on top of the Ceph clustered filesystem, several Btrfs improvements, more efficient static probes, perf support for probing modules, LZO compression of the hibernation image, PPP over IPv4 support, several networking micro-optimizations and many other small changes, improvements and new drivers for devices like the Brocade BNA 10Gb Ethernet, Topcliff PCH gigabit, Atheros CARL9170, Atheros AR6003 and RealTek RTL8712U. The fanotify API has also been enabled. See the full changelog for more details."
  • Kernel locking (Score:5, Interesting)

    by iONiUM ( 530420 ) on Wednesday January 05, 2011 @04:04PM (#34769550) Journal

    Well I'm glad they officially fixed the kernel lock. Out of curiosity, how long until Ubuntu or Debian sees this integrated into their line? A year? Not trolling, I only started using Ubuntu recently, so I'm curious.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Ubuntu will ship with this kernel in their next release, Natty, in April.

    • Re:Kernel locking (Score:5, Interesting)

      by Cyberax ( 705495 ) on Wednesday January 05, 2011 @04:09PM (#34769632)

      Ubuntu in about 6 months, 2.6.37 should be in the 11.04 release.

      In Debian Stable - in about 2 years (in the next release).

      • by Anonymous Coward

        Geez... it's a shame Linus and the boyz couldn't be bothered to program it correctly the first time around.

      • by jimicus ( 737525 )

        In Debian Stable - in about 2 years (in the next release).

        Doubt it - the next release of Debian Stable can't be that far away, Lenny's starting to look rather long in the tooth and Squeeze was frozen back in August. I'd expect to see Squeeze become stable within the next few months.

        • I think the GP meant "in the next release after Squeeze", which probably will be in about 2 years from now.

      • Debian is currently packaging 2.6.37-rc6; give it a few weeks and they will have 2.6.37.

        • Too bad they rarely build the kbuild package for experimental kernels so that users can actually install the needed kernel header packages.

          I don't understand why they even upload them when they are not installable due to dependency conflicts. That said, we are talking about experimental, so there is not supposed to be a guarantee that it is installable. It really does feel like a strange effort though, since anyone using hardware or software that requires a kernel module to be compiled cannot assist with the te

    • Re:Kernel locking (Score:4, Informative)

      by turbidostato ( 878842 ) on Wednesday January 05, 2011 @04:10PM (#34769644)

      "Well I'm glad they officially fixed the kernel lock. Out of curiosity, how long until Ubuntu or Debian sees this integrated into their line?"

      Don't know about Ubuntu but since Debian is already "frozen" towards its next release (codename "squeeze") you can bet it will be about two years from now, more or less.

      Of course, you will see it much sooner on their development lines, "Testing", "Unstable" and/or "Experimental".

    • In Debian it will be available soon in unstable; the RC is already available in experimental.

      Package linux-image-2.6.37-rc7-686

              * experimental (kernel): Linux 2.6.37-rc7 for modern PCs
                  2.6.37~rc7-1~experimental.1: i386

      • In Debian it will be available soon in unstable
        AFAICT the general policy in Debian is to only upload stuff to unstable if it's targeted at (note that targeted at doesn't necessarily mean it will be in) the next stable release. It seems unlikely that Debian would try to push a new kernel version at this point in the release cycle.

        So I wouldn't expect it to hit unstable until after squeeze releases.

    • by Björn ( 4836 ) *
      And what about Fedora?
      • Check the "rawhide" repositories. Fedora tends to track the -rc kernels fairly closely with near-daily builds. Therefore it is likely that fedora will have this in rawhide within a day if not already.

      • by arth1 ( 260657 )

        Fedora 15 has a planned release date of May 10.
        It will most likely have 2.6.37 (they're currently at .36, but several people have made .37-rc versions, and the deadline for version bumps and features isn't up yet).

      • As mentioned, the 2.6.37 kernel is in Fedora rawhide now, and it works fine with the current (Fedora 14) release if you want (or need) to run it:

        http://koji.fedoraproject.org/koji/buildinfo?buildID=212634 [fedoraproject.org]

        The official Nvidia driver installer compiles and runs cleanly against it, making early use easier.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      It's not a complete removal of the BKL. There are still some drivers (and a couple of filesystems, I think) that have not been converted. Selecting those in a config, whether built in or standalone, will re-enable the BKL. IIRC, the plan to have those corrected or obsoleted is scheduled for .38.

      • by korgitser ( 1809018 ) on Wednesday January 05, 2011 @04:21PM (#34769798)

        And that would be the .38 special

      • This is my understanding of this as well. The Big Kernel Lock has not been completely removed from the kernel; however, it is now possible to choose a kernel configuration option that will compile a kernel without the BKL. This means that a BKL-disabled kernel cannot use some of the modules which still depend on the BKL. However, none of the core modules depend on the BKL any more, and kernel developers are still working on removing the BKL from the handful of less important modules which de
    • by kbielefe ( 606566 ) <karl@bielefeldt.gmail@com> on Wednesday January 05, 2011 @04:22PM (#34769808)

      It already is [ubuntu.com], for very liberal definitions of "integrated." :-)

      • by Kjella ( 173770 )

        If you're just doing it for the newer version and don't need to change the code or config it's easier to grab the debs from the Ubuntu Kernel PPA [ubuntu.com] and install.

    • by micheas ( 231635 )

      Debian experimental should have it within a week or two.

      Debian is "Frozen" for the release of squeeze, so 2.6.37 will probably never make it into Debian unstable as it is likely that 2.6.38 will make it out the door before squeeze is released. (just a gut guess with no supporting evidence for the dates.) If my guess is right 2.6.38 will more likely make it into Debian than 2.6.37.

      Ubuntu will push a new kernel out in about three months; I don't know if it will be 2.6.37.

    • I've got the RC for Ubuntu and have used it for a month or two. It fixed the networking, so that after sleep it would actually connect to the network. For some reason, the driver was programmed to fill with random values so it wouldn't sync unless you rmmod / modprobe ath9k.

      On the downside, the RC removes control of the backlight. What I'd like to see next is fixing the kernel so that the new generation of touchpads is recognized, or maybe something like ndiswrapper for other drivers.

    • This kernel should trickle down to 10.04 LTS as well. One of the big complaints about the 8.04 LTS version was that hardware gets released so rapidly that a 3-5 year old kernel isn't going to support a lot of it. Even right now the Ubuntu 10.10 kernel (2.6.35) is in the 10.04 repos that are enabled by default.

    • Note that the performance advantages of that change are zero. It's an aesthetic thing, so distros are not eager to ship it.

    • Re:Kernel locking (Score:5, Interesting)

      by Bootsy Collins ( 549938 ) on Wednesday January 05, 2011 @04:52PM (#34770210)
      Would someone mind explaining (for those of us who have some C experience, but aren't kernel hackers) what the Big Kernel Lock is? In particular, is this something that will impact the desktop user?
      • by Anonymous Coward

        It was a hacky way of providing SMP support that is less efficient than a non-blocking method. As things stand it will have no effect for almost anyone, since the core modules haven't depended on the BKL in a while.

      • Re:Kernel locking (Score:5, Informative)

        by Pr0xY ( 526811 ) on Wednesday January 05, 2011 @05:11PM (#34770406)

        It's a fairly simple idea. In any place where two threads of execution (be they real threads, interrupts, or whatever) could possibly access the same resource at the same time, locking must be used to ensure data integrity and proper operation of the code. The "Big Kernel Lock" is a system-wide "stop the world" lock. This is a very easy way to make the code "correct" in the sense that it will work and not corrupt data. But the downside is that while this lock is held... everything else must wait. So you had better not hold it for very long, and while it is easy to get correct, it has pretty bad performance implications.

        A better solution is a fine grained lock just for that resource, so the only threads of execution which need to wait are ones that are actually contending for that resource. The downside here is that it is much more complicated to get correct. So when implementing this, you have to be *very* careful that you got it right.

        The BKL has been in the process of being removed and has been phased out of the vast majority of the kernel for a while; this change simply enables a build in which it doesn't even exist, as long as you don't build any of the older drivers which still lack more fine-grained locking.
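
        To make the idea concrete, here's a minimal userspace sketch (a rough analogy only: pthread mutexes stand in for kernel locks, and big_lock, foo_lock, bar_lock and the update functions are made-up names, not actual kernel code):

          /* BKL-style: one global lock guards everything, so two threads
             updating unrelated counters still serialize on big_lock. */
          #include <pthread.h>

          static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
          static int foo, bar;

          void update_foo_bkl(void) {
                  pthread_mutex_lock(&big_lock);
                  foo++;
                  pthread_mutex_unlock(&big_lock);
          }

          void update_bar_bkl(void) {
                  pthread_mutex_lock(&big_lock);
                  bar++;
                  pthread_mutex_unlock(&big_lock);
          }

          /* Fine-grained: each resource gets its own lock, so the two updates
             can proceed in parallel, but now every lock/unlock pairing has to
             be audited for correctness. */
          static pthread_mutex_t foo_lock = PTHREAD_MUTEX_INITIALIZER;
          static pthread_mutex_t bar_lock = PTHREAD_MUTEX_INITIALIZER;

          void update_foo(void) {
                  pthread_mutex_lock(&foo_lock);
                  foo++;
                  pthread_mutex_unlock(&foo_lock);
          }

          void update_bar(void) {
                  pthread_mutex_lock(&bar_lock);
                  bar++;
                  pthread_mutex_unlock(&bar_lock);
          }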

        • Re:Kernel locking (Score:5, Interesting)

          by Kjella ( 173770 ) on Thursday January 06, 2011 @02:41AM (#34774104) Homepage

          And this is one example of why the kernel doesn't have a stable ABI. You can bet tons of unmaintained third-party drivers would use the BKL, so you could never get rid of it. From what I understand, purging it from every driver has been a pretty big job, and it was only possible because all the drivers are in the kernel tree.

          • by m50d ( 797211 )
            So Linux is going to break the drivers for my webcam, PDA and TV card again? Forgive me if I don't jump for joy. Surely there'd be no harm in having the BKL hanging around - if you don't use any of the old drivers that use it, it'd never get locked and so never delay anything. So not having a stable API (I don't really care about the ABI, but the lack of an API is just atrocious) still looks like laziness from here.
          • by Pr0xY ( 526811 )

            Certainly ABI changes can be difficult to deal with, but personally I think it is better than keeping old code around like many closed-source systems end up having to do for compatibility purposes.

            I've found that a good (not perfect) solution is to write a thin abstraction layer between the module and the kernel. The module pretty much only deals with the kernel through this layer, so if the kernel changes, you have a central place to apply updates (nvidia does this and it has worked very well).
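
            As a rough sketch of that idea (hypothetical file name and wrappers, not nvidia's or anyone else's actual shim), the out-of-tree module would talk to the kernel only through something like a compat.h:

              /* compat.h: every kernel-facing call goes through these wrappers,
                 so an interface change is absorbed in one place instead of
                 being scattered through the driver proper. */
              #ifndef MY_COMPAT_H
              #define MY_COMPAT_H

              #include <linux/version.h>
              #include <linux/slab.h>

              static inline void *my_buf_alloc(size_t size)
              {
                      return kmalloc(size, GFP_KERNEL);
              }

              static inline void my_buf_free(void *p)
              {
                      kfree(p);
              }

              /* Version-dependent behaviour hides behind a single check
                 (2.6.37 is just an illustrative cut-off). */
              #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 37)
              #define MY_NEW_KERNEL_API 1
              #else
              #define MY_NEW_KERNEL_API 0
              #endif

              #endif /* MY_COMPAT_H */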

            In the end,

      • Re:Kernel locking (Score:5, Informative)

        by phantomcircuit ( 938963 ) on Wednesday January 05, 2011 @05:14PM (#34770452) Homepage

        The Big Kernel Lock is a Symmetric Multi-Processing (SMP) construct. In order to make kernel operations atomic, you lock the entire kernel. This works as a good initial locking mechanism because it's relatively easy to implement and it avoids issues like deadlocking very well.

        The problem with the BKL is that it locks the entire kernel: even if processes are calling functions totally isolated from each other, only one can be in the kernel at a time.

        In practice the BKL hasn't been a big deal for a while now, since the important (performance-wise) parts of the kernel have had finer-grained locking.

        So it's pretty unlikely to have much effect, if any, on desktop users.
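
        The deadlock point is worth spelling out. Here's a userspace sketch (pthread mutexes as stand-ins for kernel locks; lock_a and lock_b are made-up names): with two fine-grained locks, two threads that acquire them in opposite orders can hang forever, something that cannot happen by construction when there is only one lock to take.

          #include <pthread.h>

          static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
          static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

          /* Thread 1 takes A then B; thread 2 takes B then A.  If each grabs
             its first lock before the other releases, both wait forever: a
             classic ABBA deadlock.  With a single big lock there is no second
             lock to wait on, which is why the BKL was an easy first step. */
          void *thread1(void *arg) {
                  pthread_mutex_lock(&lock_a);
                  pthread_mutex_lock(&lock_b);
                  pthread_mutex_unlock(&lock_b);
                  pthread_mutex_unlock(&lock_a);
                  return arg;
          }

          void *thread2(void *arg) {
                  pthread_mutex_lock(&lock_b);
                  pthread_mutex_lock(&lock_a);
                  pthread_mutex_unlock(&lock_a);
                  pthread_mutex_unlock(&lock_b);
                  return arg;
          }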

      • by Anonymous Coward

        SMP support was added in 2.2, and the hard lock was a way to transition toward fine locking later. It only allows one thread to be in kernel space at a time and, as a result, causes problems when you try to scale the system. In other words, SMP gave the kernel support for parallel processing, which is incredibly useful for running multiple tasks at the same time.

        http://www.ibm.com/developerworks/library/l-linux-smp/

        http://book.chinaunix.net/special/ebook/Linux_Kernel_Development/0672327201/ch09lev1sec8.html

      • Re:Kernel locking (Score:5, Informative)

        by Yossarian45793 ( 617611 ) on Wednesday January 05, 2011 @05:20PM (#34770514)
        The BKL was a hack added in Linux 2.0 to support multiprocessor machines. It was ugly but expedient (like most engineering solutions). Over time, multiprocessor support in the kernel has gotten much better, and the BKL has become less important, up until now when it's so unimportant it can be removed entirely.

        Nobody, especially not desktop users, will notice any change from its removal.
      • Put simply, when you have many independent bits of code competing for finite shared resources/time within the kernel (this is different from code just running in user space), you have to put locks on them so that only one thread can access them at a time. Once a lock is released, another thread gets a turn. With a big lock, only one lock exists for all resources. Although a thread may only need access to a single resource, all of the resources get locked.

        The alternative is to implement more fine-gra

    • by mvar ( 1386987 )
      Somehow this question gives me a strange deja vu
    • That's one reason I switched to Arch Linux. Updates make it in a lot faster. That's particularly nice for the browser.

      Hoped I wouldn't experience downsides from that, but I have. I'd say Linux still isn't ready for the Year of Linux on the Desktop. These sorts of problems show the wisdom of holding off on kernel updates. I want Firefox updates right away. But kernel updates? Maybe not.

      Kernel 2.6.34 and 35 had a problem in the e1000 driver, which seemed to affect only very specific motherboards,

      • by Korin43 ( 881732 )

        Arch does have an LTS kernel (although "long-term" in Arch is like 6 months), which you can use if the current version is broken. I used it for a while when wine plus some kernel version caused World of Warcraft to not work. Hope this helps if you have problems in the future.

        pacman -S kernel26-lts

      • Agreed. I've been using Linux off and on (started with Slackware ~1.0, kernel 0.99; yeah, _that_ long ago). The running joke is that it is always $(Year)+X for the year of the Linux desktop -- seriously doubt it will ever happen for the masses. Windows and Mac OS X are just too entrenched and "good enough." As a programmer/geek, if I don't have time to dick around getting Linux to work, the rest of the computer-illiterate don't have a hope. It is easier to just recommend OS X to non-computer people.

        Always

      • by deek ( 22697 )

        Reiserfs is pretty old now, and dying, and ext4 hasn't thrilled me.

        What features have you enabled in ext4? I'm running it on one of our servers, and I like the performance very much. It's even a Debian stable machine, although I had to use a kernel from "testing" to properly enable ext4.

        I've got extents, uninit_bg, and dir_index enabled, amongst other features. If you converted from ext3, then you probably don't have these options enabled. Even if you created the ext4 filesystem from scratch, some older

        • Maybe I will. I didn't do anything special with ext4, which means I got the default settings, whatever they are. Don't think I had noatime. Anyway, seemed like ext4 wasn't as fast as a fresh Reiserfs partition. Ext4 rattles the hard drive more, judging from the noise. No, I don't have any hard data from carefully controlled tests to know whether ext4 really is less efficient. But I do know ext2 with journaling (aka ext3) isn't that great. Rather use xfs than ext3. Like its predecessors, ext4 is stil

      • by afabbro ( 33948 )

        Kernel 2.6.34 and 35 had a problem in the e1000 driver, which seemed to affect only very specific motherboards, like in one of my computers. No networking was an especially inconvenient problem.

        And kind of an unnecessary one, considering that the e1000 module code is available for free download on Intel's web site, is GPL, and is very easy to build. My box with an e1000e may be running 2.6.18, but the e1000e module is Intel's latest 1.2.20.

    • Re:Kernel locking (Score:5, Insightful)

      by Yossarian45793 ( 617611 ) on Wednesday January 05, 2011 @05:14PM (#34770454)
      For those that aren't aware, the BKL (big kernel lock) hasn't caused any issues except purist angst for a very long time now. All of the performance critical kernel code was fixed to use fine grained locking years ago. This change is just to satisfy the people who are offended by the architectural ugliness of the BKL. In terms of performance and everything else that matters, the removal of the BKL has absolutely no impact.
      • by TheLink ( 130905 )

        In terms of performance and everything else that matters, the removal of the BKL has absolutely no impact.

        How about reliability and stability? So they've reached the stage where stuff _definitely_ doesn't need the BKL?

    • Distros with rolling releases - probably not more than a week or two. I've read somewhere that Ubuntu is considering a change to a rolling-release model too.
    • Fixed? How about total frickin' anarchy running loose in your kernel. With no giant lock wielding a banhammer, kernel modules of all kinds will run rampant over anything and everything.

      It will be complete chaos.

  • by Anonymous Coward

    It's nice to see kernel improvements for Btrfs, but how is Btrfs progressing? It seems like it is constantly under heavy development (completely understandable), but have the guys behind Btrfs released some sort of working version, even if it doesn't do much?

    • Re:Btrfs (Score:5, Informative)

      by tibman ( 623933 ) on Wednesday January 05, 2011 @04:38PM (#34770056) Homepage

      It's running on my server at home, so I hope so ; )

      • Likewise, I just built a new computer and I put two 1TB drives in there and mounted them together as my /home using Btrfs. I haven't played around with it too much yet but so far it's been running buttery smooth. This is under Ubuntu 10.10 with a stock kernel so I'm not exactly at the cutting edge of kernel releases.
        Something I haven't worked out yet but was hoping to figure out was getting file cloning working (copy-on-write) in places where I want distinct files (no linking), but don't want to waste s
        • Re:Btrfs (Score:4, Interesting)

          by 0100010001010011 ( 652467 ) on Wednesday January 05, 2011 @05:38PM (#34770744)

          Before I committed ANY data to ZFS I sure as heck "played around with it" in virtual machines until I was comfortable doing about anything with it.

          "Pull" one of the drives. What happens?
          dd if=/dev/random of= to your disk in random places (skip/seek), what happens to your data.
          Pull all of the drives and replace it with a larger one.

          How are the user tools for btrfs? zpool & zfs are fairly well documented and have very simple short commands.

          Does it automatically share over nfs/samba like you can with ZFS on Solaris?

          • I see what you mean, but this is just a personal computer. I wouldn't be using this in any sort of production system (nor am I an IT guy).

            What appealed to me about using Btrfs was that I could dynamically add more disks as I was inclined to upgrade, I could get data striping, snapshots, data deduplication (not implemented yet though), and since I was planning on running Linux, there was no practical solution for using ZFS. The fact that I couldn't pull the drive or that corruption might take things down
            • by glwtta ( 532858 )
              The fact that I couldn't pull the drive or that corruption might take things down doesn't concern me too much because if I wasn't using Btrfs I'd be using Ext4 and I wouldn't imagine it would be any better.

              Corruption-wise, Ext4 sure as hell had better be better than Btrfs, since it was released as stable over two years ago.
            • You could just turn on LVM, which has been stable (and even the stock config in some distros) for years now and gives you dynamic volume allocation, data striping and snapshots with any filesystem. There are reasons to like ZFS/btrfs/etc., but the things you're asking for are easily available with much older, better-supported, better-documented solutions.

        • by vandy1 ( 568419 )
          https://btrfs.wiki.kernel.org/index.php/Problem_FAQ [kernel.org] has the answer: cp --reflink=always ... You will need a non-ancient version of coreutils for this to work. Cheers, Michael
          • You want cp --reflink=auto instead, so it works even if you issue the command across filesystems or on non-btrfs. There's no reason not to plop alias cp='cp --reflink=auto' into your .bashrc.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      I've been using it for nearly 2 years without any issues whatsoever (I haven't even had to fiddle with it to keep it going). It's been in the kernel for over 1 year.

      They're well beyond "some sort of working version, even if it doesn't do much". Give it a try.

    • Re:Btrfs (Score:4, Informative)

      by Korin43 ( 881732 ) on Wednesday January 05, 2011 @05:44PM (#34770820) Homepage

      The problem with Btrfs isn't that it doesn't work (it works fine and has for years). The problem is that it's not very fast right now (most benchmarks I've seen show it slightly behind other major file systems in most tasks), and most things don't make use of the cool things it does.

      • by Rich0 ( 548339 )

        The problem with btrfs is that it is unstable and feature incomplete, and doesn't even have an fsck yet.

        I've even gotten it to panic with loopback devices, ext3 conversions, and mirroring.

        BTW, I define unstable as lots of patches in every kernel release. It just is very new for any real production work.

        It is probably good enough for casual use on straight partitions without use of its more exotic features. I would keep good backups though.

      • Re:Btrfs (Score:4, Informative)

        by loxosceles ( 580563 ) on Wednesday January 05, 2011 @08:47PM (#34772440)

        Compared to zfs, which is the only other quasi-mainstream filesystem that has copy-on-write (which gives snapshots for free) and data checksums, btrfs is almost always faster. Most of btrfs's slowness is due to those two features. Comparing ext4 speed to btrfs speed is not fair unless you disable both with -o nodatacow.

        http://www.phoronix.com/scan.php?page=article&item=btrfs_zfs_ssd&num=1 [phoronix.com]
        See page 4 for btrfs random write performance, which blows away both zfs and ext4 on HDDs.

        Without COW and checksumming, btrfs is closer to ext4. Even so, except for applications that are reading or writing massive amounts of data, I'd rather have data integrity and SSD wear leveling and free read-only snapshots instead of maximum speed.

        • by TheLink ( 130905 )

          see page 4 for btrfs random write performance, which blows away both zfs and ext4 on hdds.

          I'd rather have data integrity and SSD wear leveling and free read-only snapshots instead of maximum speed.

          1) There are SSDs with decent random write performance now.
          2) Shouldn't the SSD wear leveling be done by the SSDs themselves?

      • Another problem is the fact that it has very poor crash recovery options. I lost a whole home partition thanks to Btrfs getting corrupted on some level or other due to a power outage during a read operation, of all things. Not cool.
  • Ceph is really cool (Score:5, Informative)

    by Lemming Mark ( 849014 ) on Wednesday January 05, 2011 @04:39PM (#34770064) Homepage

    Ceph is a really cool bit of technology. It distributes storage redundantly across multiple machines, so you can store lots and lots of data and not lose any of it if one of the hard drives explodes. It should distribute the load of serving that data too. You can have a network filesystem based on this already; now they've added support for virtual block devices (i.e. remote disks) over it.

    If you combine that with virtualisation (the Kernel Newbies changelog mentions that there's a patch for Qemu to directly use a Ceph-based block device) then you can do magic stuff, e.g. run all your services in virtual machines with their storage hosted by Ceph. Provide a cluster of virtualisation hosts to run those VMs. If a physical box needs maintenance, live-migrate your VMs off it without stopping them, then just yoink it from the cluster - the storage will fail over magically. If a physical box explodes, just boot the VMs it was running on other nodes (or, combined with some of the hot-standby features that Xen, VMware, etc. have started to offer, the VMs are already running seamlessly elsewhere as soon as the primary dies). If you need more storage or more processing, add more machines to the cluster, get Ceph to balance the storage and move some VMs around.

    Not everyone is going to want to run Ceph on their home network but if you have a need for any of this sort of functionality (or even just an enthusiasm for it) then it's super cool. Oh yes and Ceph can do snapshotting as well, I believe. Ace.

    • by Anonymous Coward

      That sounds like a lot of work to look after for a home network...
      I wonder if it ever goes wrong...

      • Not as much work as dealing with traditional RAID arrays. I end up having to RMA and/or replace a few disks a year, and while software raid keeps me from losing data in almost every case, it's a PITA. I'd much rather have that stuff managed automatically by a distributed filesystem. The next external disk enclosure I build is going to be part of a ceph filesystem or something similar. Then I'll reformat my recently built external 6TB raid-6 array and add those to the unified (e.g. ceph) filesystem as we

  • What's new (Score:5, Informative)

    by Troll-Under-D'Bridge ( 1782952 ) on Wednesday January 05, 2011 @04:57PM (#34770262) Journal

    The link in the story just points to the mailing list post announcing a new major version of the Linux kernel. Note that the changes listed in that post are only the changes since the last release candidate (-rc8), not since the last major kernel release (2.6.36). For an overview, it's better to head over to Kernel Newbies [kernelnewbies.org]. It even has a section which summarizes the "cool stuff", the major features that the new kernel brings.

    Interestingly, the overview appears to overlook what I believe is a major feature introduced in 2.6.37: power management for USB 3. I may have to do some more digging through the actual kernel changelogs. Maybe the change was reverted during the last few candidate releases, but I remember reading about it in H-Online [h-online.com], particularly this part:

    The XHCI driver for USB 3.0 controllers now offers power management support (1, 2, 3, 4); this makes it possible to suspend and resume without temporarily having to unload the driver.

    (In the original, the parenthetical numbers are links to the kernel commits.)

    Power management for USB 3 would have been the most important new feature for me. Without it, you have to resort to a number of ugly hacks to hibernate or suspend a laptop or a motherboard with USB 3 enabled. (Turning off USB 3 in the BIOS is a hardware hack that allows you to bypass the software hacks.)

    • I missed the second link in the story, which does point to Kernel Newbies. (Blame it on my browser which doesn't color links in red or some other obscene color.) However, my comment about USB 3 still stands. I'm still trying to find a "news" source that highlights the new XHCI power management feature. Failure to hibernate/suspend because of non-working USB 3 power management is an issue that's been discussed in a number of forums.
  • Looking through the changelog I couldn't find anything immediately evident about whether or not the "200-line kernel patch which does wonders" was included in this release or not. Here is the related original post [slashdot.org]. Anybody know if it will be in there for certain? I may have to remove my Ubuntu alternative workaround outlined in the subsequent article [slashdot.org] before doing an upgrade.
  • by isolationism ( 782170 ) on Wednesday January 05, 2011 @06:44PM (#34771512) Homepage
    I have a dual-processor Xeon system with six cores each, meaning there are effectively 24 threads (2 physical * 6 cores * 2 hyperthreading), and the system will lock up for SECONDS at a time during large IO operations. The file system is XFS over an 8-disc hardware RAID10 on 15K RPM drives. It seems to be most noticeable when copying to/from the network, although I'm not convinced the network is the problem here. For such a high-end machine these stalls are unbearable; I had (a lot) less difficulty with only 4 cores and fewer/slower drives in a hardware RAID 0.
    • Fuse-NTFS-3G makes my system really crawl when there is USB-disk IO. The UI and everything else is totally stalled for long periods. I'm not a Linux guru, but I can confirm that I'm also experiencing IO issues even when system load should be pretty low. "Linux Bender 2.6.32-27-generic #49-Ubuntu SMP Thu Dec 2 00:51:09 UTC 2010 x86_64 GNU/Linux". I guess some tweaking might be required to fix this issue.
    • You're probably running into this long-standing IO bug [kernel.org], which, despite complaints for many years, has still not been properly diagnosed. A big mystery, evidently.
      • Argh.

        I'd love to volunteer my hardware to help, but by the length of the bug it sounds like they probably already have plenty of people with a variety of hardware willing to run diagnostics -- and the addition of one machine sounds unlikely to solve something that's been ongoing for three years running, now.

        • by zx2c4 ( 716139 )
          Actually, I don't think so. If you're decent at bisecting and can find a reproducible test case, you'd probably be able to help quite a bit. There's been a lot of noise with too little refined testing on this bug. And it appears like there might be multiple things affecting it, on different types of hardware, etc. Basically, the current diagnostic seems like a mess. So by all means, dig in and start debugging.
    • You don't say which kernel you are using, so it could be that the problem you are seeing has already been fixed by a previous kernel. However, it is unlikely the removal of the BKL will make a difference to you if you're using 2.6.36, since most subsystems were already using fine-grained locks of their own before this. There might be another, different change in 2.6.37 that helps, but I'd say it's unlikely...

      • Linux mars 2.6.36-gentoo-r6 #1 SMP PREEMPT Sun Jan 2 15:46:06 EST 2011 x86_64 Intel(R) Xeon(R) CPU X5670 @ 2.93GHz GenuineIntel GNU/Linux

        I am indeed using 2.6.36, and I'm using the "patchless" alternative cgroups approach (which involves creating some new nodes and stuffing a few lines into ~/.bashrc). I was thinking it was less likely that the BKL would make a difference and hoping maybe some of the other things mentioned in the article would make a difference, like the cgroup io-throttling integration an

  • So, does this mean I don't need to play around with rp-pppoe or pppoe-conf (or equivalent) for DSL setup/configuration? And if yes, how then?

    • by ESD ( 62 )

      Unfortunately not. PPPoE runs the PPP protocol over Ethernet, not over an IPv4 connection (which in turn usually runs over Ethernet).
      This will probably be more useful for creating tunnels between different IPv4-connected hosts, such as for VPNs.

  • by Anonymous Coward

    Did Mike Galbraith's per-TTY task groups patch make it? I can't find any reference to it in the release notes.

  • Versioning? (Score:2, Interesting)

    by fuzza ( 137953 )

    So, what's the deal here - have they pretty much abandoned the old "odd minor releases for development, no new features in stable versions" plan, or what?

    • by shish ( 588640 )
      Yeah, they abandoned that a few years ago. The current plan is something along the lines of "six weeks development, two weeks bugfixes-only, release, repeat", incrementing the third part of the version number each time (i.e. there are no plans for the "2.6" part to ever change, AFAIK).
  • Requiring all sorts of synchronisation and a release cycle. Plan 9 has a release *every day*.
