Open Source Linux

Linux Kernel 2.6.32 Released

diegocg writes "Linus Torvalds has officially released version 2.6.32 of the Linux kernel. New features include virtualization memory de-duplication, a rewrite of the writeback code that is faster and more scalable, many important Btrfs improvements and speedups, ATI R600/R700 3D and KMS support and other graphics improvements, a CFQ low-latency mode, tracing improvements including a 'perf timechart' tool that tries to be a better bootchart, soft limits in the memory controller, support for the S+Core architecture, support for Intel Moorestown and its new firmware interface, run-time power management support, and many other improvements and new drivers. See the full changelog for more details."
  • by c0l0 ( 826165 ) * on Thursday December 03, 2009 @10:08AM (#30310310) Homepage

    I'm not perfectly happy with the term "virtualization memory de-duplication". Linux 2.6.32 introduces what is called "KSM", an acronym that is not to be confused with "KMS (Kernel Mode Setting)" and expands to "Kernel Samepage Merging" (though other expansions with similar meaning have already emerged). It does not target virtualization or hypervisors in general (and QEMU/KVM in particular) alone: KSM can help save memory in any workload where many processes share a great deal of data in memory. With KSM, you can mark a region of memory as (potentially) shared between processes and have redundant parts of that region collapse into a single copy; KSM automagically branches out a distinct, exclusively modified copy if one of the processes sharing those pages decides to modify part of the data on its own. From what I've seen so far, all that's needed for an app to benefit from KSM is a call to madvise(2) with some special magic, and you're good to go (see the sketch below).

    I really like how Linux is evolving in the 2.6 line. Now if LVM snapshot merging really makes it into 2.6.33, I'll be an even happier gnu-penguin a few months down the road!
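
    A minimal sketch of that madvise(2) call, assuming a 2.6.32 kernel built with CONFIG_KSM and the ksmd thread switched on via /sys/kernel/mm/ksm/run; the buffer size and fill pattern here are arbitrary, for illustration only:

        #define _GNU_SOURCE              /* for madvise() and MADV_MERGEABLE */
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/mman.h>

        int main(void)
        {
            size_t len = 64 << 20;                 /* 64 MB of redundant data */
            void *buf;

            /* KSM merges whole pages, so hand it page-aligned memory. */
            if (posix_memalign(&buf, sysconf(_SC_PAGESIZE), len))
                return 1;
            memset(buf, 0, len);

            /* The "special magic": offer the region to Kernel Samepage
             * Merging. ksmd collapses identical pages from all opted-in
             * regions into one read-only copy; a later write transparently
             * gets a private copy back (copy-on-write). */
            if (madvise(buf, len, MADV_MERGEABLE))
                return 1;                          /* e.g. no CONFIG_KSM */

            pause();                               /* keep the mapping alive */
            return 0;
        }

    Note that, at least as of 2.6.32, ksmd only scans regions that have opted in this way, which also answers Hatta's question below: two instances of an arbitrary application won't be merged unless the application itself (or, for guests, qemu/kvm) makes the call.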

    • Re: (Score:3, Informative)

      by 1s44c ( 552956 )

      I'm not perfectly happy with the term "virtualization memory de-duplication".

      The term is a little nonspecific. However, KSM is truly wonderful, and I look forward to saving a ton of physical memory across my KVM machines once the kvm/qemu userland tools catch up.

      This is already in Red Hat's virtualization stack.

      • by Bert64 ( 520050 ) <bert@slashdot.fir e n z e e . c om> on Thursday December 03, 2009 @11:56AM (#30312186) Homepage

        I have a system running a 2.6.32-rc6 kernel with KSM and the latest kvm (which includes support for this, but it's turned off by default)... Because I run a number of virtual images that boot the same kernel and system libs (different apps, of course), it saved me over 1 GB of memory on the host.

        • How much memory did the images use in total before the change (i.e. how big were the savings in percent)?

          Of course, every byte is precious as long as it doesn't affect performance, but it would be interesting to know how many more images one can expect to run on one computer. :)
        • by Hatta ( 162192 )

          kvm (which includes support for this,

          Does each application need to support KSM? I can't just run two instances of the same arbitrary application and let the kernel figure out what to do?

    • KSM (Score:2, Funny)

      by svtdragon ( 917476 )

      Linux 2.6.32 introduces what is called "KSM"

      WHAT!? I know Linux users are pretty militant (myself among them), but to implement terrorism [wikipedia.org] in the kernel?

      Please tell me it's at least built as a *module* by default!

    • That sounds interesting. Wouldn't this also largely resolve the main drawback of static libraries (having multiple copies of the same code in memory)?

  • 2.6.32's KMS and R600/700 improvements are expected to give a huge 3D performance boost to the open source ATI drivers - can't wait to test this!

    • I'm excited about the ATI improvements making it into the kernel too. Wonder if Ubuntu Karmic will pick up the new kernel after some testing?

      • by Cyberax ( 705495 )

        No, you'll have to wait for Lucid Lynx (Ubuntu 10.04) for these changes.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        No, as Ubuntu releases are version-stable and backport security fixes only (Firefox being the exception to that rule). You may install the kernel from the mainline kernel PPA though: http://kernel.ubuntu.com/~kernel-ppa/mainline/ [ubuntu.com]
        Just fetch the .deb that fits your architecture and install it via `sudo dpkg -i /path/to/your/downloaded/archive.deb`.

    • 2.6.32's KMS and R600/700 improvements are expected to give a huge 3D performance boost to the open source ATI drivers - can't wait to test this!

      This is indeed excellent, although it needs to be backed up by support from the X driver. Currently I am running Ubuntu Karmic on a Radeon HD 3600 series card (RV635, which counts as an R600-series part - quite confusing) and 3D support sucks. Both the "radeon" and "radeonhd" drivers have only basic support for these chips - desktop effects don't really work.

      I was using the fglrx driver on Jaunty, which worked OK, but it seems to be getting worse with every release. In Karmic it was so broken I just gave up on it.

    • Re: (Score:3, Informative)

      by RubberDuckie ( 53329 )

      The Fedora team has backported the KMS and R600/700 improvements to FC12, which I've been running for a few weeks now. While it's better than nothing, 3D performance still has a way to go; my old Heretic II game is still unacceptably slow.

      The proprietary ATI drivers usually took the sacrifice of a goat to get working, but their performance was far superior. Too bad ATI won't support recent releases of Fedora.

    • Re: (Score:3, Informative)

      by bcmm ( 768152 )
      I've been using the RCs of this kernel, and the Radeon r600 support is already much faster and more stable than fglrx.
    • by fritsd ( 924429 )
      it's very nice but rough around the edges, i.e. not fully OpenGL 2 yet. OpenArena plays well but Nexuiz is, ah... extra challenging at the moment :-)
  • by sheepweevil ( 1036936 ) on Thursday December 03, 2009 @10:13AM (#30310372) Homepage
    All of these features are cool and all, but does it solve the well-known XKCD 619 [xkcd.com] bug?
    • by Anonymous Coward on Thursday December 03, 2009 @10:20AM (#30310460)

      http://imgur.com/73EAu [imgur.com] - RESOLVED, FIXED for quite a while now!

    • Re: (Score:2, Redundant)

      by Cyberax ( 705495 )

      Yep, it's fixed.

      See here: http://imgur.com/73EAu [imgur.com]

    • by Fished ( 574624 ) <amphigory@nOspam.gmail.com> on Thursday December 03, 2009 @10:23AM (#30310506)

      Like the strip, and it raises a valid point. The bottom line is that kernel development advances more quickly than user interface and applications for the same reason that physics advanced more quickly than say ... psychology. That is, because developing a faster kernel is a much easier problem than developing a fun, usable desktop environment. It's easier to write, easier to test, and easier to debug. People tend to gravitate towards problems that they think they can solve--and ignore the problems they don't understand or don't want to deal with.

      Personally, I think that the best way forward for Linux on the desktop would be to take GNUstep to the next level. There's a LOT of code there already written, and with a bit more work you might be able to have source-level compatibility with Mac OS X--which would give you access to a bunch of commercial apps. And, most importantly, the ability of the OpenStep API to produce a world class desktop--best in the world in fact--is proven. After 10 years, I don't think that either KDE or GNOME have really done all that much for Linux on the desktop... it's time to try a different approach.

      Of course, I'm just kibbitzing, not bringing code. So what right do I have to say anything?

      • Re: (Score:2, Insightful)

        by tulcod ( 1056476 )
        Looks like you didn't get your psychology right. The problem is that a desktop environment is, in fact, much /easier/ to create than kernel improvements are, and that makes it extremely boring. Desktop environments are trivial, but dull, to make. They are a perfect example of a job you should be getting paid for.
      • Re: (Score:2, Insightful)

        by suggsjc ( 726146 )

        People tend to gravitate towards problems that they think they can solve--and ignore the problems they don't understand or don't want to deal with.

        I think that should have read

        Engineers tend to gravitate towards problems that they think they can solve--and ignore the problems they don't understand or don't want to deal with.

      • Re: (Score:3, Insightful)

        Personally, I think that the best way forward for Linux on the desktop would be to take GNUstep to the next level.
        [...]
        After 10 years, I don't think that either KDE or GNOME have really done all that much for Linux on the desktop...

        Purely technical solutions to marketing and promotional problems rarely work, so it's unsurprising that GNOME and KDE haven't done much for Linux on the desktop, since their marketing and promotional efforts are pretty minor. Of course, switching technical approaches to focus on GNUstep

      • by shutdown -p now ( 807394 ) on Thursday December 03, 2009 @12:03PM (#30312350) Journal

        That is, because developing a faster kernel is a much easier problem than developing a fun, usable desktop environment.

        I disagree, it's not an easier problem. It is, however, a much more interesting problem to solve, especially to skilled hackers.

        One other aspect here is that the target audience is bigger for the kernel. Desktop uptake is still very low, but the kernel is used by any device that runs Linux, whether it's a router, a smartphone, a server, or a netbook. A side effect is that kernel hacking is better financed than desktop development, as there are more commercial players interested specifically in the kernel who couldn't care less about KDE or Gnome.

        • by abigor ( 540274 )

          Very true. Desktop Linux is a bit player when compared to the overall use of Linux, and it's truly a hobbyist's domain.

        • Re: (Score:3, Interesting)

          by mcrbids ( 148650 )

          Desktop uptake is still very low, but the kernel is used by any device that runs Linux, whether it's a router, a smartphone, a server, or a netbook. A side effect is that kernel hacking is better financed than desktop development, as there are more commercial players interested specifically in the kernel who couldn't care less about KDE or Gnome.

          If I hadn't already replied in this article, I probably would have modded you up. This point is hard for many to understand, but it's quite possible that the to

        • I disagree, it's not an easier problem. It is, however, a much more interesting problem to solve, especially to skilled hackers.

          Whether or not it's an easier problem to solve overall, it's an easier problem for the kind of people who actually write code to define concretely, and validate solutions to, since the skill set needed to do that is closely related to the skill set of programmers. This is important, because to successfully solve a problem (or, in the case of problems that progres

      • by theCoder ( 23772 )

        ...developing a faster kernel is a much easier problem than developing a fun, usable desktop environment.

        While I agree with tulcod's response that kernel development is usually much harder than desktop development, there is one important difference: a faster kernel is a measurable goal. While you might be able to make a "fun, usable desktop environment" for a single person, and maybe even for a good percentage of the population, you will never, ever satisfy everybody. Half the people want more op

      • Re: (Score:3, Interesting)

        by javilon ( 99157 )

        Well, GNOME and KDE (I prefer one of them, but it is not relevant to this post) have done lots for Linux on the desktop. I have been running them for a number of years because I find them more pleasant to use than Windows. And I am not alone.

        And the millions of people using it are doing so against active attacks from a number of organizations. Mainly closed software companies, and also (mainly in the past) political organizations and governments.

      • by bcmm ( 768152 ) on Thursday December 03, 2009 @01:53PM (#30314172)
        You're missing the bit where Flash is closed-source and the people that want it to work properly can't make it happen, whereas the people who can make it work don't want it to happen.
      • I wouldn't mind seeing GNUstep underpinnings with a Mono binding that adds in XAML support. Seems to me like a decent base platform for higher-level abstractions.
    • There is nothing the kernel developers can do about this. On my machine, a dual Opteron 2218 with dual Nvidia 8800 GTS video cards driving 4 monitors, just playing Flash on a single screen will bring one CPU core to its knees; normal-sized Flash videos will peg a core and sometimes drop frames. The same video, saved to a local file and played with xine/mplayer etc., uses 1-2% CPU at the lowest CPU frequency (1 GHz).

      Linux is perfectly capable of smooth video playback at l

      • by Shark ( 78448 )

        It makes one wonder, with Microsoft encroaching on Adobe's turf like this, shouldn't they at least try to cover their ass and give great support on non-MS platforms? I really don't get what's going on as far as their strategy goes... Or maybe there just isn't any strategic planning.

  • by Anonymous Coward

    I'm glad to see Btrfs improving so rapidly. I hope popular distros start including support for it, but more importantly, start using it as the default filesystem.

    It's time for the ext-based filesystems to die. They are a technology that was obsolete a decade ago.

    ReiserFS was set to kill them off, but unfortunately found another victim first... JFS and XFS only work well in certain high-end niches. But Btrfs is much better as an all-around filesystem, which is why it has a chance to finally put an end to ext.

    • Re: (Score:3, Insightful)

      ReiserFS was set to kill them off, but unfortunately found another victim first

      Too soon!

    • How does Btrfs compare to ZFS? I've been using ZFS-on-FUSE, and absolutely love the incredible data integrity and volume management features that it provides. The new support for deduplication will also be wonderful once implemented.

      Of course, the performance and the idea of trusting my data to FUSE leave much to be desired.

      (On the downside, I'm peeved that Btrfs is GPL licensed, which will prevent it from becoming "the one true filesystem" from here on out. Windows users will be stuck with NTFS, Linux users will get Btrfs, Mac users will get whatever Apple is secretly working on, and the BSD/Solaris camp will get to keep ZFS. None of them will be compatible, and FAT32 somehow remains the only viable option for removable media.)

      • Re: (Score:3, Insightful)

        by Abreu ( 173023 )

        What prevents other, non-GPL operating systems from using Btrfs?

        Writing drivers for a filesystem is not a "derivative work", is it?

        • by Fished ( 574624 )
          It wouldn't be a derivative work to write a driver if you did so from scratch. But doing so from scratch is... shall we say, a "non-trivial problem." It would be better to have a BSD-licensed filesystem that could be relicensed as appropriate: GPL for Linux, proprietary for Windows and Mac, BSD for... ahem... BSD, etc.
        • by Urkki ( 668283 )

          What prevents other, non-GPL operating systems from using Btrfs?

          Writing drivers for a filesystem is not a "derivative work", is it?

          How good, accurate, and up-to-date is the specification of Btrfs?

          If the only real, accurate "specification" is the source code, then it's damn hard to create a compatible and reliable new implementation from scratch. File systems are complex, concurrent (meaning many files being accessed simultaneously), and both performance-critical and reliability-critical. Getting it right is hard, while getting it wrong is bad, so there need to be really good reasons to even try, instead of using something that already exists.

          • by Abreu ( 173023 )

            I think I have a (theoretical) solution to that.

            Let's say the copyright owners of Btrfs create the Windows/OSX/Solaris/AIX drivers?

            Or, more realistically, the copyright owners of Btrfs grant an interested third party a special license to create an LGPL'd Btrfs driver?

            • by Urkki ( 668283 )

              I think I have a (theoretical) solution to that.

              Let's say the copyright owners of Btrfs create the Windows/OSX/Solaris/AIX drivers?

              Or, more realistically, the copyright owners of Btrfs grant an interested third party a special license to create an LGPL'd Btrfs driver?

              There can't be a "special license" to do an LGPL version. Once such a version is out, well, it's out. So they might as well make the whole thing LGPL (which IMHO would be a good idea).

              But if there are a lot of developers, getting everybody to agree (or even reaching everybody) is a lot of work. Replacing, i.e. rewriting, the code of those who don't agree might be an option, depending on how much of it there is.

              But I'm pretty sure they actually thought about it and chose GPL over LGPL because they wanted to, so they're

              • by Abreu ( 173023 )

                There can't be a "special license" to do an LGPL version. Once such a version is out, well, it's out.

                Why not? If the copyright owner* grants a license to do an LGPL driver for a non-Linux OS, it doesn't necessarily mean the entire project has to be cross-licensed as LGPL, does it?

                And yes, for the record, cross-licensing the entire thing as GPL/LGPL might be a good idea too.

                * This is assuming "the copyright owner" is just one person or a small group of people who agree on doing such a thing.

      • On the downside, I'm peeved that Btrfs is GPL licensed, which will prevent it from becoming "the one true filesystem" from here on out.

        Well, ZFS itself has a GPL-incompatible license, but that doesn't prevent it from being usable on Linux as an independent user-space process through FUSE.
        The same approach could be imagined on a non-GPL-compatible OS: ship the GPL implementation as a standalone userspace daemon.
        (Which is not a bad idea anyway - it gives more freedom to upgrade. See the sketch at the end of this comment.)

        Windows users will be stuck with NTFS

        No matter what. Even if some kernel guru released a tri-licensed LGPL/BSD/proprietary perfect filesystem, Microsoft would still be using NTFS and promising WinFS soon for wha
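
        To make the "standalone userspace daemon" idea concrete, here is a toy read-only filesystem against the FUSE 2.x C API. The file name, its contents, and the toy_* identifiers are all invented for illustration; a real Btrfs-over-FUSE port would of course be vastly more involved. The point is only the mechanism: the daemon is an ordinary process, and the kernel forwards VFS calls to it through /dev/fuse.

            #define FUSE_USE_VERSION 26
            #include <fuse.h>    /* build with `pkg-config --cflags --libs fuse` */
            #include <string.h>
            #include <errno.h>

            static const char *msg = "filesystems can live in userspace\n";

            /* Report a root directory containing one read-only file. */
            static int toy_getattr(const char *path, struct stat *st)
            {
                memset(st, 0, sizeof(*st));
                if (strcmp(path, "/") == 0) {
                    st->st_mode = S_IFDIR | 0755;
                    st->st_nlink = 2;
                } else if (strcmp(path, "/hello") == 0) {
                    st->st_mode = S_IFREG | 0444;
                    st->st_nlink = 1;
                    st->st_size = strlen(msg);
                } else {
                    return -ENOENT;
                }
                return 0;
            }

            static int toy_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                                   off_t off, struct fuse_file_info *fi)
            {
                if (strcmp(path, "/") != 0)
                    return -ENOENT;
                fill(buf, ".", NULL, 0);
                fill(buf, "..", NULL, 0);
                fill(buf, "hello", NULL, 0);
                return 0;
            }

            static int toy_read(const char *path, char *buf, size_t size, off_t off,
                                struct fuse_file_info *fi)
            {
                size_t len = strlen(msg);
                if (strcmp(path, "/hello") != 0)
                    return -ENOENT;
                if ((size_t)off >= len)
                    return 0;
                if (off + size > len)
                    size = len - off;
                memcpy(buf, msg + off, size);
                return (int)size;
            }

            static struct fuse_operations toy_ops = {
                .getattr = toy_getattr,
                .readdir = toy_readdir,
                .read    = toy_read,
            };

            int main(int argc, char *argv[])
            {
                /* All filesystem logic stays in this userspace process. */
                return fuse_main(argc, argv, &toy_ops, NULL);
            }

        Run it as `./toyfs /mnt/point` and unmount with `fusermount -u /mnt/point`; the filesystem lives and dies with the process, whatever its license.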

        • Re: (Score:3, Informative)

          "For removable media, UDF could be a good candidate too. It's getting widespread availability, specially since Microsoft added support for writing on Vista and Win7."

          Getting slightly off-topic, but after the recent FAT patent-trolling, this interests me.

          I went and dug up the sadly-neglected udftools package and installed it. Sure enough, the following command (found with a bit of Googling) seems to produce a filesystem on my SD card that can be read from and written to just fine by Linux, Mac OS X (Leopard

        • I really think that MS has gotten a lot better about their NIH approach, though they tend to buy out and hire anyone developing cool open source that happens to work on Windows.
      • by Bert64 ( 520050 )

        MS will never support a filesystem they don't control unless forced to, and they certainly won't make it the default...

        BSD, Solaris and OS X all support UFS, as does Linux... Linux also supports the HFS+ filesystem currently used by OS X; not sure if BSD/Solaris do, but there are BSD-licensed drivers for it, so there's no reason not to.

          BSD, Solaris and OS X all support UFS, as does Linux... Linux also supports the HFS+ filesystem currently used by OS X; not sure if BSD/Solaris do, but there are BSD-licensed drivers for it, so there's no reason not to.

          Linux only sort of does; it depends on the UFS flavor. I can't reliably mount a CF card read/write from pfSense (FreeBSD) under Linux.

      • If Btrfs's design proves to be good, there is no reason there can't be both GPL and non-GPL implementations of it. I think one requirement for a universal filesystem to succeed is having more than one implementation.

        FAT32 will have to die in the market when people get sick of files over 4 GB getting truncated. The end is near for FAT.

      • Re: (Score:3, Insightful)

        I've been using ZFS-on-FUSE

        Are you insane? You probably just cut the performance of your drives by 90%.

        Of course, the performance and the idea of trusting my data to FUSE leave much to be desired.

        Oh, sorry - you're informed AND insane. :P

      • I highly doubt we'll see Microsoft make any effort to implement anything that would make Windows play nice with Linux or any other operating system.
      • (On the downside, I'm peeved that Btrfs is GPL licensed, which will prevent it from becoming "the one true filesystem" from here on out. Windows users will be stuck with NTFS, Linux users will get Btrfs, Mac users will get whatever Apple is secretly working on, and the BSD/Solaris camp will get to keep ZFS. None of them will be compatible, and FAT32 somehow remains the only viable option for removable media.)

        You may as well stop holding your breath now. Microsoft will never support a general-purpose filesystem

    • It's time for the ext-based filesystems to die. They are a technology that was obsolete a decade ago.

      ReiserFS was set to kill them off, but unfortunately found another victim first... JFS and XFS only work well in certain high-end niches.

      In my experience, JFS offers most of the benefits of ReiserFS while being lighter on the CPU, so it is definitely not just for the high end. It has also turned out more stable than Reiser, though in recent years this has evened out.

      On some of my machines there have been consistent problems with using JFS on the root partition, but this may be due to the init scripts. No data has been lost, though, and on non-root partitions JFS has consistently been rock solid for me. This includes a number of x86, PowerPC

    • If you look at filesystem benchmarks, JFS is often not the fastest, but it scores best in terms of CPU usage. I've found that on a netbook, which has a very fast disk (i.e. flash) and not much CPU, JFS is actually the best option. YMMV of course; I came to this conclusion before ext4 was released, and I haven't tried pre-release filesystems like Btrfs.

  • by delire ( 809063 ) on Thursday December 03, 2009 @10:16AM (#30310414)
    This 'per-backing-device writeback' is pretty significant. I'm sure the feature-film and database industries especially will love it:

    The new system has much better performance in several workloads: in a benchmark with two processes doing streaming writes to a 32 GB file on 5 SATA drives pushed into an LVM stripe set, XFS was 40% faster and Btrfs 26% faster. A sample ffsb workload that does random writes to files was found to be about 8% faster on a simple SATA drive during the benchmark phase, and file layout is much smoother in the vmstat stats. An SSD-based writeback test on XFS performs over 20% better as well, with throughput very stable around 1 GB/sec, where pdflush only manages 750 MB/sec and fluctuates wildly while doing so. Random buffered writes to many files behave a lot better, as do random mmap'ed writes. A streaming-vs-random-writer benchmark went from a few MB/s to ~120 MB/s. In short, performance improves in many important workloads.

  • by Sockatume ( 732728 ) on Thursday December 03, 2009 @10:19AM (#30310446)

    rewrite of the writeback code

    So you didn't de-lace the interlace or uncabulate the turboencabulator? I'm now about 85% convinced that the open source movement is just making shit up.

    • and I'm now about 93.8523% convinced you're making up statistics.
    • So you didn't de-lace the interlace or uncabulate the turboencabulator?

      Dude, don't be ridiculous, of course they uncabulated the turboencabulator, how else could they contraplectify the apoplectifier?

  • If KSM puts the KVM module on par with Xen in terms of performance, then I think the writing is on the wall for Xen's demise.
    • by 1s44c ( 552956 ) on Thursday December 03, 2009 @10:54AM (#30310976)

      If KSM puts the KVM module on par with Xen in terms of performance, then I think the writing is on the wall for Xen's demise.

      No. Not at all. KSM saves memory but hurts performance. It shares memory across virtual machines to save memory, not to make them faster.

      Xen can't share memory across virtual machines; it's just not put together like that.

      Performance is about identical for KVM and Xen.

      • Being somewhat ignorant of the inner workings of Xen, VMware, KVM and the like, I find the very idea that VMs would share memory at all rather risky in terms of their being sandboxed from each other. Besides a hypervisor being able to let many VMs run basically any OS, there would also seem to be a security element involved, e.g. running Windows in one VM, Linux in another and NetWare in yet another: the three would not have the ability to know the others were there and would therefore be safe from being hacked

        • by Bert64 ( 520050 ) <bert@slashdot.fir e n z e e . c om> on Thursday December 03, 2009 @12:14PM (#30312544) Homepage

          Instead of storing multiple copies of the same data in memory, it stores a single read-only copy and points the others at it. If you try to write to it, it traps, creates a new read/write instance which is exclusive to you, and then points you at that...

          Shared libraries work in much the same way, and they have been implemented pretty securely for many years now.
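
          One way to watch that trap-and-copy behaviour is through KSM's counters in /sys/kernel/mm/ksm/, present since 2.6.32. A rough sketch, assuming a CONFIG_KSM kernel with ksmd running; LEN, the sleep durations, and ksm_counter() are all invented for illustration (the latter is a local helper, not a kernel API):

              #define _GNU_SOURCE            /* for madvise() and MADV_MERGEABLE */
              #include <stdio.h>
              #include <stdlib.h>
              #include <string.h>
              #include <unistd.h>
              #include <sys/mman.h>

              #define LEN (16 << 20)         /* 16 MB, arbitrary */

              /* Read one KSM counter from sysfs; returns -1 on failure. */
              static long ksm_counter(const char *name)
              {
                  char path[128];
                  long v = -1;
                  FILE *f;
                  snprintf(path, sizeof(path), "/sys/kernel/mm/ksm/%s", name);
                  if ((f = fopen(path, "r"))) {
                      if (fscanf(f, "%ld", &v) != 1)
                          v = -1;
                      fclose(f);
                  }
                  return v;
              }

              int main(void)
              {
                  long page = sysconf(_SC_PAGESIZE);
                  char *a, *b;

                  posix_memalign((void **)&a, page, LEN);
                  posix_memalign((void **)&b, page, LEN);
                  memset(a, 0x42, LEN);               /* two identical regions */
                  memset(b, 0x42, LEN);
                  madvise(a, LEN, MADV_MERGEABLE);    /* opt both in */
                  madvise(b, LEN, MADV_MERGEABLE);

                  sleep(30);                          /* let ksmd scan and merge */
                  printf("pages_sharing: %ld\n", ksm_counter("pages_sharing"));

                  b[0] = 0x43;                        /* write traps, page splits */
                  sleep(30);
                  printf("pages_sharing: %ld\n", ksm_counter("pages_sharing"));
                  return 0;
              }

          The second number should drop by one page relative to the first: exactly the trap-and-private-copy dance described above, with every other page still merged.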

          • I grok the lib concept: each program gets its own data segment, the code runs as a single image (if you will), and that conserves memory. Each data area is private to the program unless it is explicitly shared through some IPC mechanism.

            This is interesting, as it seems like a way to write malware. If I wanted to deliberately run the machine into the ground, I could just look for those data areas and keep attempting to write to them, forcing the OS to keep duplicating them over and over again. Now

            • This isn't possible, as the guest OS cannot get more memory allocated to itself than has been assigned to it. Let's say you have five guest machines that "share" all their memory through this new feature. If you gain access to one of the machines and completely rewrite all the memory inside it, you still have only doubled the memory usage, as the host OS allocates the new memory for the guest OS. Rewriting the same memory multiple times will not increase memory usage any more than that, as it will s
      • by Hatta ( 162192 )

        KSM saves memory but hurts performance.

        If I have plenty of memory, can I easily disable KSM?

        • by 1s44c ( 552956 )

          KSM saves memory but hurts performance.

          If I have plenty of memory, can I easily disable KSM?

          Disabling or enabling this should be no harder than one command-line option to qemu.

          Disabling it is the right thing to do if you don't need the memory-saving benefit; otherwise it will waste CPU and throw your stuff out of data caches. I think most people will be more concerned with saving memory though. I know I am.
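
          Beyond the per-guest qemu option, there is also a host-wide switch: KSM is controlled through /sys/kernel/mm/ksm/run (0 = stop, 1 = run, 2 = stop and unmerge everything). A sketch in C; ksm_set_run() is an illustrative local helper, not a kernel API, and in practice `echo 0 | sudo tee /sys/kernel/mm/ksm/run` does the same:

              #include <stdio.h>

              /* Write the mode to KSM's sysfs control file; needs root. */
              static int ksm_set_run(int mode)
              {
                  FILE *f = fopen("/sys/kernel/mm/ksm/run", "w");
                  if (!f)
                      return -1;   /* no CONFIG_KSM, or not running as root */
                  fprintf(f, "%d\n", mode);
                  return fclose(f);
              }

              int main(void)
              {
                  /* Plenty of RAM and want the CPU cycles back: turn KSM off. */
                  return ksm_set_run(0) ? 1 : 0;
              }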

  • KMS (Score:4, Informative)

    by shentino ( 1139071 ) <shentino@gmail.com> on Thursday December 03, 2009 @10:59AM (#30311084)

    Kernel mode setting is great, except for the fact that all 3 major video card vendors decided to nix VGA console support.

  • time saving makefile (Score:5, Interesting)

    by inode_buddha ( 576844 ) on Thursday December 03, 2009 @01:23PM (#30313706) Journal
    I'm very interested in the new make target, specifically "make localmodconfig". This target checks your current .config and also checks whatever modules are currently loaded, then creates a new config file which builds only the modules you are currently using. This could be a great time and space savings, as opposed to building everything and the kitchen sink as distros tend to do. It gives you a fairly easy and sane way to truly tweak your kernel to fit your box, or script it to fit a whole bunch of non-similar boxes.
    • by bcmm ( 768152 ) on Thursday December 03, 2009 @01:49PM (#30314116)
      That sounds potentially very useful, but beware: if it works the way you're describing, it could remove, for example, support for USB MSC if your USB stick wasn't plugged in when you ran it.
    • Re: (Score:3, Informative)

      by gringer ( 252588 )

      There's also a "make localyesconfig" that will be even more useful for me, particularly for removing the need for initrd. I can now do a "make localyesconfig", and not have to try to guess what particular combination of compiled-in options is required for the computer to start up, then add in the additional things as modules.
