
 



Upgrades Linux

Linux 2.6.38 Released 159

darthcamaro writes "The new Linux 2.6.38 kernel is now out, and it's got a long list of performance improvements that should make Linux a whole lot faster. The kernel includes support for Transparent Huge Pages, Transmit Packet Steering (XPS), automatic process grouping, and a new RCU (Read-Copy-Update)-based path name lookup. '"This patch series was both controversial and experimental when it went in, but we're very hopeful of seeing speedups," James Bottomley, distinguished engineer at Novell, said. "Just to set expectations correctly, the dcache/path lookup improvements really only impact workloads with large metadata modifications, so the big iron workloads (like databases) will likely see no change. However, stuff that critically involves metadata, like running a mail server (the postmark benchmark), should improve quite a bit."'"
This discussion has been archived. No new comments can be posted.


  • Kernel Newbies link (Score:5, Informative)

    by Anonymous Coward on Tuesday March 15, 2011 @05:42PM (#35497644)
    Informative as usual: http://kernelnewbies.org/Linux_2_6_38 [kernelnewbies.org]
    • by Anonymous Coward

      "B.A.T.M.A.N. Mesh protocol"
      Now things are getting good.

  • Isn't this the version that 200-line patch was slated for?
    • Isn't this the version that 200-line patch was slated for?

      I'm pretty sure that's what "automatic process grouping" is.

      • aka "the wonder patch".

        As someone who knows bugger all about Linux, can anyone confirm whether that patch will have any kind of impact on Android devices, or is it the kind of thing only a desktop user will see a difference with?

        • by blair1q ( 305137 )

          i think that's the wonder of it

          because i wonder what it will do, too

          albeit, i haven't followed kernel fixes for years

          i imagine someone's found a way to fake priority by treating a group of processes as one process when allocating cpu, because it solves one problem someone was having while causing someone else a problem

          the example was forking 20 compile processes. normally that's a big speedup because when one has to pend on some i/o, another can pick up and do some work on your overall compile. with this new scheduling instead of 20 new processes crowding the few existing processes into much less cpu, now the 20 processes only act like one new process

          which makes me wonder why you'd fork 20 processes any more, since they'll have only one process' share of the resource. might as well run them sequentially; it'll take almost exactly as long

          • Re:200-line patch (Score:5, Informative)

            by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Tuesday March 15, 2011 @06:25PM (#35498114) Homepage Journal

            the example was forking 20 compile processes. normally that's a big speedup because when one has to pend on some i/o, another can pick up and do some work on your overall compile. with this new scheduling instead of 20 new processes crowding the few existing processes into much less cpu, now the 20 processes only act like one new process

            which makes me wonder why you'd fork 20 processes any more, since they'll have only one process' share of the resource. might as well run them sequentially; it'll take almost exactly as long

            Say you have regular desktop programs that take some small amount of CPU, and you want to be able to compile things as quickly as possible without making your music skip or your window manager get laggy. Before this you would have to guess at the right number of compile processes to run; too few and it takes longer and doesn't use all your CPU, too many and your desktop gets laggy. Now, the scheduler treats all of the compiler processes as a group, and lets your music player and window manager steal CPU cycles from them more easily -- so you can run more processes and keep the CPU busy, without worrying about your music skipping.
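For anyone who wants to check whether this behavior is switched on for their kernel, a small shell sketch (the sysctl name below is the one the 2.6.38 autogroup patch uses; whether it exists depends on the kernel config):

```shell
# Check whether automatic process grouping ("the 200-line patch") is
# compiled in and enabled; the procfs path is the standard one.
f=/proc/sys/kernel/sched_autogroup_enabled
if [ -r "$f" ]; then
    autogroup_msg="autogroup enabled: $(cat "$f")"   # 1 = on, 0 = off
else
    autogroup_msg="kernel built without CONFIG_SCHED_AUTOGROUP"
fi
echo "$autogroup_msg"
# with it on, something like 'make -j20' competes as a single group
```

With it enabled, the big parallel build in the example above gets treated as one scheduling entity automatically, no manual nice-ing required.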

          • Re:200-line patch (Score:5, Informative)

            by Tetsujin ( 103070 ) on Tuesday March 15, 2011 @06:27PM (#35498140) Homepage Journal

            i think that's the wonder of it

            because i wonder what it will do, too

            albeit, i haven't followed kernel fixes for years

            i imagine someone's found a way to fake priority by treating a group of processes as one process when allocating cpu, because it solves one problem someone was having while causing someone else a problem

            the example was forking 20 compile processes. normally that's a big speedup because when one has to pend on some i/o, another can pick up and do some work on your overall compile. with this new scheduling instead of 20 new processes crowding the few existing processes into much less cpu, now the 20 processes only act like one new process

            which makes me wonder why you'd fork 20 processes any more, since they'll have only one process' share of the resource

            That's not quite right.

            Basically, there are lots of conditions that could cause any process to give up its time slice. Your network application may be waiting for packets to process. Your video player may have decoded all the compressed video it needs for the moment, etc. The idea here is that certain programs, even if they're not doing a whole lot of work at any given time, still need frequent service so they can keep doing what they need to do.

            If your machine were running 3 processes (in separate groups) and you ran another 20 in a single group, those 20 processes wouldn't wind up limited to 25% of the CPU time. In all likelihood, they'd continue using the lion's share of the machine's resources until the job is done.

            What this scheme does do is help out those other three processes: instead of getting 1 time slice each out of every 23 to see if they have work to do, they'll get one out of every four (via group scheduling). If they have a bunch of work to do, this means they'll effectively have higher priority than the individual processes in that big job. But if they're largely idle, the big job will be able to consume the left-over CPU time.

            So it's not a perfect system, and it's not any kind of CPU quota system or QoS system, it doesn't really restrict what processes on the system can do. It's a hint for the scheduler, to try and give priority to processes that need it.
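The slice counts in the comment above can be sanity-checked with a little shell arithmetic. This is an idealized fair-share view; the real CFS scheduler is weighted and dynamic, so treat it as illustration only:

```shell
# 3 standalone processes plus one group of 20 grouped processes.
lone=3
grouped=20
per_process=$((100 / (lone + grouped)))   # old view: 1 of 23 slices each, ~4%
per_group=$((100 / (lone + 1)))           # new view: 4 groups, 1 of 4 slices, 25%
echo "per-process share: ${per_process}%  per-group share: ${per_group}%"
```

So each lone process goes from roughly 1-in-23 of the slices to 1-in-4, which is exactly the "1 out of every 4 instead of 1 out of every 23" point made above.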

          • Say you have a machine with 16 cores total. Person A runs a compile that runs on 16 processes. Person B runs a program that runs on 1 processor.

            The old model says that Person A gets 16 shares, and Person B gets 1 share of CPU time. The new model says person A and B both get 50% share of the CPU time.

            In the old model, Person A will hog all of 15 cores, and end up using about half of the 16th core. Person B will only be able to use 50% of one core. In the new model, Person A will be able to use all of 15

            • by lee1 ( 219161 )
              Does this mean that it's as fast as BeOS now?
            • by blair1q ( 305137 )

              Now that's an interesting wrinkle. How does this scheme really interact with multiple processors and hyperthreading? If I have a 4-core, 2-hyperthread system (8 total effective schedulable CPU resources), and I have 3 processes running in 3 sessions and then start 16 processes in another single session, all at the same priority, how are those processes scattered across my 8 hardware threads when all of them get semaphored to wake up?

              I'm starting to think more that this fix, while not deleterious to any previous

                Agreed. Most of the time it shouldn't make a difference. It's more advantageous for the case when somebody or some process spins off 100s of processes, or a server service goes out of control and spins off 1000s of processes. Before, it would eat up all the CPU cycles, preventing any of the other critical services from doing anything, and likely crashing the server. Now that out-of-control service has no more resource potential than some other service that's functioning correctly.

                It's not like this patch c

          • the example was forking 20 compile processes. normally that's a big speedup

            It's a forking hell.

        • by Junta ( 36770 )

          I would say most desktop users won't even notice this. It may prevent fork-heavy things like Chrome from starving other things, but the best case to demonstrate it was 'make -j ' in a terminal, which isn't particularly indicative of most user load.

          • I would say most desktop users won't even notice this. It may prevent fork-heavy things like Chrome from starving other things, but the best case to demonstrate it was 'make -j ' in a terminal, which isn't particularly indicative of most user load.

            Umm.. a poster above spoke about this patch in terms of Person A and Person B, so I am assuming it's a patch which distributes computing resources more fairly among users accessing the system simultaneously. So, no desktop user will ever notice this, as only one user ever accesses the machine. If this is what it is, it is extremely similar to a project I did two semesters back. xD

              Umm.. a poster above spoke about this patch in terms of Person A and Person B, so I am assuming it's a patch which distributes computing resources more fairly among users accessing the system simultaneously. So, no desktop user will ever notice this, as only one user ever accesses the machine. If this is what it is, it is extremely similar to a project I did two semesters back. xD

              It doesn't really have anything directly to do with people. Instead of person A and B, think of it as process group A and B. The A and B can be associated with two different people, or with one person and two different processes.

              So to be absolutely clear, most desktop users absolutely will see a difference. Using the traditional example, you can now do a massive compile while listening to music and browsing the web without any noticeable effect on the music playing and web browsing, all the while the background compil

              • by Junta ( 36770 )

                most desktop users absolutely will see a difference...you can now do a massive compile

                I stand by my point. *Most* desktop users aren't doing that. Chrome is probably the most 'mainstream' application that would produce a potentially busy large process group depending on tab count, but on the other hand users are generally only interacting with that application at the time.

                This is not to take away how much this change just makes sense and how much it can do for certain cases, but the way people talk it up sets expectations way too high.

        • If Google decides to include this patch with their fork of the kernel, then yes. But the two kernels, while essentially the same, are two different branches of the same tree now, really*. Google may go ahead and put a lot of this into their kernel, but they might not. I wouldn't ever *expect* it to go into Android, personally, but I may just be quite happy if it does.

          * I may be off about this, as I haven't kept up too well on the details, but last I ever heard, the Linux kernel as used by the desktop distri

        • Re:200-line patch (Score:5, Informative)

          by Wrath0fb0b ( 302444 ) on Tuesday March 15, 2011 @06:16PM (#35498026)

          As someone who knows bugger all about Linux, can anyone confirm if that patch will have any kind of impact on Android Devices or is it the kind of thing only a desktop user will see a difference with?

          The Android kernel and the Linux kernel are pretty much irreparably forked, after the Linux people (perhaps rightly, I don't know) refused to accept the Android patches back into the trunk over the wakelock controversy [slashdot.org]. Unfortunately, the rift there never healed and there was never any real resolution [lwn.net].

          In order for this to apply to Android, Google would have to port the changes over.

          • I didn't know about this, that was quite a fascinating read. However, I did find the last line of the first article particularly amusing - "As for me, I think I'll look into getting a Nokia N900. It looks much more open, with the code mostly all upstream, and a much more active developer community.".
            Hindsight is a wonderful thing.

            • All those things are still true.

              Yes, it's a bummer that Nokia have screwed up, but the N900 is still the only game in town if you want to run Linux on a phone.

          • Re:200-line patch (Score:5, Insightful)

            by vga_init ( 589198 ) on Tuesday March 15, 2011 @08:21PM (#35499020) Journal

            In my opinion, Android isn't the first Linux-based project to rely on a custom kernel. I've seen many such systems pop up in the industry, most of them dead and gone now. The reason is that once the fork has been created, it falls out of development and becomes obsolete after a time. The Linux kernel has been customized and forked by projects countless times. What's going to happen is that the fork is simply going to become outdated, and once it's obsolete, the current Linux kernel will have to be forked yet again. Re-forking becomes an inevitable part of the project's continued development.

        • I don't see it having much if any impact on an Android device. In fact, relatively few desktop users will actually see any improvement. I'm not smart enough to get technical - but everything I've read seems to say that unless you are a multitasker who works his desktop pretty hard, the improvements will mean little to you.

          One of the little tests that was offered to prove the usefulness of the wonder patch, was to do some routine things on your desktop, loading the CPU up near capacity. Once all your stuf

          • Possibly your Android's not as useful as my N900, but I routinely -do- load it up with a half-dozen programs and another half-dozen web windows. It works fine, but if you try to do any task that does compression/decompression... it lags things down. So, no background apt updates if you want to work.
            I'd certainly make use of some of this... but I probably won't be able to unless someone backports it to 2.6.28 due to the damn proprietary graphics drivers.

            • I'd certainly make use of some of this... but I probably won't be able to unless someone backports it to 2.6.28 due to the damn proprietary graphics drivers.

              They are in user space aren't they? AFAIK there is no closed code in the N900 kernel.

        • If you know bugger all about Linux, you probably won't even see any difference on a desktop. This patch mainly does wonders for developers doing big parallel builds, and brings some automatic improvements if you have a classic multi-user server; but all the same improvements were available before this patch too, they just weren't automatic.

          • That said, I find it cute that people want to improve the user experience when compiling big jobs on their Android phones. That picture in itself is something I highly approve of.

        • I'm honestly not sure about stock kernels and even that likely varies from manufacturer to manufacturer, but many (most - all) third party roms do use process groups to allow for priority assignment and are heavily tweaked in this regard. This is one of the reasons why many third party roms seem to be more responsive than factory roms.

          While I can't say for sure, I wouldn't be the least bit surprised if this finds its way into Android, as this type of technology can go a long way toward reducing interface latency.

      • Re:200-line patch (Score:5, Informative)

        by butalearner ( 1235200 ) on Tuesday March 15, 2011 @06:23PM (#35498096)

        Isn't this the version that 200-line patch was slated for?

        I'm pretty sure that's what "automatic process grouping" is.

        Yup. Some links:

        • This LWN [lwn.net] talks about the switch from TTY-based grouping to session ID-based grouping.
        • Lennart Poettering's alternative solution using cgroups [webupd8.org], which works perfectly fine as long as you don't mind that the changes are in user space (i.e. you have to set this up manually on each computer).
        • Another alternative is using Con Kolivas' BFS [wikipedia.org], which reportedly shows similar improvements, not to mention actually pays attention to nice levels. Of course you actually have to build your own kernel, or get it from someone else, or use a distro that uses it by default like PCLinuxOS or Zenwalk.
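For the curious, the user-space cgroup approach mentioned above boils down to a couple of lines. A hedged sketch, assuming root and a cgroup-v1 cpu controller mounted at /sys/fs/cgroup/cpu (mount layouts vary by distro):

```shell
# Put the current shell (and everything it spawns) into its own cpu
# cgroup, approximating what the autogroup patch does automatically.
cgroot=/sys/fs/cgroup/cpu
if mkdir -p "$cgroot/user/$$" 2>/dev/null \
   && echo $$ > "$cgroot/user/$$/tasks" 2>/dev/null; then
    cg_status="grouped under $cgroot/user/$$"
else
    cg_status="cpu cgroup not writable here (needs root + cgroup v1)"
fi
echo "$cg_status"
```

This is the "you have to set it up manually on each computer" cost: each session needs its own group created and joined, which is exactly the bookkeeping the kernel patch automates.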
        • by kobaz ( 107760 )

          Another alternative is using Con Kolivas' BFS [wikipedia.org], which reportedly shows similar improvements, not to mention actually pays attention to nice levels.

          How do the current built in schedulers handle nice levels?

        • by Timmmm ( 636430 )

          > the switch from TTY-based grouping to session ID-based grouping.

          All GUI apps have the same session ID, so I don't see how this will affect things at all. Behold:

          thutt@panic:~$ ps -eo session,pid,cmd | grep
          1630 12732 /home/timmmm/Matlab2010b/bin/glnxa64/MATLAB -desktop
          1630 19095 lyx
          1630 30014 /home/timmmm/Projects/lastfm-linux/Lastfm/lastfm-1.5.4.26862+dfsg/bin/last.fm
          1630 30605 gnome-terminal
          1630 29914 /opt/google/chrome/chrome

          So if I do some crazy computation i
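Since the grouping keys off the session ID, one workaround for the everything-shares-one-GUI-session problem shown in that ps output is to launch the heavy job in its own session with setsid(1) from util-linux. A sketch (assumes ps and setsid are installed):

```shell
# Show that setsid gives a command its own session ID, distinct from
# the one all the GUI apps share.
me=$(ps -o sess= -p $$ 2>/dev/null | tr -d ' ')
child=$(setsid sh -c 'ps -o sess= -p $$' 2>/dev/null | tr -d ' ')
echo "shell SID: ${me:-?}   setsid child SID: ${child:-?}"
# so:  setsid make -j20  lands the build in its own scheduling group
```

With that, a crazy computation started via setsid would no longer share a session (and thus a group) with MATLAB, Chrome, and the rest.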

  • Misleading article (Score:5, Insightful)

    by Edmund Blackadder ( 559735 ) on Tuesday March 15, 2011 @06:37PM (#35498234)

    It is great news that the Linux kernel performance keeps improving, and nowadays you can get the fastest performing commonly used OS for free. But I have to point out that the way the slashdot summary was written is misleading. The slashdot summary has the following quote:

    '"This patch series was both controversial and experimental when it went in, but we're very hopeful of seeing speedups," James Bottomley, distinguished engineer at Novell, said. "Just to set expectations correctly, the dcache/path lookup improvements really only impact workloads with large metadata modifications, so the big iron workloads (like databases) will likely see no change. However, stuff that critically involves metadata, like running a mail server (the postmark benchmark), should improve quite a bit."'

    If you read the actual article you will notice that this quote refers only to the RCU portion. Other aspects like transparent huge pages are not controversial and they will improve database performance.

    • But I have to point out that the way the slashdot summary was written is misleading.

      You must be new.

  • by diegocg ( 1680514 ) on Tuesday March 15, 2011 @06:42PM (#35498284)

    B.A.T.M.A.N. mesh protocol (which helps to provide network connectivity in the presence of natural disasters, military conflicts or Internet censorship)

    Looking at what happened recently in Japan, Libya or Egypt... it seems a feature that I would like to have in my system. Just in case...

    • Unfortunately, if you saw the picture of the rebel media center in Benghazi at BoingBoing, those guys are using Windows XP. How can we promote free and legal alternatives to free and illegal (cheap XP CDs)?

      • by snadrus ( 930168 )
        They want "what the rest of the world uses" == "what's been advertised". Promote it with CDs, Google Trends for Linux vs Microsoft Windows (especially if excluding the US). Legal doesn't matter to a country in revolution, but freedom (knowing it's not a spy/sabotage platform) via peer-review does.
  • by kwerle ( 39371 ) <kurt@CircleW.org> on Tuesday March 15, 2011 @06:53PM (#35498372) Homepage Journal

    What does that mean? Is it like 20% faster? I dunno - I think 5% would be a lot faster. But 0.5% or less? What are we talking about here?

    • by Ant P. ( 974313 )

      THP makes memory-heavy stuff anywhere up to 5% faster based on some quick testing I did with it on folding@home.

      • by kwerle ( 39371 )

        THP makes memory-heavy stuff anywhere up to 5% faster based on some quick testing I did with it on folding@home.

        5% is pretty impressive for a kernel update.

  • by Anonymous Coward

    2.6.36 to 2.6.38? Tell me when there's an actual update.

    (Fair turnaround for the bitching about Apple's "minor" 10.x.y updates.)

    • Fair turnabout implies the situation is the same, but turned around. That is not the case here. The difference is that Linus is NOT out there behind a podium telling a crowd of people that "this is the best brand new version of Linux EVER, and it's totally different and better."
      Without a figurehead idiot trumping up this update, it's just not the same thing. You can't really blame the Linux community for being interested in an update to their software, can you? It's not like they are making grandiose claims or ho

  • Transparent Huge Pages

    It doesn't matter what that is*, it's got "Buy Me!" written all over it!

    OSX might have the Dock, and Windows might be up to version 7, but my Ubuntu machine has Transparent Huge Pages!


    *save your breath, I actually looked it up. [lwn.net]
    • by Mr Z ( 6791 )

      I have 16GB RAM in my 64-bit machine at home. I actually look forward to THP, since any process that actually benefits from that much RAM would also benefit from THP if it isn't already using hugetlbfs. As far as I can tell, hugetlbfs almost never gets used, so that means just about any process that I'd run that needs that much RAM would benefit from THP.

      The other day I edited a page-sized full color scan in The Gimp at 600 DPI without swapping. That was actually pretty cool. That's an app that'd benefit.
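For anyone wanting to check whether THP is available and active on their box, a quick sketch (the sysfs path is the one the 2.6.38 THP work exposes; the bracketed word in the output is the active mode):

```shell
# Peek at the kernel's Transparent Huge Pages knob; typical output
# looks like: always [madvise] never
f=/sys/kernel/mm/transparent_hugepage/enabled
if [ -r "$f" ]; then
    thp=$(cat "$f")
else
    thp="no THP interface on this kernel"
fi
echo "THP mode: $thp"
```

With "always", memory-hungry apps like a 600 DPI Gimp session get huge pages without any hugetlbfs setup at all.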

      • Would you mind sharing the specs on your rig? I might be needing to buy a new system this year.

        • by Mr Z ( 6791 )

          It wasn't anything too special. I built a pretty basic Phenom II x4 box. Here's a cut/paste from my Newegg receipt, minus prices.

          • 1 x CASE NZXT| PHAN-001WT RT
          • 1 x PSU ROSEWILL|RBR1000-M 1000W RT
          • 4 x MEM 4G|CORSAIR XMS CMX4GX3M1A1600C7
          • 1 x HD 1.5T|WD 7K 64M WD1501FASS % - OEM
          • 1 x CPU AMD|PH II X4 965 3.4G AM3 RT
          • 1 x MB GIGABYTE|GA-880GA-UD3H R

          Now, I didn't buy a fancy video card. That total rig, though, cost me less than $1K and that was a few months ago.

    • The Dock is the opposite of a selling point.

      Wait, I take that back--it does look awesome on a demo machine, so I guess it helps sales after all.

    • by jabjoe ( 1042100 )
      Go on then, I'll bite.
      That's not Linux's problem, it's Adobe's and the graphics card manufacturers'. Loads of reimplementing of closed stuff needs to happen for it to be Linux's fault. (That's Linux as a platform, not just as a kernel.) With Gallium/DRM/KMS/Wayland/etc and HTML5, hopefully it will be Linux's problem and will all go away nicely. Having said all that, it works OK for me now with the closed Flash player and the closed NVidia drivers. It's just unpalatable (and you are left in the slow lane of X dev
  • by Anonymous Coward

    If they are never going to go past 2.6, shouldn't we just start saying Linux 38 is out? Then you could get everyone else to ditch Windows 7, because Linux 38 is like more than 5 times better.

    • by NotBorg ( 829820 )
      That'd be version 38 of the 2.6 kernel. If you want a flat single-number version, it would be higher than 38, right? Eat kernel shorts, Google Chrome!!!!
  • I can't wait until I can switch over from Windows. All I'm waiting for is Direct 3D 11 support and fast stable graphics drivers. When I don't need to dual boot to play my games the way they were meant to be played, then I'm running on over.
    • by snadrus ( 930168 )
      D3D11 has a Gallium state tracker now. Getting Wine to use it is another story. Of course, the Gallium drivers are being reverse-engineered for NV & ATI since they won't share specs, code, or blobs, but this kernel & the next both have serious improvements in those areas. I recently got an Intel Sandy Bridge laptop (i5) since it's got the best open-source video card built in at the moment. It plays StarCraft 2 nicely in Linux. It doesn't compare to $1000 Windows rigs but was half that price total and pl
  • Fucked up free Radeon drivers. Back to Fglrx for now. Crap.

    This release adds support for the AMD Fusion GPU+CPUs

    Only if you want your box to reboot if you try a serious game [phoronix.com]. Maybe next time.

  • by jones_supa ( 887896 ) on Wednesday March 16, 2011 @04:34AM (#35501410)
    I'm still curious about SSD support. Yes, the TRIM command is supported, but is it actually hooked up to the filesystems? Partition alignment is also a bit of a mystery, but according to some rumors, recent Ubuntu installers get it right.
    • by Skapare ( 16644 )
      Proper partition alignment has been a long term issue that was ignored for at least a decade. Now that it is happening, the common alignment (1M, 2048 legacy sectors) might now need to change (2M, 4096 legacy sectors). In a few years, people might even get over the loss of another 2M of dirt cheap disk space (don't forget that we're all going to GPT which has another partition table at the end of the drive).
    • TRIM support works for me on recent kernels with ext4 filesystems. You just have to make sure you mount with -o discard.
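A quick way to verify the -o discard part (a sketch; assumes the usual /proc/mounts layout):

```shell
# List any filesystems currently mounted with the discard option
# (i.e. online TRIM wired up to the filesystem).
trim_mounts=$(grep -w discard /proc/mounts 2>/dev/null \
    || echo "none mounted with -o discard")
echo "$trim_mounts"
```

An example fstab line (illustrative, not from this thread) would be: /dev/sda2 / ext4 discard,noatime 0 1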

  • I've been running dmraid in a RAID 5 configuration - I need to use Intel's fakeraid to facilitate dual-booting to Windows. Does this commit

    http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=9d09e663d5502c46f2d9481c04c1087e1c2da698 [kernel.org]

    mean that I no longer need to hunt patches for dmraid5 but can just use vanilla kernel from now on?

  • http://kernelnewbies.org/Linux_2_6_38 [kernelnewbies.org] says:
    > Core:
    > - Add hole punching support to fallocate() (commit)

    Good to see the efforts towards backward compatibility, I often wondered why punchcard support was lacking in previous releases.
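Joking aside, hole punching is easy to see in action with the util-linux fallocate tool. A hedged sketch; it needs a filesystem that implements FALLOC_FL_PUNCH_HOLE (e.g. ext4, XFS, tmpfs) and degrades gracefully elsewhere:

```shell
# Write 16K of real data, then punch an 8K hole out of the middle and
# watch the allocated block count drop while the file size stays put.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=4096 count=4 2>/dev/null
before=$(du -k "$tmp" | cut -f1)
if fallocate --punch-hole --offset 4096 --length 8192 "$tmp" 2>/dev/null; then
    after=$(du -k "$tmp" | cut -f1)
else
    after="(punch-hole unsupported on this filesystem)"
fi
echo "allocated before: ${before}K  after: ${after}K"
rm -f "$tmp"
```

The punched range reads back as zeros, but the blocks are returned to the filesystem, which is handy for things like trimming the middle of VM disk images.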

