
The ~200 Line Linux Kernel Patch That Does Wonders

An anonymous reader writes "There is a relatively minuscule patch to the Linux kernel scheduler being queued up for Linux 2.6.38 that is proving to have dramatic results for those multi-tasking on the desktop. Phoronix is reporting the ~200 line Linux kernel patch that does wonders with before and after videos demonstrating the much-improved responsiveness and interactivity of the Linux desktop. While compiling the Linux kernel with 64 parallel jobs, 1080p video playback was still smooth, windows could be moved fluidly, and there was not nearly as much of a slowdown compared to when this patch was applied. Linus Torvalds has shared his thoughts on this patch: So I think this is firmly one of those 'real improvement' patches. Good job. Group scheduling goes from 'useful for some specific server loads' to 'that's a killer feature.'"
  • by suso ( 153703 ) * on Tuesday November 16, 2010 @09:50AM (#34241136) Homepage Journal

    Compiling the kernel isn't a useful benchmark. How well does it deal with running Adobe Air?

    • by spiffmastercow ( 1001386 ) on Tuesday November 16, 2010 @09:52AM (#34241144)
      Obviously you're not running Gentoo.
      • by suso ( 153703 ) * on Tuesday November 16, 2010 @09:53AM (#34241154) Homepage Journal

        I used to. For 3 years. But I wanted my time back.

        • by Albanach ( 527650 ) on Tuesday November 16, 2010 @10:40AM (#34241646) Homepage

          I used to. For 3 years.

          Wow, I think my K6 pentium clone could compile the kernel faster than that!

          • by Inda ( 580031 ) <slash.20.inda@spamgourmet.com> on Tuesday November 16, 2010 @11:10AM (#34241990) Journal
            If it takes less than 3 years, you're doing it wrong.
        • Re: (Score:3, Informative)

          I know exactly what the problem is. Try passing different elevator options to the kernel at boot time. It helps with clock skew on virtual environments.

        • by Anonymous Coward on Tuesday November 16, 2010 @11:22AM (#34242146)

          recompile the Gentoo kernel using --mph=88

        • by Joey Vegetables ( 686525 ) on Tuesday November 16, 2010 @12:00PM (#34242722) Journal
          Only 3 years? How can you call that a fair chance? It's not even enough time to compile KDE!! :) (disclaimer: mostly happy Gentoo user here, but yes, like many other worthwhile things, it can come at a cost, mainly in hopefully unattended compile time, and occasionally, if you want the latest stuff, some user/administrator time as well trying to figure out what the ebuild maintainers were smoking when they did certain things.)
    • by fuzzyfuzzyfungus ( 1223518 ) on Tuesday November 16, 2010 @10:04AM (#34241292) Journal
      They aren't compiling the kernel to see how long it will take(which, as you say, is rarely of all that much interest, few people do it and a fast build-box isn't going to break the budget of a serious project), they are using a multithreaded kernel compilation as an easy way to generate lots of non-interactive system load to see how much that degrades the performance, actual and perceived, of the various interactive tasks of interest to the desktop user.

      This isn't about improving performance of any one specific program; but about making a reasonably heavily-loaded system much more pleasant to use. Compiling the kernel is just a trivial way to generate a large amount of non-interactive CPU load and a bit of disk thrashing...
      • by morgan_greywolf ( 835522 ) on Tuesday November 16, 2010 @10:59AM (#34241894) Homepage Journal

        Yep. For those that haven't tried it without the patch, a multithreaded kernel compile will typically peg a modern multicore CPU at 100% and will even give you drunken mouse syndrome. Just being able to scroll around a browser window while doing a lengthy make -j64 is impressive. Being able to watch 1080p video smoothly is ... astounding. Especially when you consider the minimum CPU requirement for 1080p H.264 playback is a 3 GHz single core or a 2 GHz dual core.


        • by CAIMLAS ( 41445 ) on Tuesday November 16, 2010 @03:17PM (#34246010) Homepage

          I remember, about 8-10 years ago, how this wasn't the case. It was quite evidently better than Windows in this regard, particularly if you didn't upgrade your hardware on a 2-year cycle (e.g. waiting until it died) or tried running on significantly older hardware. Performance, on the desktop, was great. 2.6 seems to have progressively nixed it.

          The first time I tried compiling a kernel, I was astounded at how I was still able to play fullscreen 600Mb encoded DVDs (can't remember what they were encoded in, but the quality was decent).

          I remember building a kernel in '99-2000 or so on a P133 with 64Mb of RAM (running Stormix). Netscape was still responsive. Switching to a different application didn't really take all that long.

          These days, Linux performance on the desktop, in this regard, is worse than Windows. It's a fucking travesty. Using the anticipatory scheduler helps significantly (or did, until they removed it from the kernel), but it was hardly much more than a stopgap measure.

          I am pleased as fucking punch that this is finally 'fixed'. Like, I'm giddy to the point where I doubt I'm going to get any work done today.

          Where can I download prebuilt kernels for my distro of choice? Surely someone is building them.

      • Re: (Score:3, Funny)

        by Greyfox ( 87712 )
        Ahh you kids with your "I don't care how fast the kernel compiles!" Back in my day it used to take overnight on a 386SX/16 with a whopping 4MB of RAM! And that was AFTER spending hours downloading it across our 1200kbps modems! That connected to PHONE lines! And we LIKED it that way! Well... We really didn't. At some point the kernel source grew to over 10 mb (Remember when we predicted that it doing so would kill the Internet?) and started taking less than 10 minutes or so to compile, and the internet is s
        • by morgan_greywolf ( 835522 ) on Tuesday November 16, 2010 @11:44AM (#34242496) Homepage Journal

          1) There are no 1200 kbps dialup modems. The fastest ones do 56 Kbps under specialized conditions.

          2) If you meant to say 1200 bps modems, well, by the time Linus wrote and released the first versions of the Linux kernel, 1200 bps modems and the 386SX were both well obsolete. The most common systems of that day were 486DXs, running at 33 MHz at the sweet spot and 50 MHz at the high end. Most people had modems capable of greater than 9600 bps (many around 14.4 Kbps and 28.8 Kbps).

          3) Now quit making fun of us real old-timers and get off my lawn.

          • by eln ( 21727 ) on Tuesday November 16, 2010 @01:29PM (#34244272)
            Not all of us had money to keep upgrading our equipment. I was running on a 2400 baud modem until 1995. Of course, I installed Linux on my home box in those days by downloading Slackware to a ton of 3.5" floppy disks at the computer lab at the local university and bringing them all home. If one of the floppies was corrupted, I had to wait until the next day to go back and re-download and copy it.

            I also had to walk 10 miles, in the snow. Uphill both ways.
            • Re: (Score:3, Informative)

              by Z00L00K ( 682162 )

              To continue:

              Baud != bps.

              Baud is modulation changes per second, and in each modulation change there may be a representation of one or more bits, which means that the modem may be 1200 baud but you got 9600 bps out of it due to the modulation (phase and amplitude of tone).
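The arithmetic can be written out (illustrative figures, not a specific modem standard):

```python
# bits per second = symbol rate (baud) x bits carried per symbol change
def bps(baud, bits_per_symbol):
    return baud * bits_per_symbol

# hypothetical example: 1200 symbols/s carrying 8 bits each
print(bps(1200, 8))  # → 9600
```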

              And the old Telebit Trailblazer modems with PEP protocol - they were fantastic in crappy conditions. Multi-carrier technology so that even if there were interference at least a few got through anyway and the only effect was that the bandwidth

          • by Z00L00K ( 682162 )

            Eh? 300bps acoustic coupler using a Z80 based computer with 16k RAM and a tape recorder as secondary storage device.

            That was interesting times.

            And hacking on an ASR33 teletype with a paper roll and punch tape. Been there done that... Errors preserved permanently on the input device (paper roll). Earplugs were recommended.

    • by laffer1 ( 701823 ) <luke@@@foolishgames...com> on Tuesday November 16, 2010 @10:07AM (#34241316) Homepage Journal

      Compiling the kernel doesn't prove true userspace improvements, but it does show an improvement with scheduling.

      I see. It creates "groups" based on the tty and then tries to even out the CPU utilization between groups. This helps if there is a crazy background process eating up CPU and it might even help control flash crushing system performance a bit.
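A toy model of that idea (not the kernel's actual code; equal task weights assumed): CPU time is first split evenly between groups, then evenly between the tasks inside each group, so a 64-job compile on one tty cannot starve a lone task on another.

```python
from collections import defaultdict

def cpu_shares(tasks):
    """tasks maps task name -> scheduling group (e.g. its tty); returns CPU share."""
    groups = defaultdict(list)
    for task, group in tasks.items():
        groups[group].append(task)
    per_group = 1.0 / len(groups)          # split the CPU evenly across groups...
    return {t: per_group / len(members)    # ...then evenly within each group
            for members in groups.values() for t in members}

tasks = {f"cc{i}": "pts/0" for i in range(64)}  # 64 compile jobs on one tty
tasks["mplayer"] = "pts/1"                      # one video player on another
shares = cpu_shares(tasks)
print(shares["mplayer"])  # → 0.5, despite 64 competing compile jobs
```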

    • by arivanov ( 12034 ) on Tuesday November 16, 2010 @10:38AM (#34241622) Homepage

      The answer is very very very badly.

      This is a "NERD Feature" patch which does very little to improve the way Joe Average Luser uses his desktop. In fact it leads to some seriously goofy allocation artefacts.

      What it does (if I read it right) is that it puts all processes with the same controlling TTY into the same group. Well, anything launched in X has no controlling TTY. So it all gets lumped into one group. Now you open an xterm and you launch something from there. Miracle, hallelujah, that actually got into a separate schedule group which can now hog a CPU while the rest of the apps fight for the other one on a two-core machine. So what am I supposed to do? Start every one of my X applications from a different xterm so they have a different controlling TTY (and not close any of them)?

      Screw that for laughs.

      Process grouping is too difficult to be done based on such simplistic criteria. It is better to provide an interface through which a user can group all of the processes with his UID and let the desktop environment do the grouping. Or put something on the dbus which listens and follows who talks to whom to do the same. This would provide much better results than putting yet another simplistic heuristic in the kernel.

      • Re: (Score:3, Insightful)

        by h00manist ( 800926 )
        Indeed, everyone that asks me about migrating to Linux is asking "can I run xyz programs?". The answer to that question is generally what matters most. It's not an easy question and there are no easy answers to it, but it's the most relevant one.
    • by Zero__Kelvin ( 151819 ) on Tuesday November 16, 2010 @11:02AM (#34241916) Homepage

      "Compiling the kernel isn't a useful benchmark. How well does it deal with running Adobe Air?"

      Obviously you've never compiled a kernel passing -j64 to the make process. If you had, you would know that all your CPUs would be pegged (indeed -j7 pegs all 8 cores on my laptop.) Of course that is not the benchmark part. The point is that with an "all your processors are belong to kernel build" condition, group scheduling allows there to be essentially no perceived slowdown for interactive GUI/Window manager based computing. You probably should have figured out that you were missing something when Linus Torvalds didn't object to the benchmark and the results. Seriously, this is a major point to understand: When it comes to the kernel, in a fight between your knowledge and Linus', Linus wins.
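For reference, the -j value is commonly derived from the machine's core count (a widespread build-host heuristic, not something from the article):

```python
import os

# one make job per core, plus one to cover jobs blocked on I/O
cores = os.cpu_count() or 1
jobs = cores + 1
print(f"make -j{jobs}")
```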

  • teh snappy!!!! (Score:5, Insightful)

    by schmidt349 ( 690948 ) on Tuesday November 16, 2010 @09:52AM (#34241148)

    Considering that UI lag was always my big problem with anything Linux-based (hell, it even seems to affect Android phones), this might be one small patch for the kernel, one giant leap for userspace...

    • by ArsenneLupin ( 766289 ) on Tuesday November 16, 2010 @10:08AM (#34241324)
      Good luck, Mister Gorsky!
  • "windows could be moved fluidly"

    Damn, was I the only one thinking about migrating virtual machines from one box to another, until I read it twice?

    Damn, I need a vacation :)

  • Distros? (Score:5, Funny)

    by SIGBUS ( 8236 ) on Tuesday November 16, 2010 @09:59AM (#34241228) Homepage

    Of course, how many years from now will that filter into the distros? My guess:

    Gentoo: soon
    Debian Unstable: 2Q 2011
    Ubuntu, Fedora: 1Q 2012
    Debian Stable: 2015
    RHEL: 2020

  • Wait.... (Score:5, Funny)

    by fuzzyfuzzyfungus ( 1223518 ) on Tuesday November 16, 2010 @09:59AM (#34241230) Journal
    That's not a kernel patch... That's a bash script that forcibly installs BeOS!
  • Subjective (Score:2, Interesting)

    by falldeaf ( 968657 )
    I guess it's somewhat subjective and dependent on your hardware anyway, but even with lots of programs open on my main machine I don't notice any slowdown... I think if linux is really gonna pull ahead of the pack they ought to take a gamble on a new, useful interface. Something like 10gui [10gui.com]. The risk could even be mitigated by having a choice to load either type of desktop at the beginning with a quick video to demonstrate the difference. :)
    • 10gui just _looks_ nice to the naive, and probably OK for people who only can cope with a few windows open at a time. But I don't see how it's actually going to be faster in task switching than using alt-tab, or clicking on task bar buttons.

      The 10GUI interface would just get in my way a lot.

      I often have about 30 windows open (I have a double height taskbar) - ssh sessions, browsers for work, browsers for nonwork (e.g. slashdot :) ), IM windows, editor, email, virtual box machines, file managers, it all star
  • by Culture20 ( 968837 ) on Tuesday November 16, 2010 @10:00AM (#34241236)
    I thought a few years ago, there was a desktop friendly scheduler rejected because Linus thought the server environment was more important. The details escape me.
    • by rwa2 ( 4391 ) * on Tuesday November 16, 2010 @10:14AM (#34241388) Homepage Journal

      They mention the "Con Kolivas" scheduler in TFA, but they don't seem to want to refer to it by its real name:

      http://en.wikipedia.org/wiki/Brain_Fuck_Scheduler [wikipedia.org]

      It doesn't scale well past 16 cores, which is why Linus doesn't want to include it in the main kernel. But it's included in custom kernels for mobile devices, such as CyanogenMod for my Android phone.

      • by tenchikaibyaku ( 1847212 ) on Tuesday November 16, 2010 @10:26AM (#34241514)
        This is not the scheduler that the grandparent would be referring to though. BFS has been around for about a year, and has as far as I know never actually been pushed for inclusion.

        The previous scheduler that Con wrote was rejected in favor of CFS which is currently in use by the kernel. CFS is at least partly based on ideas from Con, and he was also credited for them.
    • Con Kolivas worked on his staircase scheduler and various performance patches for years. They were routinely demonstrated to be a major improvement. Linus kept saying he was concerned the tradeoff of desktop performance would come in other environments, even though that wasn't true.

      Since benchmark after benchmark showed staircase was vastly superior to what was in mainline, Linus then went to go after the guy rather than the code. He said Kolivas couldn't be trusted to support his code, and thusly it would never be accepted mainline. Reality was that Kolivas had been responding to criticism and updating his patchset for over 3 years, constantly improving it. In addition to the LKML, he also maintained his own support mailing list.

      I'm a Linus fan 95% of the time, but it was a really shitty move, and it drove Kolivas away from contributing. He quit coding for a while. Then after Linus argued for years how this was a bad idea, suddenly the mainline kernel developed the Completely Fair Scheduler overnight, which was very similar to Kolivas' Staircase scheduler. Linus never admitted he had been a dick for years arguing against the theory of the scheduler rewrite. He then pushed the brand new, untested scheduler into mainline.

      CFS is better than what we had before, but it still lost in benchmarks to Staircase, and it was new, untested code.

      Now, Kolivas came out of retirement with a new scheduler called Brainfuck that is even faster, but he has no intention of ever trying to get it in the mainline kernel.

      • by kangsterizer ( 1698322 ) on Tuesday November 16, 2010 @11:53AM (#34242618)

        Actually Linus lost track of many such things because he was too self-centered or ego-driven (which happens to most of us when you have such success and so many things to deal with, but anyway)

        This very patch is a prime example, if it goes *default* in the kernel.
        It's a patch that favors *only* Linus's usage of linux: browse the web while his kernel compiles. Now imagine you start your video from a tty (mplayer blah_1080p.avi) and it takes 95% CPU to be smooth, then you start your browser.. uh-oh.

        In BeOS I could^H^H^H *can* (haiku, too) start 5 videos on an old PC, browse the web and:
        - the browser is smooth like butter
        - the 5 videos are a bit choppy (no miracles, but that's the point) but they all run at the same speed

        Now _that_ is what I want a desktop scheduler to be like.

        With more criticism he could see that some would want this auto grouping option, but the majority wouldn't. Now what does that tell us?
        It tells us that it's _either_:
        A/ Not the best solution (i.e. our scheduler sux)
        B/ Grouping should be smarter or more easily configurable (it's currently configurable in the previous kernel version and can do just what this kernel patch does!)

        • Re: (Score:3, Insightful)

          by Enderandrew ( 866215 )

          I think part of the reason that Linus is more accepting of this change rather than replacing the entire scheduler (like Con Kolivas pushed for) is that Linus likes small, neat patches. And I think Linus gets offended when someone wants to rip out large sections of the kernel and replace them.

          I often wonder how much old, legacy code there is in the kernel that is just overlooked. Anytime you carry code for that many years, you're bound to have some legacy systems that are due to be replaced.

          • by GooberToo ( 74388 ) on Tuesday November 16, 2010 @01:17PM (#34244030)

            It's been fairly well documented as to what happened. Linus is an egotist, as are many smart people, including others on the LKML. It seems Con has a fair ego himself, with somewhat gruff interpersonal skills. The combination put him at odds with a lot of people. Generally speaking, people with large egos don't like others who know more than they do, especially when it's their pet project. Furthermore, Linus already had some people he trusted, who also had their egos hurt by Con's work.

            So the line was drawn. Linus and his trusted lieutenants on one side and Con on the other. Linus and his lieutenants, all with egos hurt by a slightly abrasive developer who clearly understood the problem domain better than all of them, simply decided they preferred their pride and undue ego over Con's potential contributions.

            Not surprisingly, this type of stuff happens ALL THE TIME amongst developers. I can't tell you how many times I've seen personal pride and ego take priority over extremely well documented and well understood solutions. Not to mention that, frequently, the same documentation which explains why it's a good decision also explains why the decision of pride is an incredibly bad one, with a long history of being very dumb. Despite that, egos frequently rule the day. So it's hardly surprising that Linus isn't immune.

            • by LingNoi ( 1066278 ) on Wednesday November 17, 2010 @04:03AM (#34252158)

              It's been fairly well documented but you still seem to ignore the reality of what happened.

              http://kerneltrap.org/node/14008 [kerneltrap.org]

              Read all that then tell me that Linus has an ego here. It seems to me that Linus is the only level headed guy and you're just trying to distort what really happened.

              - He chose CFS over SD because SD's behaviour was inconsistent across users' computers
              - Con would argue with people sending him problems rather than accepting them as problems with his code
              - Linus didn't want code in the kernel that would only work well for certain users
              - Linus didn't want code maintained by someone that was so hostile to others' critique
              - Linus states that he believes the desktop is an important platform

  • by Anonymous Coward

    Is this a typo?

    "... slowdown compared to when this patch was applied." - Shouldn't that be something like "... slowdown compared to the performance before the patch was applied"

  • Isn't it awesome (Score:5, Insightful)

    by bcmm ( 768152 ) on Tuesday November 16, 2010 @10:10AM (#34241344)
    Isn't it awesome when a new version of your OS performs *better* than the last one on the same hardware?
  • by dselic ( 134615 ) on Tuesday November 16, 2010 @10:14AM (#34241396)

    No matter how many different flavors of Linux I installed, it just never seemed as snappy as Windows. There was always a sluggishness about it, nothing I could really put my finger on, but it was definitely there and it bothered me. I'm very glad to hear that a solution is in sight.

    I hope the people at Ubuntu get this out as an update as soon as possible.

    • Re: (Score:3, Informative)


      And it'll probably be in Natty Narwhal, which is to be released in April 2011. https://wiki.ubuntu.com/KernelTeam/Specs/KernelNattyVersionAndFlavours [ubuntu.com]

    • Re: (Score:3, Interesting)

      by Jorl17 ( 1716772 )
      As strange as it may seem, I get that feeling when I touch a *used* Windows machine. No matter who used said machine, it eventually starts slowing down to a crawl. When I came to Linux, I came because I wanted a programmer-friendly environment (The Way It's Meant To Be) and I liked Compiz-Fusion (go figure!). Now I run GNU/Linux because it is GNU/Linux -- Free and Open-Source --, with openbox, fbpanel, pcmanfm, gnome-terminal, evince, chromium-browser, LibreOffice, amarok1.4, gedit, vim and some games (some
    • Re: (Score:3, Informative)

      by wiredlogic ( 135348 )

      If you cut back on the eye candy most distros have in their default desktop environments you'll wipe away any sluggishness. Switching to a simple window manager that doesn't use pixmaps for everything will significantly improve X's performance.

    • Re: (Score:3, Interesting)

      by downhole ( 831621 )

      I probably haven't tried nearly as many flavors of Linux or Windows as you, but my experience has been the complete opposite. My Ubuntu install, running Gnome on a pretty average system (3.1ghz dual core AMD CPU, 4GB RAM, integrated video), feels lightning-fast and responsive compared to pretty much every other system I've ran so far. Especially the Windows systems I use at work, which admittedly are all XP systems infected with McAfee. I've always been kind of a junkie for really responsive UIs - it drives

  • backport? (Score:3, Insightful)

    by real_smiff ( 611054 ) on Tuesday November 16, 2010 @10:18AM (#34241416)
    any chance of backport of this e.g. to .35 kernel for use in ubuntu maverick? i could use that :) or even to .32 for use in 10.04 LTS..
  • by 91degrees ( 207121 ) on Tuesday November 16, 2010 @10:19AM (#34241426) Journal
    Typically when you get this sort of speedup, it's by rewriting a tiny piece of code that gets called a lot. Sometimes you can get this sort of thing from a single variable, or for doing something odd like making a variable static.
  • by ArsenneLupin ( 766289 ) on Tuesday November 16, 2010 @10:19AM (#34241428)

    The patch being talked about is designed to automatically create task groups per TTY in an effort to improve the desktop interactivity under system strain.

    I guess this works because the task loading up the machine will have been launched from a konsole, and thus be tied to a specific tty (the make -j64 example given later), so a tty-based scheduler can appropriately downgrade it.

    But what if the "loading" task has been launched directly from the GUI (such as dvdstyler)? It won't have a tty attached to it, and will be indistinguishable from all the other tty-less tasks launched from the GUI (such as the firefox where you browse your webmail), or worse, Firefox itself creating the load (such as happens when you've got too many Facebook or Slashdot tabs open at once...)
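The distinction is easy to see from userspace (a hedged sketch; os.ttyname is POSIX-only): a GUI-launched process typically has no tty behind its standard streams, so a tty-keyed heuristic cannot tell such tasks apart.

```python
import os
import sys

def controlling_tty_or_none():
    """Return the tty behind stdin, or None for a tty-less (GUI-launched) task."""
    try:
        return os.ttyname(sys.stdin.fileno())
    except (OSError, ValueError):
        return None  # no tty: lumped in with every other GUI task

print(controlling_tty_or_none())
```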

  • by Anonymous Coward on Tuesday November 16, 2010 @10:20AM (#34241444)

    This has been brought up by others on slashdot before, but Linux tends to be either (A) fine and happy, or (B) pushed into a thrashing state from which it can never recover - like it takes 8 or 10 minutes to move the mouse cursor across the screen. Since there is practically no warning before this happens, a hard reboot is just about the only option.

    Will this patch help with that issue? Like the threads below say, once a modern (KDE/Gnome) desktop Linux starts swapping, there is so much new data produced per second by the GUI that it's basically game over. I'd like to see a fix for this: it's the single biggest cause on Linux that makes me do a hard reboot. I just don't have the patience to see if the thing will recover in half an hour or so, even though it might.

    http://ask.slashdot.org/comments.pl?sid=1836202&cid=33998198 [slashdot.org]
    http://ask.slashdot.org/comments.pl?sid=1836202&cid=33999108 [slashdot.org]
    http://www.linuxquestions.org/questions/linux-kernel-70/swap-thrashing-can-nothing-be-done-612945/ [linuxquestions.org]

    • Re: (Score:3, Interesting)

      by real_smiff ( 611054 )
      ah i see this regularly on my netbook with 512MB RAM and a slow SSD; just by having too many firefox tabs open. didn't realise it was a known problem. yes very annoying indeed. thanks for bringing it up...
    • or, plan B: (Score:4, Interesting)

      by ChipMonk ( 711367 ) on Tuesday November 16, 2010 @10:58AM (#34241878) Journal
      Turn off swap, if you can. The cost of memory is now less than the cost of the stress and lost uptime due to swap-paralysis.

      I have 4G of RAM on my desktop (I doubled the RAM for $60 after I bought it) and the only time my system swaps is when I have mmap()'ed an 8G file.

      Similarly, on my 512M netbook, I don't exceed RAM with crazy things like "make -j64 bzImage". Even with wear-leveling, swapping to SSD is bad form. I'd rather swap over NFS to spinning platters, than to SSD.
      • Re: (Score:3, Informative)

        by mcelrath ( 8027 )

        I believe the parent is talking about the iowait bug #12309 [kernel.org], which, maddeningly, has nothing to do with swap, or filesystem type. You can turn off the swap entirely and still trigger the bug. Of course there are use cases where heavy swapping brings the system down, so there is a perceived improvement by most people when turning off the swap.

      • Re:or, plan B: (Score:5, Informative)

        by jones_supa ( 887896 ) on Tuesday November 16, 2010 @05:47PM (#34248490)

        Turn off swap, if you can. The cost of memory is now less than the cost of the stress and lost uptime due to swap-paralysis.

        Actually even with no swap you will jam Linux when you run out of memory. Things like system libraries get thrown out of memory cache, but are soon needed again and read from the disk. This kind of circus can go on for half an hour until the actual OOM killer gets into the game.

        • Re: (Score:3, Interesting)

          by mcelrath ( 8027 )

          That actually makes a lot of sense, but I've never heard this explanation before. It also seems relatively easy to deal with... e.g. keep a "reload count" and if things being flushed from the memory cache are immediately reloaded, invoke the OOM. Or, the VM system is swapping the wrong things out. Your explanation also makes perfect sense in that I've observed it has nothing to do with swap or filesystem type.
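The "reload count" idea from the comment above can be sketched with a toy LRU cache (simplified illustration, not kernel code): if pages that were just evicted are immediately read back in, that is the thrashing signal.

```python
from collections import OrderedDict

class Cache:
    """Toy LRU page cache that counts evicted-then-reloaded pages."""
    def __init__(self, size):
        self.size = size
        self.data = OrderedDict()  # pages currently cached, LRU-first
        self.evicted = set()       # pages thrown out of the cache
        self.reloads = 0           # thrash signal: evicted pages needed again

    def access(self, page):
        if page in self.evicted:
            self.reloads += 1      # was evicted, immediately needed again
            self.evicted.discard(page)
        self.data[page] = True
        self.data.move_to_end(page)
        if len(self.data) > self.size:
            old, _ = self.data.popitem(last=False)  # evict least recently used
            self.evicted.add(old)

c = Cache(2)
for p in ["libc", "libX11", "libc", "libX11", "app", "libc"]:
    c.access(p)
print(c.reloads)  # → 1: libc was evicted by "app" and reloaded right away
```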

  • Please. (Score:3, Insightful)

    by drolli ( 522659 ) on Tuesday November 16, 2010 @10:20AM (#34241454) Journal

    When doing benchmarks, do them seriously.

    okok... i know its phoronix...

    A single *atypical* workload as a benchmark, without a full characterization, does not make me consider using a kernel patch on my system which is reported in the style of a UFO sighting....

    I wonder if nicing the kernel compilation would have had a similar effect....
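For reference (a standard POSIX facility, not something from TFA; Unix-only), nicing lowers a process's scheduling priority, and a process can even demote itself:

```python
import os

# demote the current process the way `nice -n 19 make -j64` demotes a compile job
os.nice(19)        # increase niceness, i.e. lower scheduling priority
print(os.nice(0))  # an increment of 0 just reads the current niceness back
```

Unprivileged processes can only raise their niceness, never lower it again, which is one reason interactive responsiveness cannot rely on users nicing things by hand.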

    • Re:Please. (Score:4, Insightful)

      by onefriedrice ( 1171917 ) on Tuesday November 16, 2010 @11:19AM (#34242094)

      I wonder if nicing the kernel compilation would have had a similar effect....

      Probably, but that's not really the point. A user should rarely (if at all) have to use nice on a desktop, because a desktop operating system is supposed to be able to keep input latency low, always. That is the reason BeOS had such incredible perceived speed, but some "modern" operating systems are still struggling with this feat. I mean, it's 2010 and we've had 25+ years to work this out. Cursor stuttering and choppy video should have been a completely solved problem by now.

  • by Anonymous Coward on Tuesday November 16, 2010 @10:34AM (#34241574)

    ....would make it 2x faster! LOC is the #1 metric for programming.
