Linux 2.6.38 Released 159
darthcamaro writes "The new Linux 2.6.38 kernel is now out, and it's got a long list of performance improvements that should make Linux a whole lot faster. The kernel includes support for Transparent Huge Pages, Transmit Packet Steering (XPS), automatic process grouping, and a new RCU (Read-Copy-Update)-based path name lookup. '"This patch series was both controversial and experimental when it went in, but we're very hopeful of seeing speedups," James Bottomley, distinguished engineer at Novell said. "Just to set expectations correctly, the dcache/path lookup improvements really only impact workloads with large metadata modifications, so the big iron workloads (like databases) will likely see no change. However, stuff that critically involves metadata, like running a mail server (the postmark benchmark) should improve quite a bit."'"
Kernel Newbies link (Score:5, Informative)
Re:200-line patch (Score:5, Informative)
As someone who knows bugger all about Linux, can anyone confirm whether that patch will have any kind of impact on Android devices, or is it the kind of thing only a desktop user will see a difference with?
The Android kernel and the Linux kernel are pretty much irreparably forked, after the Linux people (perhaps rightly, I don't know) refused to accept the Android patches back into the trunk over the wakelock controversy [slashdot.org]. Unfortunately, the rift there never healed and there was never any real resolution [lwn.net].
In order for this to apply to Android, Google would have to port the changes over.
Re:200-line patch (Score:5, Informative)
Isn't this the version that 200-line patch was slated for?
I'm pretty sure that's what "automatic process grouping" is.
Yup. Some links:
Re:200-line patch (Score:5, Informative)
The example was forking 20 compile processes. Normally that's a big speedup, because when one has to pend on some I/O, another can pick up and do some work on your overall compile. With this new scheduling, instead of 20 new processes crowding the few existing processes into much less CPU, the 20 processes only act like one new process, which makes me wonder why you'd fork 20 processes any more, since they'll have only one process's share of the resource. Might as well run them sequentially; it'll take almost exactly as long.
Say you have regular desktop programs that take some small amount of CPU, and you want to be able to compile things as quickly as possible without making your music skip or your window manager get laggy. Before this you would have to guess at the right number of compile processes to run; too few and it takes longer and doesn't use all your CPU, too many and your desktop gets laggy. Now, the scheduler treats all of the compiler processes as a group, and lets your music player and window manager steal CPU cycles from them more easily -- so you can run more processes and keep the CPU busy, without worrying about your music skipping.
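The arithmetic behind that can be sketched in a few lines. This is a toy model of fair-share scheduling, not the kernel's actual CFS implementation; the function names and the "one group per desktop task" assumption are mine, purely for illustration:

```python
# Toy model: per-process vs. per-group fair sharing of one CPU.
# This is NOT the real scheduler -- it just illustrates the arithmetic.

def per_process_share(n_desktop, n_compile):
    """Old behaviour: every runnable process gets an equal slice."""
    total = n_desktop + n_compile
    return n_desktop / total, n_compile / total

def per_group_share(n_desktop, n_compile):
    """Autogroup-style behaviour (simplified): the compile jobs share a
    single group's slice, while each desktop task keeps its own slice.
    The n_compile tasks then split that one group share among themselves."""
    groups = n_desktop + 1              # each desktop task + one compile group
    desktop_total = n_desktop / groups
    compile_group = 1 / groups
    return desktop_total, compile_group

# 3 desktop processes vs. a 'make -j20'-style compile:
print(per_process_share(3, 20))   # desktop squeezed to 3/23, about 13%
print(per_group_share(3, 20))     # desktop can claim up to 3/4 if it wants it
```

The point the parent makes follows directly: the desktop's guaranteed share jumps from 3/23 to 3/4, while the compile group as a whole is entitled to only 1/4 when the desktop is busy.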
Re:200-line patch (Score:5, Informative)
I think that's the wonder of it,
because I wonder what it will do, too.
Albeit, I haven't followed kernel fixes for years.
I imagine someone's found a way to fake priority by treating a group of processes as one process when allocating CPU, because it solves one problem someone was having while causing someone else a problem.
The example was forking 20 compile processes. Normally that's a big speedup, because when one has to pend on some I/O, another can pick up and do some work on your overall compile. With this new scheduling, instead of 20 new processes crowding the few existing processes into much less CPU, the 20 processes only act like one new process,
which makes me wonder why you'd fork 20 processes any more, since they'll have only one process's share of the resource.
That's not quite right.
Basically, there are lots of conditions that could cause any process to give up its time slice. Your network application may be waiting for packets to process. Your video player may have decoded all the compressed video it needs for the moment, etc. The idea here is that certain programs, even if they're not doing a whole lot of work at any given time, still need frequent service so they can keep doing what they need to do.
If your machine were running 3 processes (in separate groups) and you ran another 20 in a single group, those 20 processes wouldn't wind up limited to 25% of the CPU time. In all likelihood, they'd continue using the lion's share of the machine's resources until the job is done.
What this scheme does do is help out those other three processes: instead of getting 1 time slice each out of every 23 to see if they have work to do, they'll get one out of every four (via group scheduling). If they have a bunch of work to do, this means they'll effectively have higher priority than the individual processes in that big job. But if they're largely idle, the big job will be able to consume the left-over CPU time.
So it's not a perfect system, and it's not any kind of CPU quota or QoS system; it doesn't really restrict what processes on the system can do. It's a hint to the scheduler, to try to give priority to processes that need it.
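The "big job soaks up the leftover time" behaviour described above is what makes group scheduling work-conserving. Here's a crude max-min-fair approximation of it, assuming one CPU and per-group demands expressed as fractions; this is my own simplification, not CFS's actual vruntime mechanism:

```python
# Crude max-min fair allocation across scheduling groups (one CPU = 1.0).
# A group that demands less than its fair share keeps only what it wants;
# the unused capacity is redistributed to groups that still want more.

def max_min_fair(demands, capacity=1.0):
    """demands: dict group -> fraction of the CPU that group actually wants.
    Returns dict group -> fraction allocated."""
    alloc = {g: 0.0 for g in demands}
    remaining = dict(demands)
    cap = capacity
    while remaining and cap > 1e-12:
        share = cap / len(remaining)
        satisfied = {g: d for g, d in remaining.items() if d <= share}
        if not satisfied:
            # Everyone still wants more than a fair share: split evenly.
            for g in remaining:
                alloc[g] += share
            break
        for g, d in satisfied.items():
            alloc[g] += d            # idle-ish groups take only what they need
            cap -= d
            del remaining[g]
    return alloc

# A hungry compile group next to two mostly-idle desktop groups:
print(max_min_fair({"compile": 1.0, "player": 0.05, "wm": 0.05}))
```

With those numbers, the player and window manager each get the 5% they asked for, and the compile group ends up with the remaining 90%: exactly the parent's point that the big job is only throttled when the other groups actually have work to do.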
Re:200-line patch (Score:4, Informative)
(And if it's completely IO bound, there's never been any reason to fork it 20 ways.)
That depends on why it's IO bound. If you're saturating available bandwidth then yes, but if, for example, you're trying to crawl a bunch of really slow webservers on the far side of the internet (high round-trip time), then you'd really want to have several outstanding requests at any given time. Even if you're IO bound against local disk, parallelism can sometimes help a little, since it gives the IO scheduler more to work with.
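The slow-webserver case is easy to demonstrate without touching the network: simulate each request as a sleep for the round-trip time and compare sequential vs. overlapped wall-clock time. The hostnames and the `fetch` stand-in are made up for the sketch:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(host, rtt=0.03):
    """Stand-in for an HTTP request to a slow, far-away server:
    the caller just waits out the round-trip time."""
    time.sleep(rtt)
    return host

hosts = [f"host{i}.example" for i in range(10)]

start = time.monotonic()
for h in hosts:                       # sequential: roughly 10 * rtt
    fetch(h)
sequential = time.monotonic() - start

start = time.monotonic()
with ThreadPoolExecutor(max_workers=10) as pool:   # overlapped: roughly 1 * rtt
    results = list(pool.map(fetch, hosts))
parallel = time.monotonic() - start
```

Because the workers spend nearly all their time waiting rather than computing, ten outstanding requests finish in about one round-trip instead of ten, which is why crawlers parallelize even when they're "IO bound."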
Good detailed summary (Score:4, Informative)
http://www.h-online.com/open/features/What-s-new-in-Linux-2-6-38-1205467.html [h-online.com]
Re:YOUR point's taken, but his? Come on... apk (Score:5, Informative)
Look at it a little closer.
In 2010 (last full year)
Windows 7 - 47 (87% patched)
Linux 2.6 - 47 (94% patched)
But look a little deeper and you find something more interesting
Remote vulnerabilities
Windows - 55%
Linux - 9%
Criticality
Windows - 6% not critical, 36% less critical, 17% moderately critical, 40% highly critical
Linux - 47% not critical, 49% less critical, 4% moderately critical
Impact for System Access
Windows - 47%
Linux - 1%
Not all bugs and vulnerabilities are equal or equally important. Every program, no matter how good, will have bugs, and some bugs will be exploitable. Your comparison is also flawed because 2.6 is much older than Windows 7 (by a factor of about 5). Your reasoning is further flawed because a list of Windows vulnerabilities doesn't include Word, Acrobat, or IE exploits, which also add a number of vulnerabilities to a home desktop Windows system.