5 Years of Linux Kernel Releases Benchmarked 52
An anonymous reader writes "Phoronix has published benchmarks of the past five years' worth of Linux kernel releases, from the Linux 2.6.12 through Linux 2.6.37 (dev) releases. The results from these benchmarks of 26 versions show that, for the most part, new features haven't affected performance."
Windows Kernels (Score:5, Interesting)
What about running the same study on the Windows kernel from XP to 7?
Re: (Score:3, Insightful)
While interesting, it isn't exactly the same; in Linux, you can actually change just the kernel without changing all the services and surrounding software.
Re: (Score:1)
Re: (Score:1, Flamebait)
The results from these 26 pages of advertisements show that, for the most part, sensationalist bullshit and trolling is as profitable as ever.
Fixed that for them.
Virtual machine, really? (Score:5, Insightful)
They tested in a VM. Now where's the proof that this by itself doesn't affect performance in an unpredictable way?
Re:Virtual machine, really? (Score:5, Informative)
Considering the effort going into virtualization these days and the massive deployments in Fortune 500 companies, the performance of VM based systems is predictable. All the testing with Phoronix Test Suite is repeated until there is less than 3% variance between the results - or the result set is discarded.
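A minimal sketch of that kind of repeat-until-stable loop, in Python (this is not the actual Phoronix Test Suite code; treating the 3% threshold as a coefficient of variation, and the run limits, are assumptions made for illustration):

import statistics

def run_until_stable(benchmark, threshold=0.03, min_runs=3, max_runs=15):
    # Repeat `benchmark` (any callable returning one number) until the
    # relative spread of the results drops below `threshold`.
    # Returns the result list, or None for the "discard the set" case.
    results = []
    for _ in range(max_runs):
        results.append(benchmark())
        if len(results) >= min_runs:
            mean = statistics.mean(results)
            stdev = statistics.stdev(results)
            if mean and stdev / mean < threshold:  # coefficient of variation
                return results
    return None  # never stabilised: too noisy to report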
Realistically, looking at older kernels on modern hardware is actually a very critical dimension for corporate server environments. There are applications in that space that are deployed and supported only on some old distribution. Being able to understand how Red Hat 7.1 will behave vs. Red Hat 5 is critical for some environments.
Re: (Score:2)
the performance of VM based systems is predictable
I agree that benchmarking a single VM on a VM host is a valid thing to do, and will give fairly reproducible results. But it can get more difficult with more complex setups. You need to be able to manage the complexity and eliminate or randomise all the factors. Benchmarking a single VM running on a VM host with 20+ other active VMs, with snapshots being created and merged, and with variable network and disk configurations, gets more difficult.
All the testing with Phoronix Test Suite is repeated until there is less than 3% variance between the results - or the result set is discarded.
What is the minimum number of replicates for each setup? 3% vari
Re: (Score:1)
Until a year or two ago, I used to be an inveterate kernel stripper; any driver or service that wasn't used or supported by my hardware got ruthlessly taken out. This did leave me with more responsive machines at the minor cost of my time. More recently I
Re: (Score:2)
How do you know that running in a VM doesn't affect one kernel version more than another?
Being too lazy/stupid to start a machine on bare metal? Come the fuck on.
Of course, Phoronix being the vile pretend-useful bottom-feeding site that it is, they would never prioritize making sure there are no outside factors over generating page impressions quickly and cheaply.
Re: (Score:2)
How do you know that running on an AMD CPU doesn't affect one kernel version more than another, versus Intel? The same argument stands. It's a machine layer for running code.
Sure, it's not what you want, but don't consider it completely invalid. There are many people who have interest in virtualized performance.
Re: (Score:2)
> How do you know that running on an AMD CPU doesn't affect one kernel version more than another, versus Intel?
It does, at least if you compile for it.
> There are many people who have interest in virtualized performance.
I am amongst them. We run a few hundred VMs.
> Sure, it's not what you want, but don't consider it completely invalid.
Not completely invalid. Yet a very basic mistake in benchmarking was made, due to inability and/or laziness, which could have a major impact on the validity.
We are used to this.
Re: (Score:3, Interesting)
They tested in a VM. Now where's the proof that this by itself doesn't affect performance in an unpredictable way?
If they test in a VM, on only one particular hardware configuration, then the results only apply to that specific test setup. If running the experiments inside a VM introduces variability into the results, this will show up as a large variance. [wikipedia.org] A larger variance does not in itself negate the results, but remember that they can't be generalised to other configurations; they only apply to this particular setup.
In order to produce experimental results that can be ge
Re: (Score:3, Interesting)
The "get to statistical variance" has been in Phoronix Test Suite for the better part of a year.
As part of the new work happening with the Phoronix Test Suite, and the online aggregation site OpenBenchmarking.org, we'll be looking to expose the raw data and allow people to view a particular set of results in a possibly more meaningful way. What is being examined now is raw data (scatter diagram), box plots (percentiles), violin plots (kernel function based), full standard error reporting (error bars, numerical
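For illustration, a small sketch of the kinds of views described above (box plot, violin plot, error bars), using matplotlib and made-up result data; this is not the OpenBenchmarking.org implementation, and the kernel versions and numbers are placeholders:

import statistics
import matplotlib.pyplot as plt

# Hypothetical raw results: repeated runs per kernel version.
raw = {
    "2.6.32": [412.1, 409.8, 415.3, 411.0],
    "2.6.37": [398.7, 401.2, 400.5, 399.9],
}
labels, data = list(raw), list(raw.values())

fig, (ax_box, ax_violin, ax_err) = plt.subplots(1, 3, figsize=(12, 4))
ax_box.boxplot(data, labels=labels)        # percentiles and outliers
ax_box.set_title("Box plot")
ax_violin.violinplot(data)                 # kernel-density shape
ax_violin.set_title("Violin plot")
means = [statistics.mean(d) for d in data]
errs = [statistics.stdev(d) / len(d) ** 0.5 for d in data]  # standard error
ax_err.errorbar(range(len(labels)), means, yerr=errs, fmt="o")
ax_err.set_xticks(range(len(labels)), labels)
ax_err.set_title("Mean +/- standard error")
plt.tight_layout()
plt.show()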
Re: (Score:2, Insightful)
They tested in a VM. Now where's the proof that this by itself doesn't affect performance in an unpredictable way?
Does it matter?
They are after deltas, not absolutes.
*IF* they test each kernel in the same VM on the same metal, then any change is valid. The numbers are abstract; the difference between releases is what is key.
Re: (Score:1)
They tested in a VM. Now where's the proof that this by itself doesn't affect performance in an unpredictable way?
The real problem with running this kind of comparative benchmark in a VM isn't even predictability. It's that virtualization affects kernel performance in many profound ways. Many performance metrics you might choose to test will depend on the host kernel and virtualization environment and how it interacts with the guest kernel. In other words, you're not testing the performance of the guest kernel in isolation.
For example, say you use a combination of host and guest which supports native IO (where the g
Re: (Score:3, Interesting)
In addition, a VM will use the available assigned cores on the host without locking them 1:1. This changes the behavior quite a bit, especially when it comes to CPU cache: the guest thinks it is running on the same core, but in reality it jumps between cores and has to reload from higher-level cache or even memory.
Worse, from a benchmarking standpoint, hyperthreading will be exposed to the guest as separate CPUs. An intelligent scheduler would want to run distinct tasks on different cores, but can't do so i
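On bare metal, one common way to keep such runs comparable is to pin the benchmark to a fixed core so the scheduler can't migrate it mid-run. A minimal Python sketch for Linux follows; the core number and the benchmark script name are placeholders, not anything from the article:

import os
import subprocess

# Restrict this process (and anything it spawns) to CPU 0 so the
# kernel scheduler cannot bounce the benchmark between cores. (Linux only.)
os.sched_setaffinity(0, {0})

# Hypothetical benchmark command; it inherits the affinity mask.
subprocess.run(["./run_benchmark.sh"], check=True)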
Obviously... (Score:1)
Y'all musta forgot (Score:3, Funny)
Re: (Score:1, Redundant)
Re: (Score:1)
As mentioned in a different comment, the only place where you will find older kernels in production these days will be in a VM. These results are completely relevant for the people who will be running older kernels. Old hardware dies and the services migrate; old software doesn't die, it just keeps on living in a VM.
Re: (Score:2)
As mentioned in a different comment, the only place where you will find older kernels in production these days will be in a VM.
Not so. Red Hat Enterprise Linux and CentOS still ship with 2.6.18, for example.
And for embedded Linux, there are products shipping with far older kernels than 2.6.12, the oldest kernel in this test.
In general, expect the "long term stable" kernels (like 2.6.18, 2.6.27 and 2.6.32) to be in production for a long time. When the life cycle of a product is 5+ years, having a stable kernel outweighs having a new one.
Re: (Score:2)
At the last companies I have worked at, the modern hardware doesn't run the old versions of Red Hat. The result is that RH and other legacy versions of Linux are easier to deal with when given the consistent and simple hardware abstraction provided by a VM. I haven't seen a bare-metal deployment of anything older than RHEL 5 for a while (either that, or the system it's running on is on life support).
I agree about embedded devices having older kernels - I'm regularly involved in "shiny and new" vs "old and known" disc
Re: (Score:2)
I wasn't talking about old versions of Red Hat Enterprise Linux. The newest version, 5.6, uses kernel 2.6.18.
Results don't support conclusion (Score:3, Interesting)
It seems almost every benchmark that had any difference was slower in more modern kernels. It's not all sunshine and roses.
Re: (Score:1)
Re: (Score:2)
Yeah, note that some of the benchmarks are measuring bytes/sec so higher is better. :)
Re:Results don't support conclusion (Score:5, Informative)
Better
Worse
Same
Re: (Score:3, Interesting)
Not only that, but they only looked at the kernels as built with one specific version of GCC. Because of that, some of the performance differences could theoretically be accounted for by minute differences in how that compiler handles each kernel's code rather than by the kernel changes themselves.
The bigger thing with Linux performance isn't just the kernel - it's the entire stack. You've got the kernel, sure - and then you've got the core libraries (glibc, etc.) and the compiler which built them. These all can change performance significantly, and in real-world environments, t
whoops (Score:2)
It seems that Phoronix needs a faster kernel on their server...
Seriously though, some of the performance drops (and how they have been sustained in later kernel versions) make me wonder whether there is adequate load testing as part of the kernel QA process.
Re: (Score:2, Insightful)
Keep in mind that the biggest drop was most likely due to ext4 adding data journaling rather than the usual metadata journaling, to make file contents less likely to be corrupted after an unplanned shutdown (power outage, etc.).
I didn't see any mention of them turning that feature off to find out one way or another.
Re: (Score:3, Insightful)
Some of the changes noted in the Linux 2.6.30 kernel change-log that was used throughout the Linux testing process included...
Yeah, that new EXT4 filesystem that they didn't use for obvious reasons. Huge impact on the results.
Re: (Score:2)
change-log for the EXT3 file-system that was used throughout
Quote is available on the third page of the article, first paragraph.
Overkill (Score:4, Funny)
What more Linux benchmarking do you need besides bogomips? Jeez.
Re: (Score:1)
You call those kernel benchmarks? (Score:5, Insightful)
Where are the kernel-level tests that do more than exercise the filesystem and network driver (singular) and the scheduler? More than half of those charts were flat, which could mean they weren't making appropriate measurements.
For example, show how mutexes have improved, or copy-on-write, or interrupt handlers, or timers, or workqueues, or kmalloc, or anything else that a system and kernel programmer would care about. I like the user-centric perspective: it's very good information to have and share, but don't call what you've done a kernel benchmark. Maybe call it a survey of the kernel's impact on users.
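As a toy example of the kind of kernel-path microbenchmark being asked for, here is a rough fork/copy-on-write timing sketch in Python; the buffer size and iteration count are arbitrary, and it measures the whole fork+wait round trip rather than COW faults in isolation:

import os
import time

# Allocate a large buffer, then time fork(): the cost is dominated by
# duplicating page tables for the copy-on-write mapping. (Linux/Unix only.)
buf = bytearray(256 * 1024 * 1024)  # 256 MiB, arbitrary size

samples = []
for _ in range(20):
    start = time.perf_counter()
    pid = os.fork()
    if pid == 0:
        os._exit(0)              # child does nothing and exits
    os.waitpid(pid, 0)
    samples.append(time.perf_counter() - start)

samples.sort()
print("median fork+wait: %.2f ms" % (samples[len(samples) // 2] * 1e3))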
Re: (Score:1)
The only thing they changed was the kernel. Performance differences can only be due to changes in the kernel. In what way is that not a kernel benchmark?
Re: (Score:2)
It's not a COMPLETE kernel benchmark in that it only exercises certain parts of the kernel.
And since you obviously needed a car analogy: it's still like ONLY testing how fast a car goes from 0 to 60 miles per hour, but not testing the towing capacity, fuel efficiency, braking distance, crash performance, or a bunch of other things.
Re: (Score:2)
> Performance differences can only be due to changes in
... or to the VM having better support for certain features used in that particular kernel version, or that particular VM being configured in such a way that some kernels run better than others, or the host kernel somehow having better support for some features of the VM and the benchmarked kernel, or...
> the kernel.
Which is perfectly fine as long as it's made very clear that the benchmarks are subject to all of those conditions. Personally, I think t
Re: (Score:3, Insightful)
IF you were running the tests on real hardware, I'd be more likely to agree.
They weren't. They were running it on a virtualized host in KVM. This means that not only were their results largely determined by the specific network and other drivers they used (which can see significant revision between kernels and may not accurately reflect the kernel itself), but any idiosyncratic behavior in how KVM treats guest interfaces may account for the discrepancies.
ugh (Score:5, Informative)
I love that Phoronix is willing to take the time to run tests like this. I just wish they'd learn how to run meaningful tests. For instance, why are they testing a bunch of CPU-bound things? Kernel won't affect that unless we're talking about SMP performance. If you want to test the kernel, test how well it handles SMP, network I/O and disk I/O. And bear in mind that disk I/O will be hugely affected by which filesystem is used and its configurable settings.
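As a rough illustration of the loopback network I/O angle, here is a toy throughput check in Python; the port, payload size and duration are arbitrary, and this is nowhere near as careful as a real benchmark:

import socket
import threading
import time

PORT = 50007           # arbitrary local port
PAYLOAD = b"x" * 65536
DURATION = 5.0         # seconds

def sink(ready):
    # Accept one connection and discard everything sent to it.
    srv = socket.socket()
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    while conn.recv(65536):
        pass
    conn.close()
    srv.close()

ready = threading.Event()
threading.Thread(target=sink, args=(ready,), daemon=True).start()
ready.wait()

cli = socket.socket()
cli.connect(("127.0.0.1", PORT))
sent, start = 0, time.perf_counter()
while time.perf_counter() - start < DURATION:
    cli.sendall(PAYLOAD)
    sent += len(PAYLOAD)
cli.close()
print("loopback TCP: %.1f MB/s" % (sent / (time.perf_counter() - start) / 1e6))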
Another problem with their article is that it tests individual kernels. Most folks don't use a vanilla kernel. They use one provided by their distro, which may have distro-specific patches that address some of the performance problems (or add new ones). What I would have preferred to see is a comparison of different distro releases over the last 5 years, focusing on the most popular ones (say Ubuntu, Fedora and SuSE).
The meaningful tests (and their results) were:
1. GnuPG: avoid 2.6.30 and later.
2. Loopback TCP: avoid 2.6.30 and later.
3. Apache Compilation: avoid 2.6.29 and earlier.
4. Apache static content: avoid 2.6.12, 2.6.25, 2.6.26, then 2.6.30 and later.
5. PostMark: avoid 2.6.29 and earlier.
6. FS-Mark: avoid 2.6.17 and earlier, 2.6.29, then 2.6.33 to 2.6.36.
7. ioZone: unless you're willing to run 2.6.21 or earlier, avoid 2.6.29 and you're fine.
8. Threaded I/O: avoid 2.6.20 and earlier, 2.6.29, then 2.6.33 to 2.6.36.
Based on these results, #1 and #2 seem to be testing the same thing, and tests #3 and #5 seem to be testing the inverse of whatever that thing is. 2.6.29 seems to be especially crappy, performing worse than the kernels immediately before and immediately after it on tests #6, #7 and #8. In terms of recent kernels, tests #6 and #8 suggest a regression in 2.6.33 that has been resolved in 2.6.37.
If it were me, I'd look at either running 2.6.37 (when it's released) or falling back to 2.6.32 if my hardware were supported.
Re:ugh (Score:4, Insightful)
This made me laugh - in a good way, not at you :).
When Phoronix does a distro comparison, the crowd calls out that the tests are only really testing gcc differences and should have fewer variables changing. When Phoronix does a fixed comparison varying only one part of the system, the crowd calls out that it isn't a good basis since people don't run it that way.
Phoronix runs tests in different ways to explore the performance landscape. For some people it gives precisely the information they need; for others it's completely irrelevant. In this particular case, I'm glad that the data gave you enough to have some open questions about 2.6.32 vs 2.6.37. If people walk away with those sorts of first-order interpretations, the article served its purpose.
Of course, the next step would be: how do we take a tighter look at the delta between 2.6.32 and 2.6.37? Any thoughts?
Regarding meaningful vs meaningless tests: the tests Phoronix runs are a collection of tests to explore. The tests were run, and for some of them the results yielded nothing interesting but were still reported. You don't know until you run the tests, and if the tests are run, you report on them. Some tests may be stable now but may have sensitivity to other parts of the system. Even CPU-bound tests will yield different results in different cases (scheduler, etc.).
Re: (Score:3, Insightful)
e.g. multiple processes in various scenarios:
CPU intensive.
disk IO intensive.
network IO intensive, single NIC.
network IO intensive, two NICs.
network IO intensive, four NICs.
And various combinations of CPU, disk, network.
Then latency tests (a sketch follows the list):
One to X processes with high CPU, while measuring latency experienced by another process.
One to X processes with high IO, while measuring latency experienced by a
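A minimal sketch of the CPU-load latency test described above (scheduling latency felt by one process while others hog the CPU); the hog count and sleep interval are arbitrary choices, not anything specified here:

import multiprocessing as mp
import time

def hog():
    # Burn CPU forever to create scheduling pressure.
    while True:
        pass

def probe(samples=1000, interval=0.001):
    # Measure how much a short sleep overshoots its deadline.
    worst = 0.0
    for _ in range(samples):
        t0 = time.perf_counter()
        time.sleep(interval)
        worst = max(worst, time.perf_counter() - t0 - interval)
    return worst

if __name__ == "__main__":
    hogs = [mp.Process(target=hog, daemon=True) for _ in range(4)]
    for p in hogs:
        p.start()
    print("worst sleep overshoot under load: %.2f ms" % (probe() * 1e3))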
Phoronix, really? (Score:1)
What's next, we all believe Eugenia from OSNews when she spews about BeOS? These guys are just page-view leeches, ignore them and they'll wither and die.
CPU-bound no better, disk & network worse (Score:1)
This comes as no surprise. In any activity which is mostly limited by CPU in user mode, not much changes; you can track that over a number of operating systems. What has gotten slower is disk I/O and network transfer time, and some tests, such as web serving, may be using all or mostly pages in memory, so this is not as obvious as it might be.
In addition, the test was run in a virtual machine, so to some extent the huge host memory provided more resources, and the very fast disk hides poor choices in the io