Benchmarking Linux Filesystems Part II
Anonymous Coward writes "Linux Gazette has a new filesystem benchmarking article, this time using the 2.6 kernel and showing ReiserFS v4. The second round includes the metrics from both the first filesystem benchmark and the second, presented in two matrices." From the article: "Instead of a Western Digital 250GB and Promise ATA/100 controller, I am now using a Seagate 400GB and Maxtor ATA/133 Promise controller. The physical machine remains the same, there is an additional 664MB of swap and I am now running Debian Etch. In the previous article, I was running Slackware 9.1 with custom compiled filesystem utilities. I've added a small section in the beginning that shows the filesystem creation and mount time; I've also added a graph showing these new benchmarks." We reported on the original benchmarks in the first half of last year.
I would agree (Score:2, Informative)
Warning (Score:3, Informative)
Re:Hardware mismatch (Score:3, Informative)
From TFA: ReiserFS takes a VERY long time to mount the filesystem. I included this test because I found it actually takes minutes to hours mounting a ReiserFS filesystem on a large RAID volume.
Looks like this guy makes a habit out of using systems with 500MHz CPUs... my dual 3GHz xeon box mounts a 1.2TB raid5 array formatted with ReiserFS in about 33 seconds, give or take a couple seconds.
no reason to switch (Score:1, Informative)
Re:SATA? (Score:3, Informative)
There are patches for libATA that enable NCQ, but they're not in the mainline yet.
The only thing worse than testing without the new technologies would be testing with half-baked implementations of them. Let's wait until NCQ is done before we try testing with it.
Normalized results (Score:4, Informative)
JFS won
EXT2 and EXT3 took 17% longer than JFS
XFS took 29% longer than JFS
Reiser3 took 38% longer than JFS
Reiser4 took 52% longer than JFS
Now, 1.52 seconds is not a whole lot longer to wait than 1 second. With any luck we'll see a post from Hans explaining why Reiser4 took longer, or what sacrifices were made to make the others faster, if there are any.
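For anyone who wants to reproduce the normalization, here is a minimal sketch. The raw times are illustrative placeholders chosen to match the percentages above, not the article's actual measurements:

```python
# Express each filesystem's total time relative to the fastest (JFS).
# These raw times are made-up placeholders that reproduce the parent's
# percentages; they are not the article's measured values.
raw_times = {
    "JFS": 100.0,
    "EXT3": 117.0,
    "XFS": 129.0,
    "Reiser3": 138.0,
    "Reiser4": 152.0,
}

baseline = min(raw_times.values())  # the fastest filesystem's time
for fs, t in sorted(raw_times.items(), key=lambda kv: kv[1]):
    pct_longer = (t / baseline - 1.0) * 100
    print(f"{fs:8s} {pct_longer:3.0f}% longer than the fastest")
```

With these inputs, Reiser4 comes out 52% longer than JFS, matching the list above.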
Outdated hardware... (Score:3, Informative)
In my daily work I manage hundreds of GB of data and have hardly seen a significant difference between XFS, JFS and ReiserFS v3 on relatively modern hardware (Tyan S2882 Pro motherboard, two Opteron 244 processors, 4 GB RAM and two 250-GB SATA HDs) running OpenSuSE 10. I put the most important data on an XFS partition but also have a small ReiserFS partition which can be read from Windows.
Bad graphs to prove a point (Score:2, Informative)
The y-axis starts at 345GB and goes to 375GB. This makes the difference between 355 and 370 look like a 50% difference rather than the actual 5.7% increase.
He does it again in the "make 10,000 directories" test: 99.5% is not double the CPU use of 97%.
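The distortion can be quantified. On a truncated axis each bar is drawn with height (value minus the axis minimum), so the apparent ratio between two bars is inflated. The exact figures depend on which bars you compare, so the numbers below illustrate the mechanism rather than reproduce the parent's:

```python
# A truncated y-axis draws each bar with height (value - axis_min),
# so the apparent difference between two bars is inflated relative
# to the true percentage difference between the underlying values.
axis_min = 345.0       # where the truncated axis starts (GB)
lo, hi = 355.0, 370.0  # the two bar values compared in the parent

true_increase = (hi - lo) / lo * 100                              # honest % difference
visual_increase = ((hi - axis_min) / (lo - axis_min) - 1) * 100   # what the bars suggest

print(f"true difference:     {true_increase:.1f}%")
print(f"apparent difference: {visual_increase:.0f}%")
```

Here a real difference of about 4% is drawn as if the taller bar were 150% taller, which is exactly the effect the parent is complaining about.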
Re:IDE Drives Cause other Overheads (Score:3, Informative)
No. A single transaction comes from a single thread. So the IO scheduler has no freedom here. It consists of these operations:
1. write redo log
2. write
3. clear redo log
They must occur in exactly this order. There are flush operations involved as well, but I am not an expert here.
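The ordering constraint above can be sketched in a few lines. This is a schematic illustration of write-ahead (redo) logging, not how any real journaling filesystem lays out its log; the file paths and whole-file granularity are simplifications:

```python
import os

def journaled_write(data_path: str, log_path: str, payload: bytes) -> None:
    """Schematic redo-log write: the three steps must reach stable
    storage in exactly this order, which is why the IO scheduler has
    no freedom to reorder them within one transaction."""
    # 1. write the redo log and flush it to disk
    with open(log_path, "wb") as log:
        log.write(payload)
        log.flush()
        os.fsync(log.fileno())

    # 2. perform the real write and flush it
    with open(data_path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())

    # 3. clear the redo log only after the data is durable
    # (a real implementation would also fsync the containing directory)
    os.remove(log_path)
```

On recovery, a surviving redo log means step 2 may not have completed, so the payload can be replayed from the log before the log is cleared.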
Poor benchmark writeup. MS Excel graphs? (Score:2, Informative)
You would have thought the author would use something better, like gnuplot.
The author's opinion ("Personally, I still choose XFS for filesystem performance and scalability") is largely irrelevant here and sounds like bias, although the author acknowledges this.
There is no discussion of the results. The text between the graphs only mentions superficially
what is obvious to anyone looking at the graphs.
Seems a far cry from the very nicely done BSD and Linux benchmark at http://bulk.fefe.de/scalability/ [bulk.fefe.de]
Re:I think trying on a P2 266 is a bad idea (Score:3, Informative)
Re:Need to be careful... (Score:3, Informative)
As I recall, the default for XFS on IRIX was to reserve the top 10% for root only.
Re:Here's what's missing (Score:3, Informative)
Since the benchmarks presented are so rudimentary anyway, this is maybe not the first thing to worry about.
Speed is the most unimportant aspect... (Score:1, Informative)
First off:
People pissing and moaning that it's an old 500MHz machine should realise that in the real world your CPU is very rarely dedicated entirely to managing files.
That is, when you're running other processes, it's likely you'd only have "500MHz" worth of performance left for a single process's filesystem access after the kernel is finished scheduling all the other processes and threads.
So stop bitching. It's as good as any benchmark.
Secondly if you value your data at all you should be running Ext3.
It's not so much that XFS sucks or JFS sucks; it's that YOUR HARDWARE SUCKS.
Yes, that PC sitting in front of you with the nice big SATA drive and nForce chipset is a hunk of shit. All PCs are like that, and it's a reality you have to live with on PC-class hardware. Even on the server.
It may be fast. But fast ain't everything.
XFS and JFS are designed for a different class of hardware.
These are machines built specifically for a task, with the operating system modified or designed to suit that specific hardware and that specific task. They have nice big hard-drive caches with battery backups (just for the drive's cache), big capacitors in the power supplies, and all sorts of redundant ways to monitor different aspects of the hardware. They are designed to be used with proper UPSes and redundant power supplies.
If the power goes out and the UPS fails, there is enough time to abort processes and make sure the filesystem and data are in a consistent state in the second it takes for the hardware to finally fail. And the hardware fails in a specific order to avoid data corruption.
This shit is expensive. This is the "high-end Unix iron" people talk about. It isn't your dual-CPU Dell crapbox with Windows 2003 thrown on it. The only way to get close to this level of reliability with PC hardware is to use Linux clustering with multiple layers of redundancy, failover network filesystem support and so on... and even then there are limitations.
This is what XFS and JFS are designed for. Even low-end AIX and IRIX hardware had special features to assist the filesystem in protecting itself.
Ext3, on the other hand, is designed specifically to work with your crappy PC hardware. That's its purpose. That's what it is designed for, and that is why "enterprise" Linux distros like Red Hat use it almost exclusively; that's why they helped create it.
When your PC hardware loses power it craps out randomly. Your CPU could still be sending data to your hard drive while the delicate memory is busy flipping out and sending random garbage down all the channels on its bus. There is no intelligent way for the OS to handle power failures and hardware failures, because the hardware has no intelligent way to handle this stuff.
That's why Ext3 still has fsck. XFS, for instance, can journal your metadata... but not your data. Ever noticed that? Ext3 supports multiple journalling modes, including full data journalling.
That's also why ext3 is tied into Linux clustering with things like Lustre and GFS.
That's why you, in my opinion, should use Ext3.
It may not be as cool as ReiserFS, but if your data matters then use Ext3 AND backups.