Data Storage Software Linux

Benchmarking Linux Filesystems Part II 255

Anonymous Coward writes "Linux Gazette has a new filesystem benchmarking article, this time using the 2.6 kernel and showing ReiserFS v4. The second round of benchmarks includes both the metrics from the first filesystem benchmark and the second, in two matrices." From the article: "Instead of a Western Digital 250GB and Promise ATA/100 controller, I am now using a Seagate 400GB and Maxtor ATA/133 Promise controller. The physical machine remains the same, there is an additional 664MB of swap, and I am now running Debian Etch. In the previous article, I was running Slackware 9.1 with custom-compiled filesystem utilities. I've added a small section at the beginning that shows filesystem creation and mount time, and I've also added a graph showing these new benchmarks." We reported on the original benchmarks in the first half of last year.
  • I would agree (Score:2, Informative)

    by jd ( 1658 ) <imipak@ y a hoo.com> on Friday January 06, 2006 @01:56PM (#14410304) Homepage Journal
    From a brief examination of the benchmarks, I'd say the following would seem to hold up:


    • JFS: Great for software development, as it allows rapid file and directory reads, writes, creates and deletes
    • XFS: Seems to work best with much more stable content. Creating and mounting the partition is also fast, and the FS overhead seemed low. Should be good for static databases, particularly if you're going to use a network filing system to access the drive, say using a SAN.
    • Reiser4: Surprisingly, I didn't see Reiser4 really shine at a whole lot in the benchmarks. The massive mount time tells me it needs to be a local drive that only needs mounting the once. Just not sure what sort of data would be best on it.
    • Ext2/Ext3: Mediocre at almost everything. Distros like Fedora that mandate the initial install use ONLY Ext3 are being stupid. They're the best fall-back filing systems if you can't find anything better for what you want the partition to do, but should never be used in specialized contexts.

  • Warning (Score:3, Informative)

    by c0dedude ( 587568 ) on Friday January 06, 2006 @01:57PM (#14410315)
    Remember, fastest!=best. Some filesystems cannot shrink. Some cannot change size at all. If you're doing anything with LVM or RAID, generally ext3 is the way to go. If you're just formatting a disk and using it without anything on top of it, these FS's may be for you. Then again, ext3 looks damn good in the tests as stands. XFS looks like the clear loser.
  • Re:Hardware mismatch (Score:3, Informative)

    by Hextreme ( 900318 ) <aarontc AT aarontc DOT com> on Friday January 06, 2006 @02:00PM (#14410334) Homepage
    This was definitely an issue in testing here. The wide range of "winning" filesystems for the different tests clearly indicates the bottleneck is somewhere other than the disk. In most modern systems, this isn't an issue.

    From TFA: ReiserFS takes a VERY long time to mount the filesystem. I included this test because I found it actually takes minutes to hours mounting a ReiserFS filesystem on a large RAID volume.

    Looks like this guy makes a habit out of using systems with 500MHz CPUs... my dual 3GHz xeon box mounts a 1.2TB raid5 array formatted with ReiserFS in about 33 seconds, give or take a couple seconds.
  • no reason to switch (Score:1, Informative)

    by Anonymous Coward on Friday January 06, 2006 @02:01PM (#14410340)
    Actually, what I take from this is there's no need to switch from a safe, standard EXT3 FS which is the default of many distros.
  • Re:SATA? (Score:3, Informative)

    by MarcQuadra ( 129430 ) * on Friday January 06, 2006 @02:14PM (#14410434)
    IIRC, NCQ support isn't fully baked on Linux yet, so even NCQ-capable controllers and drives won't take advantage of it. I just upgraded my home file server with NCQ-capable gear and I don't think it's being used, even though I'm running the latest kernel.

    There are patches for libATA that enable NCQ, but they're not in the mainline yet.

    The only thing worse than testing without the new technologies would be testing with half-baked implementations of them. Let's wait until NCQ is done before we try testing with it.
  • Normalized results (Score:4, Informative)

    by dtfinch ( 661405 ) * on Friday January 06, 2006 @02:15PM (#14410441) Journal
    Based on the geometric mean of all the benchmark times for each filesystem, which effectively weights all benchmarks equally:
    JFS won
    EXT2 and EXT3 took 17% longer than JFS
    XFS took 29% longer than JFS
    Reiser3 took 38% longer than JFS
    Reiser4 took 52% longer than JFS

    Now, 1.52 seconds is not a whole lot longer to wait than 1 second. With any luck we'll see a post from Hans explaining why Reiser4 took longer, or what sacrifices were made to make the others faster, if there are any.
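The normalization described above (take the geometric mean of each filesystem's benchmark times, so every test carries equal weight regardless of its absolute duration) is easy to reproduce. A minimal sketch; the timing numbers below are made-up placeholders, not the article's data:

```python
import math

# Hypothetical per-filesystem benchmark times in seconds (placeholders,
# not the article's measurements). Lower is better.
times = {
    "jfs":     [10.0, 4.0, 25.0],
    "ext3":    [12.0, 5.0, 28.0],
    "reiser4": [15.0, 7.0, 35.0],
}

def geomean(xs):
    # Geometric mean: nth root of the product, computed via logs so a
    # long list of times doesn't overflow the running product.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

baseline = geomean(times["jfs"])
for fs, xs in sorted(times.items(), key=lambda kv: geomean(kv[1])):
    g = geomean(xs)
    print(f"{fs}: {g:.2f}s ({(g / baseline - 1) * 100:+.0f}% vs JFS)")
```

Because the geometric mean multiplies ratios rather than summing seconds, a filesystem can't win just by dominating the one longest-running test, which is the point of the parent's weighting.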
  • Outdated hardware... (Score:3, Informative)

    by tetabiate ( 55848 ) on Friday January 06, 2006 @02:39PM (#14410666)
    Anyway, how is the average user supposed to be concerned by these results?
    In my daily work I manage hundreds of GBs of data and have hardly seen a significant difference between XFS, JFS and ReiserFS v.3 on relatively modern hardware (Tyan S2882 Pro motherboard, two Opteron 244 processors, 4 GB RAM and two 250-GB SATA HDs) running OpenSuSE 10. I put the most important data on an XFS partition but also have a small ReiserFS partition which can be read from Windows.

    -- Help us to save our cousins the great apes, do not use cell phones.
  • by Anonymous Coward on Friday January 06, 2006 @02:49PM (#14410740)
    The total free-space graph is a poor statistical representation.

    It starts at 345GB and goes to 375GB on the y scale. This makes the gap between 355 and 370 look like a 2.5x difference in bar height when the real increase is only about 4%.

    He does it again in the "make 10,000 directories" graph: 99.5% is not double the CPU use of 97%.
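The distortion the parent describes is easy to quantify. A quick sketch, with the values read approximately off the graph:

```python
# Free-space graph: y axis runs 345-375 GB, with two bars at about
# 355 GB and 370 GB (approximate values read off the chart).
axis_min, lo, hi = 345.0, 355.0, 370.0

# The real difference between the two values.
true_increase = (hi - lo) / lo * 100

# On the truncated axis, bar height is measured from axis_min, so the
# ratio of the drawn bars is what the eye actually compares.
visual_ratio = (hi - axis_min) / (lo - axis_min)

print(f"true increase:   {true_increase:.1f}%")   # about 4.2%
print(f"drawn bar ratio: {visual_ratio:.1f}x")    # 2.5x taller
```

A 4% difference drawn as a 2.5x difference in bar height is exactly the kind of exaggeration a zero-based y axis would avoid.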
  • by oglueck ( 235089 ) on Friday January 06, 2006 @02:50PM (#14410750) Homepage
    Wouldn't a single journaling filesystem transaction be considered three independent writes?

    No. A single transaction comes from a single thread. So the IO scheduler has no freedom here. It consists of these operations:

    1. write redo log
    2. write
    3. clear redo log

    They must occur in exactly this order. There are flush operations involved as well but I am not an expert here.
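The ordering constraint described above (redo log first, then the in-place write, then clearing the log) is the essence of write-ahead journaling. A simplified user-space sketch; the file names and the exact placement of the fsync calls are illustrative assumptions, not how any real filesystem lays its journal out on disk:

```python
import os

def journaled_write(journal_path, data_path, payload: bytes):
    """Write payload to data_path with a crude write-ahead journal."""
    # 1. Write the redo log and force it to stable storage *before*
    #    touching the real data (the write-ahead rule).
    with open(journal_path, "wb") as j:
        j.write(payload)
        j.flush()
        os.fsync(j.fileno())

    # 2. Perform the in-place write. If we crash between steps 1 and 3,
    #    recovery can replay the journal and the write still completes.
    with open(data_path, "wb") as d:
        d.write(payload)
        d.flush()
        os.fsync(d.fileno())

    # 3. Only now is it safe to clear (commit) the journal entry.
    os.remove(journal_path)
```

Because each step is forced to disk before the next begins, the I/O scheduler is not free to reorder them, which is the parent's point: the three writes are ordered, not independent.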
  • by Srdjant ( 650988 ) on Friday January 06, 2006 @03:04PM (#14410880)
    What's with the Microsoft Excel-style graphs? They're not very precise or professional-looking. You would have thought the author would use something better, like gnuplot.

    The author's opinion, "Personally, I still choose XFS for filesystem performance and scalability," is largely irrelevant here and sounds like bias, although the author acknowledges this.

    There is no discussion of the results. The text between the graphs only mentions superficially what is obvious to anyone looking at the graphs.

    Seems a far cry from the very nicely done BSD and Linux benchmark at http://bulk.fefe.de/scalability/ [bulk.fefe.de]
  • by StarHeart ( 27290 ) * on Friday January 06, 2006 @03:19PM (#14411002)
    I am pretty sure that ext3 fixed that with htree indexing. Htree has been around for a while.
  • by flaming-opus ( 8186 ) on Friday January 06, 2006 @03:27PM (#14411073)
    except you don't want to do this. As disks approach full, the contiguous stretches of free space approach length zero, due to fragmentation. This is true on all filesystems. The result is that space allocation on a 98% full disk is much, much slower than on a 2% full disk. With disks as cheap as they are, one shouldn't be sitting around with 95% full disks. If that's the case, there are work-flow/administration issues that need to be worked out, rather than unlocking that last little bit of space.

    As I recall, the default on xfs for irix was to reserve the top 10% for root only.
  • by flaming-opus ( 8186 ) on Friday January 06, 2006 @03:34PM (#14411118)
    You're absolutely correct, as free-space fragmentation can play a HUGE role in the speed of space allocation. Of course, this plays no role at all in stat, rename, remove, readdir, operations, or any reads or any writes to existing parts of files.

    Since the benchmarks presented are so rudimentary anyway, this is maybe not the first thing to worry about.
  • by Anonymous Coward on Friday January 06, 2006 @10:13PM (#14414452)
    Speed is the least important aspect you can think of...

    First off:
    People pissing and moaning that it's an old 500MHz machine should realise that in the real world your CPU is very rarely completely dedicated to managing files.

    That is, when you're running other processes, you'd likely only have '500MHz' worth of performance left for a single process's filesystem access after the kernel is finished scheduling all the other processes and threads.

    So stop bitching. It's as good as any benchmark.

    Secondly if you value your data at all you should be running Ext3.

    It's not so much that XFS sucks or JFS sucks... it's that YOUR HARDWARE SUCKS.

    Yes. That PC sitting in front of you with the nice big SATA drive and nForce chipset is a hunk of shit. All PCs are like that, and it's a reality you have to live with on PC-class hardware. Even on the server.

    It may be fast. But fast ain't everything.

    XFS and JFS are designed for a different class of hardware.

    These are machines built specifically for a task, with the operating system modified/designed to suit that specific hardware and that specific task. They have nice big hard-drive caches with battery backups (just for the drive's cache), big capacitors in the power supplies, and all sorts of redundant ways to monitor different aspects of the hardware. They are designed to be used with a nice UPS and redundant power supplies.

    If the power goes out and the UPS fails, there is enough time for the machine to abort processes and make sure the filesystem and data are in a consistent state in the second it takes for the hardware to finally fail. And the hardware fails in a specific order to avoid data corruption.

    This shit is expensive. This is the 'high end Unix iron' stuff that people talk about. This isn't your Dell dual-CPU crapbox with Windows 2003 thrown on it. The only way to get close to this level of reliability with PC hardware is to use Linux clustering with multiple redundancies and failover network filesystem support and such... and even then there are limitations.

    This is what XFS and JFS are designed for. Even low-end AIX and IRIX hardware had special features to assist the filesystem in protecting itself.

    Ext3, on the other hand, is designed specifically to work with your crappy PC hardware. That's its purpose. That's what it is designed for, and that is why 'enterprise' style Linux distros like Red Hat use it almost exclusively; that's why they helped create it.

    When your PC hardware loses power it craps out randomly. Your CPU could still be sending data to your hard drive while the delicate memory is busy flipping out and sending random garbage down all the channels on its bus. There is no intelligent way for the OS to handle power failures and hardware failures, because the hardware has no intelligent way to handle this stuff.

    That's why Ext3 still has fsck. XFS, for instance, can journal your directory structure (metadata)... but not your data. Ever noticed that? Ext3 supports multiple journaling modes, including full data journaling.

    That's also why ext3 is tied into linux clustering with things like Lustre and GFS.

    That's why, in my opinion, you should use Ext3.

    It may not be as cool as ReiserFS, but if your data matters then use Ext3 AND backups.
