
EXT4, Btrfs, NILFS2 Performance Compared

An anonymous reader writes "Phoronix has published Linux filesystem benchmarks comparing the XFS, EXT3, EXT4, Btrfs and NILFS2 filesystems. This is the first time the new EXT4, Btrfs and NILFS2 filesystems have been directly compared on disk performance, and the results may surprise. For the most part, EXT4 came out on top."
This discussion has been archived. No new comments can be posted.


  • by Nakarti ( 572310 ) on Tuesday June 30, 2009 @12:16PM (#28529853)

    Saying a SATA drive is not an SSD is borderline stupidity, but who's to say that it really matters.
    Comparing filesystems under the same conditions is comparing filesystems.
    Comparing filesystems under different conditions is NOT comparing filesystems.

  • by mpapet ( 761907 ) on Tuesday June 30, 2009 @12:16PM (#28529855) Homepage

    All of the file systems are designed for specific tasks/circumstances. I'm too lazy to dig up what's special about each, but they are most useful in specific niches. Not that you _can't_ generalize, but calling ext4 the best of the bunch misses the whole point of the other file systems.

  • by Ed Avis ( 5917 ) <ed@membled.com> on Tuesday June 30, 2009 @12:20PM (#28529923) Homepage

    The first benchmark on page 2 is 'Parallel BZIP2 Compression'. They are testing the speed of running bzip2, a CPU-intensive program, and drawing conclusions about the filesystem? Sure, there will be some time taken to read and write the large file from disk, but it is dwarfed by the computation time. They then say which filesystems are fastest, but 'these margins were small'. Well, not really surprising. Are the results statistically significant or was it just luck? (They mention running the tests several times, but don't give variance etc.)

    All benchmarks are flawed, but I think these really could be improved. Surely a good filesystem benchmark is one that exercises the filesystem and the disk, but little else - unless you believe in the possibility of some magic side-effect whereby the processor is slowed down because you're using a different filesystem. (It's just about possible, e.g. if the filesystem gobbles lots of memory and causes your machine to thrash, but in the real world it's a waste of time running these things.)
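
    A minimal sketch of the kind of repetition and variance reporting I mean (the /mnt/testfs mount point and the write-and-fsync workload are just placeholders, not what Phoronix runs):

        import os, statistics, time

        def timed_write(path, size_mb=512, runs=5):
            """Time a simple write-and-fsync workload several times and
            report the mean and standard deviation of the elapsed time,
            so 'small margins' can be judged against run-to-run noise."""
            block = b"\0" * (1024 * 1024)
            samples = []
            for _ in range(runs):
                start = time.monotonic()
                with open(path, "wb") as f:
                    for _ in range(size_mb):
                        f.write(block)
                    f.flush()
                    os.fsync(f.fileno())
                samples.append(time.monotonic() - start)
                os.remove(path)
            return statistics.mean(samples), statistics.stdev(samples)

        mean, stdev = timed_write("/mnt/testfs/bench.tmp")  # hypothetical mount point
        print(f"mean {mean:.2f}s  stdev {stdev:.2f}s")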

  • Re:Btrfs (Score:3, Insightful)

    by Anonymous Coward on Tuesday June 30, 2009 @12:23PM (#28529975)

    a filesystem whose version begins with a zero doesn't get to be in the same room as my data, much less in charge of maintaining it

  • Re:Btrfs (Score:2, Insightful)

    by Anonymous Coward on Tuesday June 30, 2009 @12:39PM (#28530303)

    Would it make you feel any better if the exact same code was labeled like this instead?
    # v1.9 Released (June 2009) For 2.6.31-rc
    # v1.8 Released (Jan 2009) For 2.6.29-rc2

  • Re:Btrfs (Score:3, Insightful)

    by hardburn ( 141468 ) <hardburn@wumpus-ca[ ]net ['ve.' in gap]> on Tuesday June 30, 2009 @12:41PM (#28530375)

    A file system whose version begins with zero means the authors don't feel like putting a one there. Nothing more.

    That said, btrfs is still under heavy development, and the on-disk format hasn't been finalized. Avoid it for anything important, but not because of arbitrary version numbers.

  • by js_sebastian ( 946118 ) on Tuesday June 30, 2009 @12:42PM (#28530393)

    The first benchmark on page 2 is 'Parallel BZIP2 Compression'. They are testing the speed of running bzip2, a CPU-intensive program, and drawing conclusions about the filesystem? Sure, there will be some time taken to read and write the large file from disk, but it is dwarfed by the computation time. (...) Surely a good filesystem benchmark is one that exercises the filesystem and the disk, but little else.

    That's one type of benchmark. But you also want a benchmark that shows the performance of CPU-intensive applications while the file system is under heavy use. Why? Because the filesystem code itself uses CPU, and you want to make sure it doesn't use too much of it. A rough sketch of such a test is below.
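
    This is only a sketch under assumed conditions (the /mnt/testfs path is hypothetical, and it is not what Phoronix runs): time a purely CPU-bound loop on an idle system, then again while a separate process keeps the filesystem busy with synchronous writes, so any CPU the filesystem code burns shows up as a slowdown.

        import multiprocessing, os, time

        SCRATCH = "/mnt/testfs/load.tmp"  # hypothetical mount under test

        def fs_load(stop):
            """Keep the filesystem busy with small synchronous writes."""
            buf = b"x" * 4096
            with open(SCRATCH, "wb") as f:
                while not stop.is_set():
                    f.write(buf)
                    os.fsync(f.fileno())
                    f.seek(0)

        def cpu_task(n=5_000_000):
            """A purely CPU-bound loop to time."""
            total = 0
            for i in range(n):
                total += i * i
            return total

        def timed(fn):
            start = time.monotonic()
            fn()
            return time.monotonic() - start

        if __name__ == "__main__":
            baseline = timed(cpu_task)                 # CPU task on an idle system
            stop = multiprocessing.Event()
            worker = multiprocessing.Process(target=fs_load, args=(stop,))
            worker.start()
            loaded = timed(cpu_task)                   # same task with the FS under load
            stop.set()
            worker.join()
            os.remove(SCRATCH)
            print(f"alone: {baseline:.2f}s  under FS load: {loaded:.2f}s")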

  • Dubious (Score:5, Insightful)

    by grotgrot ( 451123 ) on Tuesday June 30, 2009 @01:21PM (#28531315)

    I suspect their test methodology isn't very good, in particular the SQLite tests. SQLite performance is largely determined by when commits happen, as at that point fsync is called at least twice and sometimes more (the database, journal and containing directory all need to be consistent). The disk has to rotate to the relevant point and write the outstanding data to the platters before returning. This takes a considerable amount of time relative to normal disk writing, which is cached and written behind. If you don't use the same partition for every test, then the differing number of sectors per physical track will affect performance. Similarly, a drive that lies about data being on the platters will seem faster, but is not safe should there be a power failure or similar abrupt stop.

    Someone did file a ticket [sqlite.org] at SQLite but from the comments in there you can see that what Phoronix did is not reproducible.
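
    For anyone curious what "fsync at least twice" looks like, here is a stripped-down sketch of a durable write (not SQLite's actual code, and the path is hypothetical): the file contents are synced first, then the containing directory, so the new entry itself reaches the platters.

        import os

        def durable_write(path, data):
            """Write data and force both the file contents and the directory
            entry to stable storage, roughly the pattern a transactional
            database has to follow on every commit."""
            with open(path, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())      # first fsync: the file contents
            dir_fd = os.open(os.path.dirname(os.path.abspath(path)), os.O_DIRECTORY)
            try:
                os.fsync(dir_fd)          # second fsync: the directory metadata
            finally:
                os.close(dir_fd)

        durable_write("/mnt/testfs/commit.dat", b"transaction payload")  # hypothetical path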

  • by JSBiff ( 87824 ) on Tuesday June 30, 2009 @02:10PM (#28532191) Journal

    Ok, I've been wondering this for a long time. IBM contributed JFS to Linux years ago, but no one ever seems to consider using it. I used it on my computer for a while, and I can't say that I had any complaints (of course, one person's experience doesn't necessarily mean anything). When I looked into the technical features, it seemed to support lots of great things like journaling, Unicode filenames, large files, and large volumes (although, granted, some of the newer filesystems *do* support larger files/volumes).

    Don't get me wrong - some of the newer filesystems (ZFS, Btrfs, NILFS2) do have interesting features that aren't in JFS, and which are great reasons to use the newer systems, but still, it always seems like JFS is left out in the cold. Are there technical reasons people have found it lacking or something? Maybe it's just a case of, "it's a fine filesystem, but didn't really bring any compelling new features or performance gains to the table, so why bother"?

  • Re:Btrfs (Score:4, Insightful)

    by hedwards ( 940851 ) on Tuesday June 30, 2009 @03:14PM (#28533205)
    What exactly it is that warrants an increment from 0.9.9 to 1.0.0 is going to vary somewhat, but in general there are supposed to be a few things in common amongst the releases.

    At 1.0 release it's supposed to be feature complete, free of show stopper bugs and reliable enough for regular use. Yes, there is some degree of legitimate disagreement as to exactly what that means, but not that much. It's a convention which people have largely agreed to because there needs to be some way of informing the user that this isn't quite ready for prime time. Adding features later on isn't an issue, but it does need to have all the features necessary to function properly.

    Then there's ZFS on FreeBSD, which is experimental and will stay experimental until there are enough people working on it for the developers to feel comfortable that problems will be fixed in a reasonable time.

  • by setagllib ( 753300 ) on Tuesday June 30, 2009 @08:05PM (#28536797)

    fsync()

  • by Otterley ( 29945 ) on Tuesday June 30, 2009 @08:22PM (#28536921)

    Almost all of their tests involve working sets smaller than RAM (the installed RAM size is 4GB, but the working sets are 2GB). Are they testing the filesystems or the buffer cache? I don't see any indication that any of these filesystems are mounted with the "sync" flag.
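
    One way to take the buffer cache out of the picture, beyond mounting with "sync", is to drop the caches between runs. A small Linux-only sketch (needs root; dropping caches is a debugging aid, and nothing suggests the benchmarks did this):

        import os

        def drop_caches():
            """Flush dirty pages, then drop the page cache, dentries and
            inodes so the next read actually has to hit the disk.
            Linux-specific and requires root."""
            os.sync()
            with open("/proc/sys/vm/drop_caches", "w") as f:
                f.write("3\n")

        drop_caches()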

  • by geniusj ( 140174 ) on Tuesday June 30, 2009 @09:12PM (#28537249) Homepage

    Though I never understood why one would choose to use an SSD on a SATA interface. Using a medium that supports parallel access over a serial interface doesn't seem all that logical to me.

  • by wazoox ( 1129681 ) on Wednesday July 01, 2009 @09:19AM (#28541271) Homepage

    Almost all of their tests involve working sets smaller than RAM (the installed RAM size is 4GB, but the working sets are 2GB). Are they testing the filesystems or the buffer cache? I don't see any indication that any of these filesystems are mounted with the "sync" flag.

    Yup, obviously they're mounting all filesystems with default settings, which can clearly be misleading. Furthermore, testing on a single 250 GB SATA drive maybe isn't that meaningful. What they're benchmarking is desktop performance; for obviously server-oriented filesystems like XFS, Btrfs and NILFS2, that simply doesn't make sense.

"Experience has proved that some people indeed know everything." -- Russell Baker

Working...