
EXT4, Btrfs, NILFS2 Performance Compared 102

Posted by timothy
from the where-will-you-put-your-bits-next-year? dept.
An anonymous reader writes "Phoronix has published Linux filesystem benchmarks comparing the XFS, EXT3, EXT4, Btrfs and NILFS2 filesystems. This is the first time that the new EXT4, Btrfs and NILFS2 filesystems have been directly compared when it comes to their disk performance, though the results may surprise. For the most part, EXT4 came out on top."

  • by Anonymous Coward on Tuesday June 30, 2009 @12:10PM (#28529729)

    you folks are killing me

  • Btrfs (Score:5, Informative)

    by JohnFluxx (413620) on Tuesday June 30, 2009 @12:13PM (#28529793)

    The version of Btrfs that they used was before their performance optimizations - 0.18. But they now have 0.19 which is supposedly a lot faster and will be in the next kernel release. There's about 5 months of development work between them:

    # v0.19 Released (June 2009) For 2.6.31-rc
    # v0.18 Released (Jan 2009) For 2.6.29-rc2

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      a filesystem whose version begins with a zero doesn't get to be in the same room as my data, much less in charge of maintaining it

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Would it make you feel any better if the exact same code was labeled like this instead?
        # v1.9 Released (June 2009) For 2.6.31-rc
        # v1.8 Released (Jan 2009) For 2.6.29-rc2

        • Would it make you feel any better if the exact same code was labeled like this instead?

          Not much. Actually, I don't care much about version numbers, since there are lots of well-established products out there with version numbers below 1.0.
          What matters, though, is code maturity. For any general application, we can afford to put up with a few bugs here and there. A filesystem, however, needs to be proven safe, since errors may only be found after your last good copy of a file has disappeared out of the backup.
      • Re: (Score:3, Insightful)

        by hardburn (141468)

        A file system whose version begins with zero means the authors don't feel like putting a one there. Nothing more.

        That said, btrfs is still under heavy development, and the on-disk format hasn't been finalized. Avoid it for anything important, but not because of arbitrary version numbers.

        • Re: (Score:1, Informative)

          by Anonymous Coward

          bzzt

          Most schemes use a zero in the first sequence to designate alpha or beta status for releases that are not stable enough for general or practical deployment and are intended for testing or internal use only. Alpha- and beta-version software is often given numerical versions less than 1 (such as 0.9), to suggest their approach toward a public "1.0" release

          • Re: (Score:3, Informative)

            by hardburn (141468)

            Alpha- and beta-version software is often given numerical versions less than 1 (such as 0.9), to suggest their approach toward a public "1.0" release

            That's just your personal conception, conditioned by many years of commercial software development. Putting the '1.0' in is a totally arbitrary decision. Lots of Open Source projects are in perfectly stable, usable condition when in 0.x status. The Linux kernel itself was pretty stable in 0.9, with the only major changes between that and 1.0 being stabilizing the TCP/IP stack (IIRC).

            Some projects don't even use that nomenclature; Gentoo just uses the date of release. On the opposite side of the fence, lots

            • by AlXtreme (223728)

              Some projects don't even use that nomenclature; Gentoo just uses the date of release

              Maybe that's because version numbers really don't mean much when it comes to distributions. Fedora 10, Ubuntu 9.04 or Debian 3.0 are merely ways to distinguish different versions of a distribution. Because distros are so complicated and contain so much software (even small ones), you can't be sure that 3.0 will even have the same stuff as 2.0, while with single applications you can be quite sure that you'll get a decent improvement.

            • Re:Btrfs (Score:4, Insightful)

              by hedwards (940851) on Tuesday June 30, 2009 @03:14PM (#28533205)
              What exactly warrants an increment from 0.9.9 to 1.0.0 is going to vary somewhat, but in general there are supposed to be a few things in common amongst the releases.

              At 1.0 release it's supposed to be feature complete, free of show stopper bugs and reliable enough for regular use. Yes, there is some degree of legitimate disagreement as to exactly what that means, but not that much. It's a convention which people have largely agreed to because there needs to be some way of informing the user that this isn't quite ready for prime time. Adding features later on isn't an issue, but it does need to have all the features necessary to function properly.

              Then there's ZFS on FreeBSD which is experimental and will be experimental until there's enough people working on it for the dev to feel comfortable with things being fixed in a reasonable time.
            • by xtracto (837672)

              . Lots of Open Source projects are in perfectly stable, usable condition when in 0.x status.

              Not only that, lots of Open Source projects are in an unusable, unstable condition when at 4.x!

              Windows, for instance, was a sick joke in 1.0 and 2.0

              IMHO, Windows had to go up to "0x58,0x50" to stop being a sick joke.

    • Btrfs tends to perform best at Bennigan's.

    • Talk about optimization, or the lack of it. Take a look at the SQLite test: EXT3 is something like 80 times faster than EXT4 or Btrfs.

      What the heck is going on? PostgreSQL does not seem to show this performance difference.

      Really, this is such an insanely different score that, if it's real, no one in their right mind would run SQL on anything but EXT3.

      Something must be wrong with this test.

      • by goombah99 (560566)

        The same sort of weirdness shows up in the Mac OS X 10.5.5 versus Ubuntu tests [phoronix.com]: all the tests fluctuate by a small amount except for the SQLite test, in which the Mac creams Ubuntu.

        Why does SQLite show such extreme behaviour across filesystems?

        • Re: (Score:3, Interesting)

          by liquidpele (663430)
          If I had to guess, I would say it was the way the FS driver was caching pages, and it happened to be very good at guessing what was going to be needed next by SQLite. Then again... the way they were storing and retrieving data in SQLite may have a large impact on the results in that case.
        • Re: (Score:3, Insightful)

          by setagllib (753300)

          fsync()
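          The one-word answer above points at SQLite's durability protocol: every transaction commit forces an fsync(), which waits for data to reach the platters. A minimal sketch of why that can dominate a benchmark (hypothetical file and sizes; actual timings vary wildly by disk and filesystem):

```python
import os
import tempfile
import time

def timed_writes(n, block, do_fsync):
    """Append n blocks to a scratch file, optionally fsync()ing after each one."""
    fd, path = tempfile.mkstemp()
    try:
        t0 = time.perf_counter()
        for _ in range(n):
            os.write(fd, block)
            if do_fsync:
                # Wait for the data to reach stable storage -
                # roughly what SQLite does on every commit.
                os.fsync(fd)
        return time.perf_counter() - t0
    finally:
        os.close(fd)
        os.unlink(path)

block = b"x" * 4096
buffered = timed_writes(100, block, do_fsync=False)  # absorbed by the page cache
synced = timed_writes(100, block, do_fsync=True)     # pays the flush cost each time
```

          On a rotating disk the fsync'd run is typically slower by orders of magnitude, and how each filesystem implements fsync() is exactly where benchmark results can diverge this sharply.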

    • by fatp (1171151)
      Then 0.19 is not actually released (no one uses rc kernels, right?). We can only say it was not born at the right time.

      BTW, since Btrfs came from Oracle, and it performs so poorly with SQLite and PostgreSQL, I would be interested in its performance with Oracle's own databases: Oracle, Berkeley DB, MySQL... It would be interesting to see it run well with Oracle RDBMS, but funny if it takes months to create the database (until 0.20 is out??)
  • by chrylis (262281)

    Kinda disappointed the article didn't discuss JFS. After running into the fragility of XFS, I tried it out, and it's highly robust, fast, and easy on the CPU.

    • by Zygfryd (856098)

      Phoronix benchmarked JFS before:
      * on a cheap SSD here: http://www.phoronix.com/scan.php?page=article&item=ubuntu_ext4&num=4 [phoronix.com]
      * on an expensive SSD here: http://www.phoronix.com/scan.php?page=article&item=intel_x25e_filesystems&num=1 [phoronix.com]
      The results were less than impressive, but they could be different in a HDD benchmark.

    • Re: (Score:3, Interesting)

      JFS has been in "bugfix mode" for some time.

    • by JSBiff (87824) on Tuesday June 30, 2009 @02:10PM (#28532191) Journal

      Ok, I've been wondering this for a long time. IBM contributed JFS to Linux years ago, but no one ever seems to consider using it. I used it on my computer for a while, and I can't say that I had any complaints (of course, one person's experience doesn't necessarily mean anything). When I looked into the technical features, it seemed to support lots of great things like journaling, Unicode filenames, large files, large volumes (although, granted, some of the newer filesystems *are* supporting larger files/volumes).

      Don't get me wrong - some of the newer filesystems (ZFS, Btrfs, NILFS2) do have interesting features that aren't in JFS, and which are great reasons to use the newer systems, but still, it always seems like JFS is left out in the cold. Are there technical reasons people have found it lacking or something? Maybe it's just a case of, "it's a fine filesystem, but didn't really bring any compelling new features or performance gains to the table, so why bother"?

      • by piojo (995934)

        JFS has treated me very well for the last 2 years or so. It's fast when dealing with small files, unlike XFS. I've never noticed corrupted files after a hard boot, so I prefer it to EXT3. JFS also feels faster... of course, my perception isn't a benchmark.

        I would love to see the next generation of filesystems catch on, though. I would really like my data to be automatically checksummed on my file server.

      • by jabuzz (182671)

        Because as far as IBM are concerned JFS is not very interesting. I would point out the fact that the DMAPI implementation on JFS has bit rotted, and IBM don't even support HSM on it on Linux. For that you need to buy GPFS, which makes ZFS look completely ordinary.

      • by david.given (6740) <dg AT cowlark DOT com> on Tuesday June 30, 2009 @10:06PM (#28537725) Homepage Journal

        Maybe it's just a case of, "it's a fine filesystem, but didn't really bring any compelling new features or performance gains to the table, so why bother"?

        I think because it's just not sexy.

        But, as you say, if you look into it, it supports all the buzzwords. I use it for everything, and IME it's an excellent, lightweight, unobtrusive filesystem that gets the job done while staying out of my way (which is exactly what I want from a filesystem). It would be nice if it supported filesystem shrinking, which is very useful when rearranging partitions, and some of the new features like multiple roots in a single volume would be really useful additions to JFS, but I can live without them.

        JFS also has one really compelling feature for me: it's cheap. CPU-wise, that is. Every benchmark I've seen shows that it's only a little slower than filesystems like XFS, but it also uses way less CPU. (Plus it's much less code. Have you seen the size of XFS?) Given that I tend to use low-end machines, frequently embedded, this is good news for me. It's also good if you have lots of RAM --- an expensive filesystem is very noticeable if all your data is in cache and you're no longer I/O bound.

        I hope it sees more love in the future. I'd be gutted if it bit-rotted and got removed from the kernel.

    • by Wolfrider (856)

      Word - I use JFS for all my major filesystems, even USB/Firewire drives. Works very well with VMware, and has a very fast FSCK as well.

  • by mpapet (761907) on Tuesday June 30, 2009 @12:16PM (#28529855) Homepage

    All of the file systems are designed for specific tasks/circumstances. I'm too lazy to dig up what's special about each, but they are most useful in specific niches. Not that you _can't_ generalize, but calling ext4 the best of the bunch misses the whole point of the other file systems.

    • Shh. We want our choice of default install to be the winner so we look like we're smarter than people who actually chose something else.

    • Could you elaborate what the niches are for each?

      Would it be technically possible to compare benchmarks with the Windows implementations of NTFS and FAT, despite the different underlying kernel?

  • by Ed Avis (5917) <ed@membled.com> on Tuesday June 30, 2009 @12:20PM (#28529923) Homepage

    The first benchmark on page 2 is 'Parallel BZIP2 Compression'. They are testing the speed of running bzip2, a CPU-intensive program, and drawing conclusions about the filesystem? Sure, there will be some time taken to read and write the large file from disk, but it is dwarfed by the computation time. They then say which filesystems are fastest, but 'these margins were small'. Well, not really surprising. Are the results statistically significant or was it just luck? (They mention running the tests several times, but don't give variance etc.)

    All benchmarks are flawed, but I think these really could be improved. Surely a good filesystem benchmark is one that exercises the filesystem and the disk, but little else - unless you believe in the possibility of some magic side-effect whereby the processor is slowed down because you're using a different filesystem. (It's just about possible, e.g. if the filesystem gobbles lots of memory and causes your machine to thrash, but in the real world it's a waste of time running these things.)
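    The statistical check asked for above is easy to sketch; this is a toy illustration with made-up timings (not Phoronix's data), showing the kind of summary a benchmark report should include:

```python
import statistics

def summarize(runs):
    """Mean and sample standard deviation of repeated benchmark timings."""
    return statistics.mean(runs), statistics.stdev(runs)

# Made-up timings (seconds) for two filesystems over five runs each.
fs_a = [41.2, 40.8, 41.5, 41.0, 41.3]
fs_b = [40.9, 41.4, 41.1, 40.7, 41.6]

mean_a, sd_a = summarize(fs_a)
mean_b, sd_b = summarize(fs_b)

# Crude rule of thumb: a difference buried inside the run-to-run spread
# should not be reported as one filesystem "winning".
meaningful = abs(mean_a - mean_b) > 2 * max(sd_a, sd_b)
```

    With these numbers the means differ by about 0.02 s while the run-to-run spread is over ten times larger, so `meaningful` comes out False. A proper report would use a real significance test, but even this much would catch the "these margins were small" problem.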

    • unless you believe in the possibility of some magic side-effect whereby the processor is slowed down because you're using a different filesystem. (It's just about possible, e.g. if the filesystem gobbles lots of memory and causes your machine to thrash, but in the real world it's a waste of time running these things.)

      Some filesystems have higher CPU usage - aside from issues of data structure complexity, btrfs does a load of extra checksumming, for instance.

      But your point stands that CPU-bound benchmarks are probably not the best way of measuring a filesystem. It would be interesting to measure CPU usage whilst running a filesystem-intensive workload, or even to measure this indirectly through the slowdown of bzip2 compression whilst running a filesystem-intensive workload in the background.

    • by js_sebastian (946118) on Tuesday June 30, 2009 @12:42PM (#28530393)

      The first benchmark on page 2 is 'Parallel BZIP2 Compression'. They are testing the speed of running bzip2, a CPU-intensive program, and drawing conclusions about the filesystem? Sure, there will be some time taken to read and write the large file from disk, but it is dwarfed by the computation time. (...) Surely a good filesystem benchmark is one that exercises the filesystem and the disk, but little else.

      That's one type of benchmark. But you also want a benchmark that shows the performance of CPU-intensive applications while the filesystem is under heavy use. Why? Because the filesystem code itself uses CPU, and you want to make sure it doesn't use too much of it.

      • by Ed Avis (5917)

        But you also want a benchmark that shows the performance of CPU-intensive applications while the file system is under heavy use.

        You do want that, but I'm pretty sure that bzip2 isn't it. Compressing a file is actually pretty light work for the filesystem. You need to read some blocks sequentially, then write some blocks sequentially. Compressing lots of small files is better, but the access is still likely to be pretty one-at-a-time. More challenging would be a task that needs to read and write lots of files at once.

    • by compro01 (777531)

      A processor-intensive test will show which filesystem has the most overhead WRT the processor. And as the test shows, they're all pretty much the same in that regard.

      • by Ed Avis (5917)

        A processor-intensive test will show which filesystem has the most overhead WRT the processor.

        Only if it's a filesystem-processor-intensive test, that is, you are making the filesystem work hard and (depending on how efficient it is) chew lots of CPU. Giving the filesystem easy work, while running something CPU-intensive like bzip2 separately, is a good benchmark for bzip2 but it doesn't tell you much about the fs.

    • by _32nHz (1572893)
      You need benchmarks to reflect your real-world use. If you always run your benchmarks on idling systems, then filesystems with on-the-fly compression would usually win. However, they are not popular because this isn't a good trade-off for most people. Parallel BZIP2 compression sounds like a good choice as it should stress memory and CPU, whilst giving a common IO pattern and a fairly low inherent performance variance. Obviously you are looking for a fairly small variance in performance, and there are a lot of ot
    • by ckaminski (82854)
      "All benchmarks are flawed"

      I'd argue that this is true only if they don't disclose their biases and limitations of testing methodology.
    • by MrKaos (858439)

      They then say which filesystems are fastest, but 'these margins were small'.

      They also said "All mount options and file-system settings were left at their defaults", and I struggled to see what the point is of doing performance tests to find the fastest file system if you are not going to even attempt to get the best performance you can out of each filesystem.

      Why not do a test that just uses dd to do a straight read from a source hard drive to a file (or files) on the target filesystem, to eliminate *any* variation?
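      For what it's worth, the kind of straight sequential transfer the parent suggests with dd can be approximated in a few lines; a rough sketch (scratch file in the default temp directory, sizes arbitrary):

```python
import os
import tempfile
import time

def sequential_write_mb_s(total_mb=64, block_kb=1024):
    """Stream zero-filled blocks to a scratch file and report MB/s,
    roughly what `dd if=/dev/zero of=... conv=fsync` measures."""
    block = b"\0" * (block_kb * 1024)
    fd, path = tempfile.mkstemp()
    try:
        t0 = time.perf_counter()
        for _ in range(total_mb * 1024 // block_kb):
            os.write(fd, block)
        os.fsync(fd)  # without the flush you mostly measure the page cache
        elapsed = time.perf_counter() - t0
        return total_mb / elapsed
    finally:
        os.close(fd)
        os.unlink(path)

rate = sequential_write_mb_s()
```

      Even this only removes application-level variation; seek patterns, journal placement and the partition's position on the platter still differ between runs.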

  • by clarkn0va (807617) <apt.get@gm a i l . com> on Tuesday June 30, 2009 @12:41PM (#28530379) Homepage
    Yeah, I know I'm behind the times, but when did striping become stripping?
  • by Lemming Mark (849014) on Tuesday June 30, 2009 @12:57PM (#28530731) Homepage

    NILFS2 (http://www.nilfs.org/en/) is actually a pretty interesting filesystem. It's a log-structured filesystem, meaning that it treats your disk as a big circular logging device.

    Log structured filesystems were originally developed by the research community (e.g. see the paper on Sprite LFS here, which is the first example that I'm aware of: http://www.citeulike.org/user/Wombat/article/208320 [citeulike.org]) to improve disk performance.

    The original assumption behind Sprite LFS was that you'll have lots of memory, so you'll be able to mostly service data reads from your cache rather than needing to go to disk; however, writes to files are still awkward as you typically need to seek around to the right locations on the disk. Sprite LFS took the approach of buffering writes in memory for a time and then squirting a big batch of them onto the disk sequentially at once, in the form of a "log" - doing a big sequential write of all the changes onto the same part of the disk maximised the available write bandwidth.

    This approach implies that data was not being altered in place, so it was also necessary to write - also into the log - new copies of the inodes whose contents were altered. The new inode would point to the original blocks for unmodified areas of the file and include pointers to the new blocks for any parts of the file that got altered. You can find out the most recent state of a file by finding the inode for that file that has most recently been written to the log.

    This design has a load of nice properties, such as:
    * You get good write bandwidth, even when modifying small files, since you don't have to keep seeking the disk head to make in-place changes.
    * The filesystem doesn't need a lengthy fsck to recover from a crash (although it's not "journaled" like other filesystems, effectively the whole filesystem *is* one big journal, and that gives you similar properties)
    * Because you're not repeatedly modifying the same bit of disk it could potentially perform better and cause less wear on an appropriately-chosen flash device (don't know how much it helps on an SSD that's doing its own block remapping / wear levelling...). One of the existing flash filesystems for Linux (JFFS2, I *think*) is log structured.

    In the case of NILFS2 they've exploited the fact that inodes are rewritten when their contents are modified to give you historical snapshots that should be essentially "free" as part of the filesystem's normal operation. They have the filesystem frequently make automatic checkpoints of the entire filesystem's state. These will normally be deleted after a time but you have the option of making any of them permanent. Obviously if you just keep logging all changes to a disk it'll get filled up, so there's typically a garbage collector daemon of some kind that "repacks" old data, deletes stuff that's no longer needed, frees disk space and potentially optimises file layout. This is necessary for long term operation of a log structured filesystem, though not necessary if running read-only.
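    The copy-on-write/checkpoint idea described above can be shown with a toy in-memory model. This is a sketch of the concept only, nothing like NILFS2's actual on-disk format; `ToyLogStore` and all its names are invented for illustration:

```python
class ToyLogStore:
    """Toy log-structured store: every update appends; nothing is changed in place."""

    def __init__(self):
        self.log = []          # the append-only "disk"
        self.checkpoints = []  # recorded log positions, i.e. snapshot points

    def put(self, key, value):
        # A new copy is appended; the old record stays behind in the log.
        self.log.append((key, value))

    def get(self, key, upto=None):
        """Most recent value for key; pass a checkpoint position to read a snapshot."""
        end = len(self.log) if upto is None else upto
        for k, v in reversed(self.log[:end]):
            if k == key:
                return v
        return None

    def checkpoint(self):
        self.checkpoints.append(len(self.log))
        return len(self.checkpoints) - 1  # checkpoint id

    def compact(self):
        """The garbage-collector pass: repack only the live (latest) records."""
        latest = {}
        for k, v in self.log:
            latest[k] = v
        self.log = list(latest.items())
        self.checkpoints = []  # this toy GC drops old snapshots

store = ToyLogStore()
store.put("a", 1)
cp = store.checkpoint()
store.put("a", 2)  # appended, not overwritten
assert store.get("a") == 2
assert store.get("a", upto=store.checkpoints[cp]) == 1  # snapshot still readable
```

    The "free" snapshots fall straight out of never overwriting data, and compact() is the toy analogue of the cleaner daemon that reclaims space in a real log-structured filesystem.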

    Another modern log structured FS is DragonflyBSD's HAMMER (http://www.dragonflybsd.org/hammer/), which is being ported to Linux as a SoC project, I think (http://hammerfs-ftw.blogspot.com/)

    • by jabuzz (182671)

      This is all well and good, but how about having some real features

      * Robust bullet proof quota system
      * Directory quotas
      * Shrinkable online
      * Clusterable
      * DMAPI for HSM with a working implementation.
      * Storage pool migration so I can mix SATA and SAS/FC in the same file system and do something useful with it.
      * Ability to continue functioning when one or more disks is "gone" temporarily or permanently from th

  • Dubious (Score:5, Insightful)

    by grotgrot (451123) on Tuesday June 30, 2009 @01:21PM (#28531315)

    I suspect their test methodology isn't very good, in particular the SQLite tests. SQLite performance is largely based on when commits happen, as at that point fsync is called at least twice and sometimes more (the database, journals and containing directory need to be consistent). The disk has to rotate to the relevant point and write outstanding data to the platters before returning. This takes a considerable amount of time relative to normal disk writing, which is cached and write-behind. If you don't use the same partition for testing then the differing number of sectors per physical track will affect performance. Similarly, a drive that lies about data being on the platters will seem to be faster, but is not safe should there be a power failure or similar abrupt stop.

    Someone did file a ticket [sqlite.org] at SQLite but from the comments in there you can see that what Phoronix did is not reproducible.

    • by chrb (1083577)

      Here's a post [slashdot.org] linking to some other posts discussing some problems with the Phoronix benchmarking methodology. The same issues seem to be pointed out every time they get a benchmark article published on Slashdot.

  • So what - when I was still using Linux, a working backup (incl. ACLs, xattrs, etc.) was the most important criterion, and XFS came out on top. xfsdump / xfsrestore have saved the day more than once.

  • by Ant P. (974313) on Tuesday June 30, 2009 @01:35PM (#28531587) Homepage

    Skip TFA - the conclusion is that these benchmarks are invalid.

    At least they've improved since last time - they no longer benchmark filesystems using a Quake 3 timedemo.

    • Re: (Score:2, Interesting)

      by lbbros (900904)
      Not wanting to troll, just asking an honest question: why are they invalid? (No, I haven't RTFA.)
      • by Ant P. (974313)

        Using an outdated version of Btrfs with known performance issues, using different settings for ext3 and ext4. Those are the ones that stand out, but the people in their forums do a good job of ripping apart nearly every benchmark they do.

  • Personally I'm holding out for the initial release of the MILFS2 filesystem. XD
  • It doesn't matter how fast it is if it isn't correct! We as IT professionals should focus more on the CORRECTNESS of the terabytes of data we store, not how many IO/s we get, as long as it does the job we need. Ensuring correctness should be job #1. Right now, in production, safe for me means ZFS. When Linux delivers a comparably stable, tested filesystem I'll be all over it. Right now it still seems like the 1980's, where 99% of people are obsessed with how FAST they can make things. I cringe every time I watch an ad
  • by Otterley (29945) on Tuesday June 30, 2009 @08:22PM (#28536921)

    Almost all of their tests involve working sets smaller than RAM (the installed RAM size is 4GB, but the working sets are 2GB). Are they testing the filesystems or the buffer cache? I don't see any indication that any of these filesystems are mounted with the "sync" flag.

    • Re: (Score:2, Insightful)

      by wazoox (1129681)

      Almost all of their tests involve working sets smaller than RAM (the installed RAM size is 4GB, but the working sets are 2GB). Are they testing the filesystems or the buffer cache? I don't see any indication that any of these filesystems are mounted with the "sync" flag.

      Yup, obviously they're mounting all filesystems with default settings, which can clearly be misleading. Furthermore, testing on a single 250 GB SATA drive maybe isn't that meaningful. What they're benchmarking is desktop performance; for obviously server-oriented filesystems like XFS, Btrfs and NILFS2, that simply doesn't make sense.

  • At least according to some rough microbenchmarking I've done myself [luaforge.net]. My workload is to write raw CSV to disk as fast as possible. In testing, NILFS2 was nearly 20% faster than ext3 on a spinning disk.

    It was also smoother. Under very heavy load ext3 seemingly batched up writes then flushed them all at once, causing my server process to drop from 99% to 70% utilisation. NILFS seemed to consume a roughly constant percentage of CPU the whole time, which is much more in line with what I want.

    NILFS2 is not for everyone.

  • As far as I can see from the comparison of these filesystems, Btrfs is a promising filesystem for Linux and is under active development. Some say that it will be the ZFS of Linux, or even better. I think time will tell.
    Others say [storagemojo.com] that, now that Oracle owns Sun, Oracle can change the license of ZFS from CDDL [sun.com] to GPL2 [gnu.org] and port it to Linux. But porting ZFS to Linux is another story [sun.com]...

"Turn on, tune up, rock out." -- Billy Gibbons

Working...