EXT4, Btrfs, NILFS2 Performance Compared
An anonymous reader writes "Phoronix has published Linux filesystem benchmarks comparing the XFS, EXT3, EXT4, Btrfs, and NILFS2 filesystems. This is the first time the new EXT4, Btrfs, and NILFS2 filesystems have been directly compared on disk performance, and the results may surprise you. For the most part, EXT4 came out on top."
What, no ReiserFS? (Score:5, Funny)
you folks are killing me
Btrfs (Score:5, Informative)
The version of Btrfs they used, 0.18, predates the performance optimizations. They now have 0.19, which is supposedly a lot faster and will be in the next kernel release. There's about five months of development work between them:
# v0.19 Released (June 2009) For 2.6.31-rc
# v0.18 Released (Jan 2009) For 2.6.29-rc2
Re: (Score:3, Insightful)
a filesystem whose version begins with a zero doesn't get to be in the same room as my data, much less in charge of maintaining it
Re: (Score:2, Insightful)
Would it make you feel any better if the exact same code was labeled like this instead?
# v1.9 Released (June 2009) For 2.6.31-rc
# v1.8 Released (Jan 2009) For 2.6.29-rc2
Re: (Score:1)
Not much. Actually, I don't care much about version numbers, since there are lots of well-established products out there with version numbers
What matters, though, is code maturity. For any general application, we can afford to put up with a few bugs here and there. A filesystem, however, needs to be proved to be safe, since errors can easily be found only after your last good copy of a file has disappeared out of the b
Re: (Score:1)
Re: (Score:3, Insightful)
A file system whose version begins with zero means the authors don't feel like putting a one there. Nothing more.
That said, btrfs is still under heavy development, and the on-disk format hasn't been finalized. Avoid it for anything important, but not because of arbitrary version numbers.
Re: (Score:1, Informative)
bzzt
Most schemes use a zero in the first sequence to designate alpha or beta status for releases that are not stable enough for general or practical deployment and are intended for testing or internal use only. Alpha- and beta-version software is often given numerical versions less than 1 (such as 0.9), to suggest their approach toward a public "1.0" release
Re: (Score:3, Informative)
Alpha- and beta-version software is often given numerical versions less than 1 (such as 0.9), to suggest their approach toward a public "1.0" release
That's just your personal conception, conditioned by many years of commercial software development. Putting the '1.0' in is a totally arbitrary decision. Lots of Open Source projects are in perfectly stable, usable condition while still at 0.x. The Linux kernel itself was pretty stable at 0.9, with the only major change between that and 1.0 being stabilizing the TCP/IP stack (IIRC).
Some projects don't even use that nomenclature; Gentoo just uses the date of release. On the opposite side of the fence, lots
Re: (Score:2)
Maybe that's because version numbers really don't mean much when it comes to distributions. Fedora 10, Ubuntu 9.04 or Debian 3.0 are merely ways to distinguish different versions of a distribution. Because distros are so complicated and contain so much software (even small ones) you can't be sure that 3.0 will even have the same stuff as 2.0, while with single applications you can be quite sure that you'll get a decent impr
Re:Btrfs (Score:4, Insightful)
At 1.0 release it's supposed to be feature complete, free of show stopper bugs and reliable enough for regular use. Yes, there is some degree of legitimate disagreement as to exactly what that means, but not that much. It's a convention which people have largely agreed to because there needs to be some way of informing the user that this isn't quite ready for prime time. Adding features later on isn't an issue, but it does need to have all the features necessary to function properly.
Then there's ZFS on FreeBSD, which is experimental and will stay experimental until there are enough people working on it for the devs to feel comfortable with things being fixed in a reasonable time.
Re: (Score:1)
. Lots of Open Source projects are in perfectly stable, usable condition when in 0.x status.
Not only that, lots of Open Source projects are in an unusable, unstable condition even at 4.x!
Windows, for instance, was a sick joke in 1.0 and 2.0
IMHO, Windows had to go up to "0x58,0x50" to stop being a sick joke.
Re: (Score:2)
Btrfs tends to perform best at Bennigan's.
Re: (Score:2)
South Park. Butters. His favorite restaurant in the world is Bennigan's.
Wait a second, what's up with the SQLite test? (Score:2)
Talk about optimization or lack of it. Take a look at the SQLite test: EXT3 is something like 80 times faster than EXT4 or Btrfs.
What the heck is going on?! PostgreSQL doesn't seem to show this performance difference.
Really, this is an insanely different score; if it's real, no one in their right mind would run SQL on anything but EXT3.
Something must be wrong with this test.
Re: (Score:2)
The same sort of weirdness shows up in the Mac OS X 10.5.5 versus Ubuntu tests [phoronix.com]. All the tests fluctuate a small amount except for the SQLite test, in which the Mac creams Ubuntu.
Why does SQLite show such extreme behaviour across file systems?
Re: (Score:3, Interesting)
Re: (Score:3, Insightful)
fsync()
Re: (Score:1)
BTW, since Btrfs came from Oracle, and it performs so poorly with SQLite and PostgreSQL, I would be interested in its performance with Oracle's own databases: Oracle RDBMS, Berkeley DB, MySQL... It would be interesting to see if it runs well with Oracle RDBMS, but funny if it takes months to create the database (until 0.20 is out?).
Re:Another lame filesystem review (Score:4, Insightful)
Saying a SATA drive is not an SSD is borderline stupidity, but who's to say that it really matters.
Comparing filesystems under a certain condition is comparing filesystems.
Comparing filesystems on different conditions is NOT comparing filesystems.
Re: (Score:1)
Another lame filesystem comment (Score:5, Informative)
Btrfs includes support for TRIM on SSD, but that's a secondary addition. The main purpose of Btrfs is to compete against Sun's ZFS in the area of robust fault tolerance. If you look at the original announcement [lkml.org], you can see SSD support wasn't on the radar at all; that's strictly been an afterthought in the design. Btrfs is absolutely designed to work on SATA drives and to compete head to head against ext3/ext4.
Re: (Score:2)
So it's competing against ZFS and ext3/ext4. Those are fairly low goals if you ask me.
Re: (Score:3, Informative)
NILFS2 and Btrfs are both TRIM file systems optimized for SSD media. Comparing them to other file systems on a SATA drive is borderline stupidity, because you would never use them on a SATA drive. Any more than comparing NILFS2 or Btrfs to ext3 on an SSD would be.
This statement doesn't make any sense since SSDs can use both the original SATA and SATA II interfaces.
Re: (Score:3, Insightful)
Though I never understood why one would choose to use an SSD on a SATA interface. Using a medium that supports parallel access over a serial interface doesn't seem all that logical to me.
Re: (Score:2)
"Parallel", in this case, doesn't usually mean parallel commands. It means it uses several wires to send a single command.
Implementing the electronics is easier on a serial connection. It's easier to jam the clock speed up than to add all the extra pins required on the ICs to support a parallel connection.
Mind you, SSD speeds are going to rise faster than the designed-by-committee SATA standard can keep up. It won't be long before SSDs are going to have to be on the northbridge.
Re: (Score:2)
NILFS2 is made for SSDs, but Btrfs isn't. NILFS2, because of how it stores files, should have a good read performance advantage due to there being no penalty for random access on an SSD, and if I'm not mistaken its write speed should be fast on just about anything.
Re: (Score:1)
Until you fill up the drive and the garbage collector needs to kick in. From what I know, their garbage collector is currently very basic and unoptimized. It's probably going to take a while before we get the perfect filesystem for the old, cheap SSDs.
Re: (Score:2)
I believe the garbage collector runs the whole time in the background. So as long as you don't fill up the filesystem you will probably be alright.
Re: (Score:2)
The SSD benchmark is coming.
But never mind that, because TFA has some problems interpreting the data. If all the numbers are coming out the same, that indicates the bottleneck is somewhere other than IO. For instance, when requesting a small static file over Apache, the file is probably being fetched right out of the cache. This test might catch a few badly implemented filesystems or hard drive electronics, but the ones in the article might as well be thrown out.
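A quick way to see this caching effect (a rough sketch; the file path and size are made up, it's Linux-only, and the exact numbers depend on the drive): read the same small file twice, and the second read comes from the page cache rather than the platters.

```python
# Rough sketch: show that re-reading a small file is served from the page
# cache unless you explicitly evict it. Path and size are made up; Linux-only.
import os, time

PATH = "/tmp/cache_demo.bin"          # hypothetical test file
DATA = os.urandom(1 << 20)            # 1 MiB of random data

with open(PATH, "wb") as f:
    f.write(DATA)
    f.flush()
    os.fsync(f.fileno())

def timed_read(evict_first: bool) -> float:
    fd = os.open(PATH, os.O_RDONLY)
    if evict_first:
        # Ask the kernel to drop this file's (clean) pages from the cache.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    t0 = time.perf_counter()
    while os.read(fd, 1 << 16):
        pass
    os.close(fd)
    return time.perf_counter() - t0

print("cold read  :", timed_read(evict_first=True))
print("cached read:", timed_read(evict_first=False))
os.unlink(PATH)
```

If an "Apache small static file" benchmark looks like the second number rather than the first, it isn't telling you anything about the filesystem.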
Re: (Score:2)
For instance, when requesting a small static file over Apache, the file is probably being fetched right out of the cache. This test might catch a few badly implemented filesystems or hard drive electronics, but the ones in the article might as well be thrown out.
I stopped reading once I saw the Apache "benchmark." I guarantee it never touched the disk after the first time the small static file was read.
Re: (Score:1, Interesting)
Others have pulled you up on the SATA/SSD remark so I won't cover that again, but you are also a little confused about those filesystems being optimised for SSD.
If you read the NILFS page (http://www.nilfs.org/en/about_nilfs.html), it says nothing about SSDs. It has features you might want on any storage; any benefit to SSD media is just a side effect.
Re: (Score:2)
Where I always saw the benefit of log-structured filesystems was in environments with lots of random writes and few reads (as the random writes become sequential writes). If you use one on a good SSD, however, I could probably safely remove the 'few reads' qualifier. Either way, I'm glad that Linux has one now.
JFS? (Score:2)
Kinda disappointed the article didn't discuss JFS. After running into the fragility of XFS, I tried it out, and it's highly robust, fast, and easy on the CPU.
Re: (Score:1)
Phoronix benchmarked JFS before:
* on a cheap SSD here: http://www.phoronix.com/scan.php?page=article&item=ubuntu_ext4&num=4 [phoronix.com]
* on an expensive SSD here: http://www.phoronix.com/scan.php?page=article&item=intel_x25e_filesystems&num=1 [phoronix.com]
The results were less than impressive, but they could be different in an HDD benchmark.
Re: (Score:2)
Re: (Score:2)
I have personally come to the conclusion that ext3 is a pile of dino droppings. Basically, quota support in ext3 is just not robust enough. Give me XFS any day of the week. In fact, the ability to support "project" quotas, and hence directory quotas, is much, much better than ext3, or ReiserFS, or JFS, or even ZFS (which does not do quotas at all).
Re: (Score:1)
Never used ReiserFS in production, but XFS and ext3 are very good. XFS in (my) "real-world" workloads is the best by far (the exception being mass deletes, which are slow). I don't understand why XFS scores so badly in these benchmarks.
Anyway, one should always test before deployment if the fs is important, and benchmark if speed is important.
Re: (Score:2)
So it tends to be a bit slow for Maildir storage or multimedia storage.
Re: (Score:1)
Sorry to barge in, but generally if your hardware fails you have way bigger problems than the fs drivers. (I mean, if the software tries to write ABC but a CPU/cache/RAM/chipset/whatever error results in the hard drive receiving ABB, that is only detectable by scrubbing the data after the fact.)
Assuming *good* hardware and occasional crashes related to the software not doing the right thing, then yes, you should expect the fs to save most, if not all, of your data. XFS should do this.
Re: (Score:3, Interesting)
JFS has been in "bugfix mode" for some time.
Why is JFS the red-headed stepchild? (Score:5, Insightful)
Ok, I've been wondering this for a long time. IBM contributed JFS to Linux years ago, but no one ever seems to give it a thought when choosing a filesystem. I used it on my computer for a while, and I can't say that I had any complaints (of course, one person's experience doesn't necessarily mean anything). When I looked into the technical features, it seemed to support lots of great things like journaling, Unicode filenames, large files, large volumes (although, granted, some of the newer filesystems *are* supporting larger files/volumes).
Don't get me wrong - some of the newer filesystems (ZFS, Btrfs, NILFS2) do have interesting features that aren't in JFS, and which are great reasons to use the newer systems, but still, it always seems like JFS is left out in the cold. Are there technical reasons people have found it lacking or something? Maybe it's just a case of, "it's a fine filesystem, but didn't really bring any compelling new features or performance gains to the table, so why bother"?
Re: (Score:2)
JFS has treated me very well for the last 2 years or so. It's fast when dealing with small files, unlike XFS. I've never noticed corrupted files after a hard boot, so I prefer it to EXT3. JFS also feels faster... of course, my perception isn't a benchmark.
I would love to see the next generation of filesystems catch on, though. I would really like my data to be automatically checksummed on my file server.
Re: (Score:2)
Because as far as IBM are concerned JFS is not very interesting. I would point out the fact that the DMAPI implementation on JFS has bit rotted, and IBM don't even support HSM on it on Linux. For that you need to buy GPFS, which makes ZFS look completely ordinary.
Re:Why is JFS the red-headed stepchild? (Score:5, Interesting)
Maybe it's just a case of, "it's a fine filesystem, but didn't really bring any compelling new features or performance gains to the table, so why bother"?
I think because it's just not sexy.
But, as you say, if you look into it, it supports all the buzzwords. I use it for everything, and IME it's an excellent, lightweight, unobtrusive filesystem that gets the job done while staying out of my way (which is exactly what I want from a filesystem). It would be nice if it supported things like filesystem shrinking, which is very useful when rearranging partitions, and some of the new features like multiple roots in a single volume are really useful and I'd like JFS to support them, but I can live without them.
JFS also has one really compelling feature for me: it's cheap. CPU-wise, that is. Every benchmark I've seen shows that it's only a little slower than filesystems like XFS but uses way less CPU. (Plus it's much less code. Have you seen the size of XFS?) Given that I tend to use low-end machines, frequently embedded, this is good news for me. It's also good if you have lots of RAM --- an expensive filesystem is very noticeable when all your data is in cache and you're no longer I/O bound.
I hope it sees more love in the future. I'd be gutted if it bit-rotted and got removed from the kernel.
Re: (Score:2)
Word - I use JFS for all my major filesystems, even USB/Firewire drives. Works very well with VMware, and has a very fast FSCK as well.
Comparing Apples and Oranges (Score:3, Insightful)
All of the file systems are designed for specific tasks/circumstances. I'm too lazy to dig up what's special about each, but they are most useful in specific niches. Not that you _can't_ generalize, but calling ext4 the best of the bunch misses the whole point of the other file systems.
Re: (Score:2)
Shh. We want the default choice of our install to be the winner so we look like we are smarter than the people who actually chose something else.
Re: (Score:2)
Could you elaborate on what the niches are for each?
Would it be technically possible to compare benchmarks with the Windows implementations of NTFS and FAT, despite the different underlying kernel?
Re: (Score:1)
Yes, it is possible. You could use Captive NTFS [wikipedia.org] to employ the Windows filesystem implementation.
Do these benchmarks make any sense? (Score:4, Insightful)
The first benchmark on page 2 is 'Parallel BZIP2 Compression'. They are testing the speed of running bzip2, a CPU-intensive program, and drawing conclusions about the filesystem? Sure, there will be some time taken to read and write the large file from disk, but it is dwarfed by the computation time. They then say which filesystems are fastest, but 'these margins were small'. Well, not really surprising. Are the results statistically significant or was it just luck? (They mention running the tests several times, but don't give variance etc.)
All benchmarks are flawed, but I think these really could be improved. Surely a good filesystem benchmark is one that exercises the filesystem and the disk, but little else - unless you believe in the possibility of some magic side-effect whereby the processor is slowed down because you're using a different filesystem. (It's just about possible, e.g. if the filesystem gobbles lots of memory and causes your machine to thrash, but in the real world it's a waste of time running these things.)
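For what it's worth, here's a rough sketch of the sort of test I mean (the file count and sizes are arbitrary, and you would need several repeated runs to say anything about variance): creating lots of small synced files keeps the filesystem and the disk busy rather than the CPU.

```python
# Rough sketch of a benchmark that exercises the filesystem (metadata and
# fsync) instead of the CPU. Counts and sizes are arbitrary.
import os, time, tempfile

N = 2000                                   # number of small files (arbitrary)
payload = b"x" * 4096                      # one 4 KiB block per file

workdir = tempfile.mkdtemp(dir=".")        # run this on the filesystem under test
t0 = time.perf_counter()
for i in range(N):
    path = os.path.join(workdir, f"f{i:05d}")
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
    os.write(fd, payload)
    os.fsync(fd)                           # force it to the disk, not the cache
    os.close(fd)
elapsed = time.perf_counter() - t0

for name in os.listdir(workdir):           # clean up
    os.unlink(os.path.join(workdir, name))
os.rmdir(workdir)

print(f"{N} small synced files in {elapsed:.2f}s ({N/elapsed:.0f} files/s)")
```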
Re: (Score:2)
unless you believe in the possibility of some magic side-effect whereby the processor is slowed down because you're using a different filesystem. (It's just about possible, e.g. if the filesystem gobbles lots of memory and causes your machine to thrash, but in the real world it's a waste of time running these things.)
Some filesystems have higher CPU usage - aside from issues of data structure complexity, btrfs does a load of extra checksumming, for instance.
But your point stands that CPU-bound benchmarks are probably not the best way of measuring a filesystem. It would be interesting to measure CPU usage whilst running a filesystem-intensive workload, or even to measure this indirectly through the slowdown of bzip2 compression whilst running a filesystem-intensive workload in the background.
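Something like this rough sketch could be a starting point (the file name and loop count are arbitrary, and note that work done in kernel worker threads - btrfs checksumming, for example - won't show up in the process's own rusage, so watching /proc/stat system-wide would be more honest):

```python
# Rough sketch: measure how much CPU a filesystem-heavy loop burns,
# not just how long it takes. Counts and file path are arbitrary.
import os, resource, time

PATH = "rewrite_test.bin"                  # hypothetical test file
N = 5000

before = resource.getrusage(resource.RUSAGE_SELF)
t0 = time.perf_counter()

fd = os.open(PATH, os.O_CREAT | os.O_WRONLY, 0o600)
for _ in range(N):
    os.pwrite(fd, b"y" * 4096, 0)          # keep rewriting the same block
    os.fsync(fd)
os.close(fd)
os.unlink(PATH)

after = resource.getrusage(resource.RUSAGE_SELF)
wall = time.perf_counter() - t0
print(f"wall {wall:.2f}s  user {after.ru_utime - before.ru_utime:.2f}s  "
      f"sys {after.ru_stime - before.ru_stime:.2f}s")
```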
Re:Do these benchmarks make any sense? (Score:5, Insightful)
The first benchmark on page 2 is 'Parallel BZIP2 Compression'. They are testing the speed of running bzip2, a CPU-intensive program, and drawing conclusions about the filesystem? Sure, there will be some time taken to read and write the large file from disk, but it is dwarfed by the computation time. (...) Surely a good filesystem benchmark is one that exercises the filesystem and the disk, but little else.
That's one type of benchmark. But you also want a benchmark that shows the performance of CPU-intensive applications while the file system is under heavy use. Why? Because the filesystem code itself uses CPU, and you want to make sure it doesn't use too much of it.
Re: (Score:2)
You do want that, but I'm pretty sure that bzip2 isn't it. Compressing a file is actually pretty light work for the filesystem. You need to read some blocks sequentially, then write some blocks sequentially. Compressing lots of small files is better, but the access is still likely to be pretty one-at-a-time. More challenging would be a task that needs to read and write lots of f
Re: (Score:2)
A processor-intensive test will show which filesystem has the most overhead WRT the processor. And as the test shows, they're all pretty much the same in that regard.
Re: (Score:1)
Only if it's a filesystem-processor-intensive test, that is, you are making the filesystem work hard and (depending on how efficient it is) chew lots of CPU. Giving the filesystem easy work, while running something CPU-intensive like bzip2 separately, is a good benchmark for bzip2 but it doesn't tell you much about the fs.
Re: (Score:1)
Re: (Score:2)
I'd argue that this is true only if they don't disclose their biases and limitations of testing methodology.
Re: (Score:2)
They also said "All mount options and file-system settings were left at their defaults", and I struggle to see the point of doing performance tests to find the fastest file system if you aren't even going to attempt to get the best performance you can out of each filesystem.
Why not do a test that just uses dd to do a straight read from a target hard drive to files on the target filesystem, to eliminate *any* variation?
Re: (Score:1)
BUTTerfs guys
Your fans will be at a huge disadvantage in flamewars
Re: (Score:1)
Who's stripping? (Score:3, Funny)
NILFS2 is pretty interesting (Score:5, Interesting)
NILFS2 (http://www.nilfs.org/en/) is actually a pretty interesting filesystem. It's a log-structured filesystem, meaning that it treats your disk as a big circular logging device.
Log structured filesystems were originally developed by the research community (e.g. see the paper on Sprite LFS here, which is the first example that I'm aware of: http://www.citeulike.org/user/Wombat/article/208320 [citeulike.org]) to improve disk performance. The original assumption behind Sprite LFS was that you'll have lots of memory, so you'll be able to mostly service data reads from your cache rather than needing to go to disk; however, writes to files are still awkward as you typically need to seek around to the right locations on the disk. Sprite LFS took the approach of buffering writes in memory for a time and then squirting a big batch of them onto the disk sequentially at once, in the form of a "log" - doing a big sequential write of all the changes onto the same part of the disk maximised the available write bandwidth. This approach implies that data was not being altered in place, so it was also necessary to write - also into the log - new copies of the inodes whose contents were altered. The new inode would point to the original blocks for unmodified areas of the file and include pointers to the new blocks for any parts of the file that got altered. You can find out the most recent state of a file by finding the inode for that file that has most recently been written to the log.
This design has a load of nice properties, such as:
* You get good write bandwidth, even when modifying small files, since you don't have to keep seeking the disk head to make in-place changes.
* The filesystem doesn't need a lengthy fsck to recover from a crash (although it's not "journaled" like other filesystems, effectively the whole filesystem *is* one big journal, and that gives you similar properties)
* Because you're not repeatedly modifying the same bit of disk it could potentially perform better and cause less wear on an appropriately-chosen flash device (don't know how much it helps on an SSD that's doing its own block remapping / wear levelling...). One of the existing flash filesystems for Linux (JFFS2, I *think*) is log structured.
In the case of NILFS2 they've exploited the fact that inodes are rewritten when their contents are modified to give you historical snapshots that should be essentially "free" as part of the filesystem's normal operation. They have the filesystem frequently make automatic checkpoints of the entire filesystem's state. These will normally be deleted after a time but you have the option of making any of them permanent. Obviously if you just keep logging all changes to a disk it'll get filled up, so there's typically a garbage collector daemon of some kind that "repacks" old data, deletes stuff that's no longer needed, frees disk space and potentially optimises file layout. This is necessary for long term operation of a log structured filesystem, though not necessary if running read-only.
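Here's a toy sketch of the append-only idea, purely as illustration - it has nothing to do with NILFS2's actual on-disk format: every change appends new data blocks plus a fresh inode record, and the newest inode for a name wins.

```python
# Toy model of a log-structured filesystem: append, never overwrite.
# The "disk" is a list; block numbers are indices into it.

log = []                 # append-only sequence of records
inode_table = {}         # filename -> log index of that file's newest inode

def append(record):
    log.append(record)
    return len(log) - 1  # "block number" of the newly written record

def write_file(name, chunks):
    """Write (or rewrite) a file: append data blocks, then a fresh inode."""
    block_ptrs = [append(("data", chunk)) for chunk in chunks]
    inode_table[name] = append(("inode", name, block_ptrs))

def read_file(name):
    """Follow the newest inode's pointers; older copies stay in the log."""
    _, _, block_ptrs = log[inode_table[name]]
    return [log[p][1] for p in block_ptrs]

write_file("hello.txt", [b"hello ", b"world"])
write_file("hello.txt", [b"hello ", b"there"])   # rewrite: old blocks remain
print(read_file("hello.txt"))                    # [b'hello ', b'there']
print(len(log), "records in the log")            # 6 -- nothing was overwritten
# A checkpoint is just a remembered copy of inode_table; a garbage collector
# copies still-referenced records forward and reclaims the rest.
```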
Another modern log structured FS is DragonflyBSD's HAMMER (http://www.dragonflybsd.org/hammer/), which is being ported to Linux as a SoC project, I think (http://hammerfs-ftw.blogspot.com/)
Re: (Score:2)
This is all well and good, but how about having some real features:
* Robust bullet proof quota system
* Directory quotas
* Shrinkable online
* Clusterable
* DMAPI for HSM with a working implementation.
* Storage pool migration so I can mix SATA and SAS/FC in the same file system and do something useful with it.
* Ability to continue functioning when one or more disks is "gone" temporarily or permanently from th
Dubious (Score:5, Insightful)
I suspect their test methodology isn't very good, in particular the SQLite tests. SQLite performance is largely determined by when commits happen, since at that point fsync is called at least twice and sometimes more (the database, the journal, and the containing directory need to be consistent). The disk has to rotate to the relevant point and write the outstanding data to the platters before returning. This takes a considerable amount of time relative to normal disk writing, which is cached and write-behind. If you don't use the same partition for testing, then the differing number of sectors per physical track will affect performance. Similarly, a drive that lies about data being on the platters will seem faster, but is not safe should there be a power failure or similar abrupt stop.
Someone did file a ticket [sqlite.org] at SQLite but from the comments in there you can see that what Phoronix did is not reproducible.
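To make the commit/fsync point concrete, here's a rough sketch (the database file name and row count are made up, and the exact ratio depends on the drive and whether it honours flushes): one commit per row versus one commit for the whole batch. The difference is almost entirely fsync() calls, not raw write throughput.

```python
# Rough illustration: SQLite timing is dominated by how often it commits
# (and therefore fsyncs). Database path and row count are arbitrary.
import sqlite3, time, os

DB = "fsync_demo.db"
N = 500

def run(batched):
    if os.path.exists(DB):
        os.remove(DB)
    con = sqlite3.connect(DB, isolation_level=None)  # autocommit mode
    con.execute("CREATE TABLE t (x INTEGER)")
    t0 = time.perf_counter()
    if batched:
        con.execute("BEGIN")
    for i in range(N):
        con.execute("INSERT INTO t VALUES (?)", (i,))
        # unbatched: every INSERT is its own transaction -> its own fsyncs
    if batched:
        con.execute("COMMIT")
    elapsed = time.perf_counter() - t0
    con.close()
    os.remove(DB)
    return elapsed

print("one commit per row:", run(batched=False))
print("one commit total  :", run(batched=True))
```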
Re: (Score:2)
Here's a post [slashdot.org] linking to some other posts discussing some problems with the Phoronix benchmarking methodology. The same issues seem to be pointed out every time they get a benchmark article published on Slashdot.
So what - speed is not everything in a file system (Score:2)
So what - when I was still using Linux, a working backup (incl. ACLs, xattrs, etc.) was the most important criterion, and XFS came out on top. xfsdump / xfsrestore has saved the day more than once.
Yet another content-free Phoronix fluff article (Score:5, Informative)
Skip TFA - the conclusion is that these benchmarks are invalid.
At least they've improved since last time - they no longer benchmark filesystems using a Quake 3 timedemo.
Re: (Score:2, Interesting)
Re: (Score:2)
Using an outdated version of Btrfs with known performance issues, using different settings for ext3 and ext4. Those are the ones that stand out, but the people in their forums do a good job of ripping apart nearly every benchmark they do.
Sexier technology (Score:2)
Performance should not be determinant! (Score:1)
I'm surprised the filesystem is tested at all (Score:5, Insightful)
Almost all of their tests involve working sets smaller than RAM (the installed RAM size is 4GB, but the working sets are 2GB). Are they testing the filesystems or the buffer cache? I don't see any indication that any of these filesystems are mounted with the "sync" flag.
Re: (Score:2, Insightful)
Almost all of their tests involve working sets smaller than RAM (the installed RAM size is 4GB, but the working sets are 2GB). Are they testing the filesystems or the buffer cache? I don't see any indication that any of these filesystems are mounted with the "sync" flag.
Yup, obviously they're mounting all filesystems with default settings, which can clearly be misleading. Furthermore, testing on a single 250 GB SATA drive maybe isn't that meaningful. What they're benchmarking is desktop performance; for obviously server-oriented filesystems like XFS, Btrfs and NILFS2 that simply doesn't make sense.
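A rough way to show what's actually being measured (arbitrary sizes, Linux-only): run the same writes with and without O_SYNC. The buffered run mostly times the page cache; the O_SYNC run times the filesystem and the disk.

```python
# Rough sketch of the buffered-vs-synchronous gap. With a working set
# smaller than RAM and default mounts, "writes" mostly land in the page
# cache. File path and sizes are arbitrary; Linux-only.
import os, time

PATH = "sync_demo.bin"
CHUNK = b"z" * 4096
N = 2000

def timed_write(flags):
    fd = os.open(PATH, os.O_CREAT | os.O_WRONLY | os.O_TRUNC | flags, 0o600)
    t0 = time.perf_counter()
    for _ in range(N):
        os.write(fd, CHUNK)
    os.close(fd)
    return time.perf_counter() - t0

buffered = timed_write(0)              # lands in the page cache
synced = timed_write(os.O_SYNC)        # each write must reach the disk
os.unlink(PATH)
print(f"buffered: {buffered:.3f}s   O_SYNC: {synced:.3f}s")
```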
NILFS2 is great for write-heavy workloads (Score:2)
At least according to some rough microbenchmarking I've done myself [luaforge.net]. My workload is to write raw CSV to disk as fast as possible. In testing, NILFS2 was nearly 20% faster than ext3 on a spinning disk.
It was also smoother. Under very heavy load ext3 seemingly batched up writes then flushed them all at once, causing my server process to drop from 99% to 70% utilisation. NILFS seemed to consume a roughly constant percentage of CPU the whole time, which is much more in line with what I want.
NILFS2 is not for ev
BTRFS or ZFS or .... (Score:1)
Some others say [storagemojo.com] that, now that Oracle owns Sun, Oracle can change the license of ZFS from CDDL [sun.com] to GPL2 [gnu.org] and port it to Linux. But porting ZFS to Linux is another story [sun.com]...