
Benchmarking Linux Filesystems Part II

Posted by Zonk
from the some-of-this-content-may-be-inappropriate-for-young-readers dept.
Anonymous Coward writes "Linux Gazette has a new filesystem benchmarking article, this time using the 2.6 kernel and showing ReiserFS v4. The second round of benchmarks includes both the metrics from the first filesystem benchmark and the second in two matrices." From the article: "Instead of a Western Digital 250GB and Promise ATA/100 controller, I am now using a Seagate 400GB and Maxtor ATA/133 Promise controller. The physical machine remains the same, there is an additional 664MB of swap and I am now running Debian Etch. In the previous article, I was running Slackware 9.1 with custom compiled filesystem utilities. I've added a small section in the beginning that shows filesystem creation and mount times; I've also added a graph showing these new benchmarks." We reported on the original benchmarks in the first half of last year.
  • by toofast (20646) * on Friday January 06, 2006 @01:36PM (#14410134) Homepage
    An interesting analysis in every aspect, and it's fine and dandy for the person who uses 400 GB drives and an ATA controller on a 500MHz computer, but I'd like to see how the filesystems compare on a bigass RAID system run by a Power5 server, or a few Itaniums that usually serve a few hundred connected users. Something a bit more "enterprise" - where the choice of a filesystem is a bit more critical than on a small server or a home PC.
    • I'd like to see how they perform on a 12 GB Disk on a P2 266. You really start to see the differences when working on older hardware.
      • Reiser4 kills old disks, supposedly. I (mistakenly) used it on my (at the time) year old laptop, and after about 4-6 months I kept getting something like "drive seek complete" errors. It got to the point where it wouldn't boot because of the errors. So I had to reinstall everything, and no data could be saved since reiser4 hated my drive. Been running Reiser3 ever since, and I haven't had any problems.... yet.
        • It is completely absurd for a filesystem to kill a disk. If you were getting those errors (with the "drive ready" and "seek complete" bits being set being most common) it *strongly* suggests that either your disk is broken or it is improperly powered.

          If you're actually using that disk, still, have a look at it with smartctl. In particular, run "smartctl -t long" on it, and have a look at the results. If it doesn't pass that, don't even think of trusting it with your data.
          • 'Kill' was a little strong for how I meant to use it. What I really meant to say (and cannot now find any data backing me up) is that Reiser4 deals with the disk so intensely that it uncovers flaws and errors that other filesystems may (A) never find, or (B) live with.

            I'd look through the Namesys page, but it's large and the TOC didn't reveal any warnings.. or I wasn't looking hard enough.
      • by H4x0r Jim Duggan (757476) on Friday January 06, 2006 @02:18PM (#14410467) Homepage Journal
        Reiser is not designed for slow CPUs. AFAIK, a key part of the design was that Hans Reiser realised that CPUs were vastly underused. IO resources were maxed out and CPUs were sitting idle. So he found ways to use the CPU to make more efficient use of the IO resources. So this benchmark on a 500MHz machine will of course show Reiser in a bad light, and moving lower down to a 266MHz will make it even worse.

        For a decent benchmark of how filesystems work on modern hardware: use modern hardware.
        • What this says to me, is to never use Reiser on a DB machine. Sure, the disk churn is much more prevalent on such a beast, but the CPU(s) aren't exactly sitting around idle, either.

          It actually sounds like Reiser would do really well as a disk controller in a dedicated drive array. I wonder if anyone has put embedded Linux on such a device, to act as a Reiser RAID controller...
        • by captain_craptacular (580116) on Friday January 06, 2006 @04:12PM (#14411446)
          So this benchmark on a 500Mhz machine will of course show Reiser in a bad light, and moving lower down to a 266Mhz will make it even worse.

          If you look at the charts, the "editing" doesn't help either. For example, one CPU usage chart showed a range starting @ 92% and ending @ 94%. The Reiser4 bar was 3x as long as the next bar, but guess what, it was using something like 0.7% (i.e. 93.7% as opposed to 93%) more CPU. If the scale hadn't been jacked up you wouldn't have been able to spot the difference at all, but the way they chose to present the data, it looked like a total smackdown.
        • But the difference is HUGE - utilizing more CPU power, reiser4 underperforms every filesystem discussed. Are you saying the situation would be reversed if running on a better CPU? How much better are we talking here? I mean, if the difference had been small, I could expect some improvement from moving to more modern hardware, but there is a really big gap between ext2/ext3 (+XFS) performance and Reiser4 - and 500MHz is not that slow, especially doing nothing else but copying files! Let's assume that reise
    • by (653730) on Friday January 06, 2006 @03:01PM (#14410842)
      I'm *sick* of reading filesystem benchmarks by people who don't even care about reading the documentation of the filesystems they compare

      OK, so ext3 is not the fastest filesystem on earth. But it has some default options which make it suck even more than it usually does, and those options are *documented* in Documentation/filesystems/ext3.txt

      * Ext3 does a sync() every 5 seconds. This is because ext3 developers are paranoid about your data and prefer to protect your data rather than win benchmarks. Syncing every 5 seconds ensures you don't lose more than 5 seconds of work, but it hurts on benchmarks. Other filesystems don't do it; if you are doing a FAIR comparison, override the default with the "commit" mount option

      * ext3's default journaling mode is slower than those of XFS, JFS or reiserfs, because it's safer. When ext3 is going to write some metadata to the journal, it takes care of writing to the disk the data associated with that metadata first. XFS and JFS journaling modes do *not* care about this, nor should they; journaling was designed to keep filesystem integrity intact, not data. ext3 does it as an "extra", and it's slower because of that. But if you want to do a fair comparison, you should use the "data=writeback" mount option, which makes ext3 behave like xfs and jfs WRT journaling. Reiserfs' default journaling mode is like XFS/JFS, but you can make it behave like the ext3 default with "data=ordered"

      ext3 is not going to beat the others by using those mount options, but it won't suck so much, and the comparison will be more fair. And remember: ext3 trades off speed for data integrity. There's nothing wrong with XFS and JFS, but _I_ use ext3.
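
      For reference, here's roughly what those two overrides look like as an /etc/fstab fragment - a hedged sketch only, with hypothetical devices and mount points:

```
# /etc/fstab fragment (hypothetical devices and mount points)
# ext3 with a longer commit interval and metadata-only journaling,
# matching the other filesystems' defaults for benchmarking:
/dev/hda1  /mnt/ext3test    ext3      commit=30,data=writeback  0 2
# reiserfs forced into ext3-style ordered-data mode:
/dev/hda2  /mnt/reisertest  reiserfs  data=ordered              0 2
```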
      • Hmmm.... I don't understand your outburst: "OK, so ext3 is not the fastest filesystem on earth" - what? I looked at the benchmarks, and it appears that ext2/3 wins every single test that matters to me: find dirs, files, untar, tar, copy tarball(s), the kernel source tree (ext2/3 now outperforms reiser3). I understand your points, and everything you wrote is true, but there is no need to defend ext3, for it performs admirably (did you RTFA?).
      • May be I am misinterpreting the data somehow, but from a quick look at the article EXT2/3 is performing quite well.

        touch files - slowest
        find files - fastest
        remove files - fastest
        make directories - slowest
        find directories - second best
        remove directories - best
        copy tarball to cur disk - middle of the pack
        copy tarball to other disk - middle of the pack
        untar kernel - fastest
        tar kernel - second best
        remove kernel sources - fastest
        copy tarball - fastest
        create 1GB file - fastest
        copy 1GB file - fastest
        split 100MB -
    • You raise an excellent point, though the only way to get ahold of enough hardware to make that test interesting is to get the system vendor to provide the hardware, in which case you often have limited ability to publish any results they don't like. (Been there, didn't publish that)

      Furthermore, once you get into that high-end of a system, you're generally not all that interested in "general purpose" benchmarks. I have a lot of experience benchmarking filesystems on high-end systems. (15GBytes/s and so on) I
    • by hackstraw (262471) * on Friday January 06, 2006 @04:06PM (#14411400)
      I would rather see these benchmarks on a computer less than 5 years old. I would also appreciate an open source version of the tests so they could be reproduced. For ease of reading, I think the article should be on a separate page on the site as well.

      I've got a screaming Dell 1.6 GHz P4 to test with, and here are my results for a couple of tests. It only has ext3 and whatever cheap hard drive came with the box. I'm not sure if DMA is enabled or if I've done any hdparm tunings, but I'm not sure of their test system either:

      my touch 10,000 files: 24.314 seconds theirs 48.25

      I used a shell script that called /usr/bin/touch

      Now if I use a Perl open() call, I get 8.887 seconds
      Now with a cheesy C that uses fopen() and fclose() I get 4.639 seconds

      my make 10,000 directories: 56.832 seconds theirs 49.87

      that is a shell script

      If I use perl, I get 35.171 seconds

      The /dev/zero stuff is completely bogus. No indication of the blocksize that was used.
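
      To illustrate why the missing block size matters: the same bytes from /dev/zero written with different bs values do the same amount of I/O but a wildly different number of syscalls (output file here is just a temp file):

```shell
# Write the same 64 MiB from /dev/zero with two block sizes: the data is
# identical, but the number of write syscalls differs by 2048x, which is
# exactly why an unreported bs makes the result hard to interpret.
out=$(mktemp)

dd if=/dev/zero of="$out" bs=512 count=131072 2>/dev/null   # 131,072 writes
dd if=/dev/zero of="$out" bs=1M  count=64     2>/dev/null   # 64 writes

stat -c %s "$out"   # 67108864 bytes either way
```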

      The copy kernel stuff to and from a different slower disk with an unknown filesystem on it is useless.

      The split tests are not indicative of anything in real life, and they took on order of between 60 seconds and 130 seconds to perform on their 500MHz system with most being in the 130 second range. I got 16.547 seconds.

      I do not see how any relevant information can be obtained from this article. I'm disappointed in the Linux Gazette and Slashdot for printing this information.

  • by Conor Turton (639827) on Friday January 06, 2006 @01:42PM (#14410190)
    One thing this does show is that you need to be very careful to match the filesystem type to the main tasks the PC is going to be used for. Personally, there's no real clear winner as all have major gains or deficiencies in some areas. One very interesting point was the vast difference in the amount of available space after a partition and format between the different filesystems.
    • I would agree (Score:2, Informative)

      by jd (1658)
      From a brief examination of the benchmarks, I'd say the following would seem to hold up:
      • JFS: Great for software development, as it allows rapid file and directory reads, writes, creates and deletes
      • XFS: Seems to work best with much more stable content. Creating and mounting the partition is also fast, and the FS overhead seemed low. Should be good for static databases, particularly if you're going to use a network filing system to access the drive, say using a SAN.
      • Reiser4: Surprisingly, I didn't see Reiser4 really shine at a whole lot in the benchmarks. The massive mount time tells me it needs to be a local drive that only needs mounting the once. Just not sure what sort of data would be best on it.
      • Re:I would agree (Score:5, Interesting)

        by lawpoop (604919) on Friday January 06, 2006 @02:16PM (#14410450) Homepage Journal
        I'm no expert by any means, but I think the idea behind the ReiserFS is breaking down the FS paradigm from the file level to the line level.

        There is the classic example from the Reiser website. If your password file gets hacked, you have to ditch the whole file if you're using traditional file systems. You only know whether or not the file's been changed. However, with the Reiser system, it can tell you *what line*, and thus which user/password, was changed.

        That's just a taste of where you can go with the ReiserFS. There are other things coming down the pipe; check out the reiser website for a better idea of the new features that ReiserFS promises.
      • It seemed to be either first or second at most of the benchmarks. I really don't consider that mediocre.

        I was pretty surprised by ext3's performance. I also read the article.
        • I have personally had to deal with the results of forgetting to change from EXT3 to something else when setting up one of our servers. Took a year, but one of the database files reached that magical compiled-in limit of 4GB... Fortunately, I caught it shortly after it happened, and was able to rearrange things to keep the server from getting too far out of sync with the rest of the cluster.

          EXT3 has a lot going for it, but the default compile options (at least the ones used by several of the popular packagers) make

      • Re:I would agree (Score:5, Insightful)

        by Anonymous Coward on Friday January 06, 2006 @02:26PM (#14410546)
        Ext2/Ext3: Mediocre at almost everything. Distros like Fedora that mandate the initial install ONLY use Ext3 are being stupid. The best fall-back filing systems if you can't find anything better for what you want the partition to do, but should never be used in specialized contexts.

        Huh? Sorry, did you read the same graphs or are you just trolling?

        This article shows that ext2 and ext3 are close to the top performer in most tests and do not have many "worst-case scenarios" (unlike, e.g. Reiser3 and Reiser4).

        If there is anything that you can conclude after reading this study, it is that ext3 is a reasonably good default choice for a filesystem.

      • What I take away from these benchmarks is that Ext3 is still the most reasonable choice: mature, well supported, and good overall performance.

        JFS, XFS, and ReiserFS are small players with a fraction of the user community and a fraction of the tools and support; their performance would have to be astounding in comparison to Ext3 to even consider them, but it isn't.

        Unfortunately, benchmark-happy people like you, people who optimize for the wrong thing, are far too frequent in this industry.
        • I think XFS supports cached block access (using DMAPI, if I recall). This helps with low-level access, so that the dump utility can operate on a live file system (although write activity during the dump could still cause inconsistency if a snapshot is not used). Ext2FS/Ext3FS don't have such support as far as I know.
      • Re:I would agree (Score:2, Redundant)

        by david.given (6740)
        Reiser4: Surprisingly, I didn't see Reiser4 really shine at a whole lot in the benchmarks. The massive mount time tells me it needs to be a local drive that only needs mounting the once. Just not sure what sort of data would be best on it.

        I think the ReiserFS mount times in the benchmark are misleading. From my experience, mkreiserfs creates an extremely basic file system; the first time you mount it, the file system driver itself will do a lot of heavy housekeeping, which takes ages. Subsequent mounts ar

      • Re:I would agree (Score:3, Interesting)

        by m50d (797211)
        Reiser4: Surprisingly, I didn't see Reiser4 really shine at a whole lot in the benchmarks. The massive mount time tells me it needs to be a local drive that only needs mounting the once. Just not sure what sort of data would be best on it.

        Reiser4 now defaults to journalling everything - file data as well as metadata. If they left it like that, then no wonder it's slower - but it's the best choice if data integrity is important.

        • Reiser4 now defaults to journalling everything - file data as well as metadata. If they left it like that, then no wonder it's slower - but it's the best choice if data integrity is important.

          Best choice for you, perhaps. If data integrity is important, then reiserfs is the last place I'd be looking. I'd be going with ext3 with data journalling enabled.

          • I meant best choice of those tested. I'd certainly like to see benchmarking of reiser4 against ext3 with data journaling.
        • Please, don't use the words "ReiserFS" and "data integrity" in the same sentence.

          Reiser eats filesystems like popcorn. I have used it for around a couple of months on two boxes, and in both cases every file bigger than around 4KB went to hell; in one case on the whole filesystem and in a big subtree in the other. I'll be damned if I ever give it another try, especially considering that other FSes trump it speedwise as well.

          Why? ReiserFS has an order of magnitude more code than ext3, and more than twice a
          • Reiser eats filesystems like popcorn. I have used it for around a couple of months on two boxes, and in both cases every file bigger than around 4KB went to hell; in one case on the whole filesystem and in a big subtree in the other.

            Well, that's the opposite of my experience. When I got fed up with fsck times with ext2, I tried ext3 only to have it unreadably corrupted within a few months. Since then I've used reiser on every system I have, with no problems (including the same disk that was trashed by ext3

            • I have to second the GP. I've used ext3 and Reiser3 alternately on my laptop, and my Reiser3 experiences have usually ended in disaster. From a refusal to mount to Reiser Windows programs not copying stuff over correctly, I've learned to stick to ext3 if you want to be safe.
        • Re:I would agree (Score:3, Interesting)

          by Shelled (81123)
          "Reiser4 ..... it's the best choice if data integrity is important."

          Any time I've lost a drive to data corruption it was formatted Reiser; every attempt at using Reiser eventually resulted in massive data corruption. This was across various hardware and distros. I don't know about the newest version, but thrice bitten, forever XFS for me.

      • Re:I would agree (Score:3, Insightful)

        by smoker2 (750216)

        Ext2/Ext3: Mediocre at almost everything. Distros like Fedora that mandate the initial install ONLY use Ext3 are being stupid. The best fall-back filing systems if you can't find anything better for what you want the partition to do, but should never be used in specialized contexts.

        How the hell did you come up with that opinion?

        Ext3 came 1st or 2nd in 24 out of the 40 tests done. If you were producing an OS for general purpose computing, would you use a specialist fs or the best performing general purpose

      • Distros like Fedora that mandate the initial install ONLY use Ext3 are being stupid

        It's amazing that such commentaries are moderated interesting these days. So, uh, Fedora developers are stupid and you're smarter than them? Please take a look at this [] commentary to understand why such decisions aren't so simple. You can tune your car's engine and it'll be faster, right? But why doesn't everybody tune their engines?

        Let me quote a ext3 paper: "The ext2 and ext3 filesystems on Linux are used by a very large numbe
      • by LWATCDR (28044)
        All the tests were done on a 500MHz PIII machine.
        Not exactly what I would call state of the art. The test results seem valid for a home server that you built out of left-over parts, but not for much else.
        Did he compile the FSs himself? If so, what optimizations did he use with the compiler?
        I don't get the importance of deleting thousands of directories. Do you do that all that often? Why would you?
        What was the point of the test? What environment were they trying to test for?
        Home server?
        Small office
    • by Raphael (18701) <> on Friday January 06, 2006 @02:19PM (#14410476) Homepage Journal
      One very interesting point was the vast difference in the amount of available space after a partition and format between the different filesystems.

      Unfortunately, that graph is rather misleading. The ext2 and ext3 filesystems keep some percentage of the disk space as "reserved" and only root can write to this reserved area. This is useful if the disk contains /var or other directories containing log files, mail queues and other stuff. Even if a normal user has filled the disk to 100%, it is still possible for some processes owned by root to store some files until an administrator can fix the problem. On the other hand, if your filesystem contains only /home or other directories in which users are not competing for disk space with processes owned by root, then it does not make much sense to have a lot of disk space reserved for root. That is why you should think about how the filesystem is going to be used when you create it, and set the amount of reserved space accordingly.

      The default behavior for both ext2 and ext3 is to reserve 5% of the disk space for root. You can see it in the section Creating the Filesystems from the article:

      4883860 blocks (5.00%) reserved for the super user
      You can change this behavior with the -m option, specifying the percentage of the disk space that is reserved. The article did not mention how the filesystem was supposed to be used if it had been used in production. However, I would guess that the option -m 0 or maybe -m 1 could have been used in this case. This would have provided a fair comparison and suddenly you would have seen all filesystems in the same range (close to 373GB available), except maybe for Reiser3.
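
      The arithmetic checks out against the mkfs output quoted above; a quick sanity check in shell (assuming the 4KiB block size that mke2fs typically picks for a disk this large):

```shell
# From the article's mkfs output: 4,883,860 blocks (5.00%) reserved for root.
reserved_blocks=4883860
block_size=4096                                  # assumed 4KiB blocks

total_blocks=$(( reserved_blocks * 100 / 5 ))    # back out the total: 97,677,200
total_gb=$(( total_blocks * block_size / 1000000000 ))
reserved_gb=$(( reserved_blocks * block_size / 1000000000 ))

echo "total size:   ${total_gb} GB"    # 400 GB
echo "root reserve: ${reserved_gb} GB" # withheld from normal users
```

      The knob itself is mke2fs -m N at creation time, or tune2fs -m N on an existing filesystem.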
      • Except you don't want to do this. As disks approach full, the contiguous stretches of free space approach length zero, due to fragmentation. This is true on all filesystems. The result is that space allocation on a 98% full disk is much, much slower than on a 2% full disk. With disks as cheap as they are, one shouldn't be sitting around with 95% full disks. If that's the case, there are work-flow/administration issues that need to be worked out, rather than unlocking that last little bit of space.

  • Hardware mismatch (Score:5, Interesting)

    by lostlogic (831646) on Friday January 06, 2006 @01:52PM (#14410260) Homepage
    It is widely known that Reiser filesystems are heavy on CPU usage, v4 more than v3. These benchmarks seem to show a CPU-bound I/O situation as opposed to an I/O-bound one. As an earlier comment pointed out, the hardware used in this test was a 500MHz CPU. My slowest computer is a 1000MHz system, which is usually IO limited, not CPU limited. I'd be interested to see these same benchmarks run on real hardware, or some more complex benchmarks (random R/W, DB load, etc.). The hardware used for this test would be suitable for a fileserver, but not much else. In that situation, E2, E3 or XFS are probably the right choices, as it points out. What about desktop loads, enterprise loads, or something more interesting?
    • Re:Hardware mismatch (Score:3, Informative)

      by Hextreme (900318)
      This was definitely an issue in testing here. The wide range of "winning" filesystems for the different tests clearly indicates the bottleneck is somewhere other than the disk. In most modern systems, this isn't an issue.

      From TFA: ReiserFS takes a VERY long time to mount the filesystem. I included this test because I found it actually takes minutes to hours mounting a ReiserFS filesystem on a large RAID volume.

      Looks like this guy makes a habit out of using systems with 500MHz CPUs... my dual 3GHz xeon box m
  • by CastrTroy (595695) on Friday January 06, 2006 @01:54PM (#14410279) Homepage
    Here's what's missing. They forgot to tell you how well the drive performed after being used for 1 year, and having constantly moved data from one place to another, and constantly deleting and creating new data. It would have been a better test if the drive was about 75% full, with data from 2 years of use, and then the same tests were performed.
    • Good idea. You should get right on that. Don't forget to keep accurate logs as well as make us pretty graphs to show us how well each filesystem performs?

    • Worse--it doesn't say what mount parameters were used, or if any tuning was done. You can change the performance characteristics significantly if you tune the parameters of the mount. I suspect that reiser4 was in a failsafe mode for data integrity, while the others were doing a bit more caching.
      • Given that filesystem creation was shown, we can probably safely assume that no tuning was done, and that if he had specified mount options, he probably would have shown us those, too... though that last part is in a bit more question.
    • You're absolutely correct, as free-space fragmentation can play a HUGE role in the speed of space allocation. Of course, this plays no role at all in stat, rename, remove, or readdir operations, or in any reads or writes to existing parts of files.

      Since the benchmarks presented are so rudimentary anyway, this is maybe not the first thing to worry about.
  • How about some SATA benchmarks? PATA is good, but I suspect things will be much improved with SATA and NCQ. Does anyone have any links?
    • Re:SATA? (Score:3, Informative)

      by MarcQuadra (129430) *
      IIRC NCQ isn't 100% fully-baked on Linux yet, so even NCQ-capable controllers and drives won't take advantage of it yet. I just upgraded my home file server with NCQ-capable gear and I don't think it's using it yet, even though I'm running the latest kernel.

      There are patches for libATA that enable NCQ, but they're not in the mainline yet.

      The only thing worse than testing without the new technologies would be testing with half-baked implementations of them. Let's wait until NCQ is done before we try testing
    • I want to know the SANTA benchmark. How did he travel all over the world and when will he not be able to handle anymore?
  • Warning (Score:3, Informative)

    by c0dedude (587568) on Friday January 06, 2006 @01:57PM (#14410315)
    Remember, fastest!=best. Some filesystems cannot shrink. Some cannot change size at all. If you're doing anything with LVM or RAID, generally ext3 is the way to go. If you're just formatting a disk and using it without anything on top of it, these FS's may be for you. Then again, ext3 looks damn good in the tests as stands. XFS looks like the clear loser.
    • Re:Warning (Score:4, Insightful)

      by drinkypoo (153816) <> on Friday January 06, 2006 @03:01PM (#14410839) Homepage Journal
      XFS does things that ext? and Reiser can't do. Reiser does things other FSes don't do as well. It's a true 64-bit filesystem and it supports insanely large filesystems, up to 9 million terabytes in 64 bit mode (with a 64 bit kernel.) It even provides realtime support, although I guess that's still beta in linux? It can be defragged and even dumped while live. It has insanely quick crash recovery. And of course, it does other stuff too; check the project page []. XFS may not be the fastest filesystem - it may even be the slowest - but it's got features no other filesystem has. If you need them, XFS is the winner. Hell, if you just trust XFS more than you trust other filesystems, it's the winner. (Sorry, but I wasn't sleeping when reiser was eating everyone's data, and ext3 handles corruption much more poorly than any of the other Journaled options.)
      • Re:Warning (Score:3, Interesting)

        by bani (467531)
        You can't fsck an xfs mounted filesystem, even if it's mounted read-only. If your root fs gets damaged and you need to fsck it, you need to boot from a rescue CD. If it's a server in a remote location, you're shit outta luck.

        ext3 and reiser at least let you fsck read-only mounted filesystems.

        I brought up this problem to xfs developers and their response was "well, it's not a problem on SGIs so we're not going to fix it". Nice.
        • not SOL with the right server grade hardware, just upload the CD ISO image to the remote management board on the server and boot from it.
          • so you have to buy special management hardware just to support xfs, that no other filesystem requires. nice.
      • Reiser does things other FSes don't do as well. It's a true 64-bit filesystem and it supports insanely large filesystems, up to 9 million terabytes in 64 bit mode (with a 64 bit kernel.)

        I don't know about the setup being tested, but when I ran Reiser on a very large (many TB) file system I discovered it gets slower the larger the filesystem; after a while it's simply too slow to use. So while it may "support" large file systems, I'm betting no one has plugged it into a 50TB file system to see if it really wo

      • Amen to that!

        XFS is a mature, stable, and very versatile filesystem. This FS shines best when used with fast disks and battery-backed caching RAID controllers. I am using it quite successfully with Slackware 10.1 and cheap IDE RAID controllers for a homebrew NAS, as well as a PostgreSQL server. SGI was very generous in releasing the XFS source and dedicating resources to the OSS community. The CXFS, or Cluster XFS, version of this filesystem would rock Linux if/when it becomes available.
  • by Clover_Kicker (20761) <> on Friday January 06, 2006 @02:00PM (#14410329)
    I love the CPU utilization graph for "touch 10,000 files".

    A quick glance shows ReiserV4 as much more CPU intensive, you have to look at the scale to realize it only used 0.3% more CPU.

  • somewhat worthless (Score:5, Insightful)

    by aachrisg (899192) on Friday January 06, 2006 @02:06PM (#14410375)
    His benchmark data is ruined by using a grossly unrealistic piece of hardware - modern fast hard disks coupled with a CPU which is absurdly slower than anything you can buy.
  • Sample size (Score:2, Insightful)

    by rongage (237813)

    Am I reading this "benchmark" correctly? Did he base his results on a sample size of 1?

    At the very least, you run multiple times and average the results to give statistically meaningful numbers. I can't think of ANY time where a sample size of 1 was meaningful for anything.

    What would be really interesting is to come up with a reasonable UCL and LCL for each test, and then calculate out a Cpk for each test. It's one thing to say "I got these results one time", it's something much more impressive to say

    • Am I reading this "benchmark" correctly? Did he base his results on a sample size of 1?

      No you're not, and no he didn't. FTFA:

      NOTE5: All tests were run 3 times and the average was taken, if any tests were questionable, they were re-run and checked with the previous average for consistency.
    • Am I reading this "benchmark" correctly? Did he base his results on a sample size of 1?

      At the very least, you run multiple times and average the results to give statistically meaningful numbers. I can't think of ANY time where a sample size of 1 was meaningful for anything.

      Why isn't there a -10 Wrong moderation option?

      From the weak FA:

      NOTE1: Between each test run, a 'sync' and 10 second sleep
      were performed.
      NOTE2: Each file system was tested on a cleanly made
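
      That procedure (three runs with a sync and a pause in between, then average) is easy to reproduce; a minimal sketch with a stand-in workload and a shortened sleep:

```shell
# Shape of the article's NOTE1/NOTE5 procedure: 3 runs, sync + sleep
# between them, then report the average. The workload here is a stand-in;
# the article used real filesystem operations and a 10-second sleep.
total_ms=0
for run in 1 2 3; do
    sync
    sleep 1                        # article used 10 seconds
    t0=$(date +%s%N)
    seq 1 200000 > /dev/null       # stand-in workload
    t1=$(date +%s%N)
    total_ms=$(( total_ms + (t1 - t0) / 1000000 ))
done
avg_ms=$(( total_ms / 3 ))
echo "average: ${avg_ms} ms over 3 runs"
```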
  • by bhirsch (785803) on Friday January 06, 2006 @02:15PM (#14410437) Homepage
    There should be some current (recent 2.6 kernel with XFS, JFS, possibly Reiser4, etc.) benchmarks done on high-end servers (or at least something with drives a few steps up from the CompUSA weekly special), especially if anyone wants to see Linux succeed in the enterprise.
  • Normalized results (Score:4, Informative)

    by dtfinch (661405) * on Friday January 06, 2006 @02:15PM (#14410441) Journal
    Based on the geometric mean of all the benchmark times for each filesystem, which effectively weights all benchmarks equally:
    JFS won
    EXT2 and EXT3 took 17% longer than JFS
    XFS took 29% longer than JFS
    Reiser3 took 38% longer than JFS
    Reiser4 took 52% longer than JFS

    Now, 1.52 seconds is not a whole lot longer to wait than 1 second. With any luck we'll see a post from Hans explaining why Reiser4 took longer, or what sacrifices were made to make the others faster, if there are any.
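    For what it's worth, the geometric-mean normalization described above is easy to reproduce. A minimal sketch in Python, with made-up timings standing in for the article's raw numbers:

```python
import math

# Hypothetical per-benchmark times in seconds; the article's raw
# numbers are not reproduced here, these are made up for illustration.
times = {
    "JFS":     [1.0, 2.0, 4.0],
    "EXT3":    [1.2, 2.3, 4.6],
    "Reiser4": [1.5, 3.1, 6.0],
}

def geometric_mean(xs):
    # exp(mean(log(x))) == (x1 * x2 * ... * xn) ** (1/n),
    # which weights every benchmark equally regardless of scale.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

gmeans = {fs: geometric_mean(ts) for fs, ts in times.items()}
best = min(gmeans.values())
for fs, g in sorted(gmeans.items(), key=lambda kv: kv[1]):
    print(f"{fs}: took {g / best:.2f}x as long as the winner")
```

    Because the geometric mean multiplies ratios rather than summing raw times, a filesystem can't bury a bad showing on short tests under one good long test.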
    • by phoenix.bam! (642635) on Friday January 06, 2006 @02:44PM (#14410702)
      Reiser uses much more CPU for filesystem tasks. ReiserFS is a modern filesystem meant to run on modern machines. This machine is only 500MHz and therefore Reiser performs poorly. Had this machine been a 2GHz machine (standard now, 4x the clock of the test machine), or even a 1GHz one (outdated, and still 2x as fast), Reiser would have performed much better.

      If you want to use parts from 1997 to build a computer, Reiser is not for you. 500MHz is at least 8-year-old technology, if I remember correctly.
      • About six years. 500 MHz processors came out in late 1999.
        • Forgive me sir, but I seriously do not think so.

          Celeron 300A was Spring/Summer of '98. I can't imagine it took Intel a full 18 months to go from 450 MHz on the cheap end to 500 MHz on the fast end.

          Those were the days. I still have a couple of old Alpha heatsinks with what must've been an AMAZING 50 mm fan, weighing almost a whole third of a pound of solid aluminum. HA, fantastic. Two of the BP6s I built are still alive and kicking today. Dual powered Celerons, whoo.

          Compare that to my <a href="http://ww
      • I like it when my system can run at 0-5% CPU usage during even the most intense disk activity, with minimal I/O wait. Disk activity should not be CPU intensive. Reiser4 might win on a faster processor, but any filesystem that takes more than a few % CPU smells of possibly poor scalability, which might pin the CPU at 100% on certain loads or configurations even on a modern system. Maybe there's a bunch of O(n) list operations going on in there that could be made O(log n) or O(1), or maybe it's doing a lot
        • It depends on the purpose of the machine. If the FS is utilizing CPU and RAM to build an L1 cache, align writes, do simple defragmentation during idle, etc., it might chew up quite a bit more CPU than a more conventional FS, which would be really bad if you're using the machine as a PHP server or something... but if you are running the machine as a simple file server, say a remote /usr partition for your network, as you mention, a conventional file system will use at most only 5% of the proc, which means 95% of
        • Reiser's stated goal is to increase throughput through additional CPU usage. If you're not using the CPU for anything, since, say, you're blocking on I/O, it makes sense to expend some CPU if you can get the I/O done faster.

          On the other hand, if you're running a database, yes, you need all the CPU you have. The question then becomes how much I/O you get per unit of CPU, but the situation is very complex; it's not necessarily some easily reducible linear system. It could be that you get, for ex
      • by cecom (698048)

        While I basically agree with you, 500MHz is not four times slower than 2 GHz. However in this case it is probably worse, since a 500MHz PIII implies a slow 100MHz side bus, slow 33MHz PCI bus, slow PC100 memory. A terrible system for doing benchmarks in 2006! It is completely unrepresentative of anything.

        Actually, I am getting angrier as I write this. It was just wrong to publish an article using such an outdated system. People worried about high FS performance are not going to be using anything like tha

      • If you want to use parts from 1997 to build a computer, Reiser is not for you.

        If you want to use your CPU for things other than handling the filesystem, Reiser is not for you. If you know you have enough RAM to hold the currently used files, Reiser is not for you. If you want a filesystem that is good at quickly creating/deleting a lot of small files (compiling, etc.: JFS), ReiserFS is not for you. If you want good linear throughput (video processing: XFS), ReiserFS is not for you. If you want somethin
      • So what? Doing the same benchmark with faster CPUs will make ReiserFS look faster, but doing the same benchmark with a faster IO system will make all the others catch up again.

        It's bad enough when video games call you an idiot because you don't have last week's video card, but it's crossing the line when filesystem authors call you an idiot because you don't have last week's CPU.

        If the CPU ain't broke, don't throw it away!
    • JFS ... (Score:3, Interesting)

      by Pegasus (13291)
      Of course JFS won, since it was designed to be as simple as possible ... it originated on OS/2, after all. On such a machine as used in this test, that's a huge advantage.
  • by j0ebaker (304465) on Friday January 06, 2006 @02:20PM (#14410491) Homepage Journal
    It would be interesting to see the results of the same tests running against a SCSI drive system where there is less IO overhead to see if the results differ.
    There are other considerations here as well. What about the I/O elevator's tuning options?
    Yes, I'd much rather see this test occur against a SCSI drive or better yet against a RAM drive for pure software performance.

    Cheers, fellow Slashdotters!
    -Joe Baker
    • The I/O scheduler should not matter here, as it only comes into play when multiple processes access the disk at once.
  • Outdated hardware... (Score:3, Informative)

    by tetabiate (55848) on Friday January 06, 2006 @02:39PM (#14410666)
    Anyway, why should the average user care about these results?
    In my daily work I manage hundreds of GBs of data and have hardly seen a significant difference between XFS, JFS and ReiserFS v3 on relatively modern hardware (Tyan S2882 Pro motherboard, two Opteron 244 processors, 4 GB RAM and two 250-GB SATA HDs) running OpenSuSE 10. I put the most important data on an XFS partition but also have a small ReiserFS partition which can be read from Windows.

    -- Help us to save our cousins the great apes, do not use cell phones.
  • by Anonymous Coward
    The total free-space graph is a poor statistical representation.

    It starts at 345GB and goes to 375GB on the y scale. This makes the bar for 370 look two and a half times as tall as the one for 355, when it's really only about a 4% increase.

    He does it again in the "make 10,000 directories" graph: 99.5% is not double the CPU use of 97%.
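    The distortion the parent describes is easy to quantify. A small sketch (Python, with the values read off the free-space graph) comparing the real percentage change with the apparent bar-height ratio on a y axis that starts at 345 instead of zero:

```python
def pct_increase(old, new):
    # True relative change between two values.
    return (new - old) / old * 100.0

def apparent_ratio(old, new, axis_min):
    # Ratio of bar heights when the y axis starts at axis_min
    # instead of zero, which is what the eye actually compares.
    return (new - axis_min) / (old - axis_min)

# Values read off the article's free-space graph (GB).
print(pct_increase(355, 370))        # real change: ~4.2%
print(apparent_ratio(355, 370, 345)) # bar looks 2.5x as tall
```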
  • by strredwolf (532) on Friday January 06, 2006 @02:56PM (#14410806) Homepage Journal
    You know, I was looking at all these stats from this roundup... and while I'm glad they have one nice stat (how much space the FS itself takes, leaving the rest free), I'm not happy that there is no "we've loaded it up, let's see how much is left" statistic.

    What am I saying? I want to know how efficient these filesystems are at packing the data onto the HD.

    • I know Reiser v3 has "tail packing": it takes small files, and the ends of files that stick out past a block boundary, and packs them into "sub-blocks" to save space. ext2/3 is stuck at the block boundary (even though you can adjust the size of these blocks).
    • I don't know if ext2/3 has been enhanced to pack small files in inode data.
    • JFS and XFS do not have a tail-packing feature, and are likewise stuck at (adjustable) block boundaries.

    I'm glad that you get more data out of Reiser v4, JFS, and XFS at formatting time, but my feeling is that Reiser v4 (once profiled, tweaked and refined for speed and space) will pack data tighter than anyone else. Meanwhile, I'm looking for something like ext3 that packs better.
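    To put numbers on the block-boundary point above: without tail packing, a file's last partial block is rounded up to a whole block, so every small file wastes almost a full block. A back-of-the-envelope sketch in Python, assuming a common 4 KiB block size:

```python
BLOCK = 4096  # assumed 4 KiB filesystem block size

def blocks_used(size, block=BLOCK):
    # Without tail packing, the last partial block is rounded
    # up to a whole block (ceiling division).
    return -(-size // block)

def slack(size, block=BLOCK):
    # Bytes wasted in the final, partially filled block.
    return blocks_used(size, block) * block - size

print(slack(100))    # a 100-byte file wastes 3996 bytes
print(slack(4096))   # an exact multiple wastes nothing: 0
```

    On average each file wastes half a block, which is why a filesystem full of small files benefits so much from tail packing.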
  • What's with the Microsoft Excel-style graphs? They're not very precise or professional-looking.
    You would have thought the author would use something better, like gnuplot.

    The author's opinion "Personally, I still choose XFS for filesystem performance and scalability."
    is largely irrelevant here and sounds like bias, although the author acknowledges this.

    There is no discussion of the results. The text between the graphs only mentions superficially
    what is obvious to anyone looking at the graphs.

    Seems a far cry f
  • by hansreiser (6963) on Friday January 06, 2006 @03:40PM (#14411159) Homepage
    If someone does not know that filesystem benchmarks that take less than a tenth of a second are meaningless, it makes you wonder whether they made errors in other aspects as well. These results are not consistent with the results that we have had. I bet he made no effort to ensure that the disk actually had to be read for these benchmarks, or that he did not copy his file set from the same fs he was measuring (which makes a HUGE difference to performance and is the mistake every beginner makes), etc. You'll note that the way he makes his graphs makes 1% differences look huge, etc.
    • If I exclude the "less than 1 second" benchmarks, the reiser filesystems are a little better off, but still in last place. If I additionally exclude the "Remove 10,000 Directories" benchmark, reiser 4 and 3 move up to 2nd and 3rd place, and EXT2 and 3 move into last. JFS seems to win 1st no matter how I work the numbers.
  • XFS - UPS = Disaster (Score:3, Interesting)

    by fire-eyes (522894) on Friday January 06, 2006 @03:41PM (#14411175) Homepage
    XFS is a nice filesystem, I like it. Not enough to use in production, but I like it. Personally I use reiserfs3.6 on many production servers, and have never seen a problem. I am experimenting with 4 at home.

    I have a strong warning if you are considering XFS: if you don't have a GOOD power backup (UPS), don't use it. XFS caches writes very aggressively in RAM. You lose power, you lose that data.

    XFS was designed for datacenters with good power backup in place, not home users. So choose carefully.
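    For what it's worth, an application can protect its own critical writes, whatever the filesystem's caching policy, by forcing them to stable storage with fsync(). A minimal sketch in Python (the path and payload are just placeholders):

```python
import os

def write_durably(path, data):
    # Write and then fsync() before closing, so the kernel's write
    # cache is flushed to the device; without the fsync, a power
    # cut can silently discard data an aggressive cache still holds.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # block until the kernel reports the write stable
    finally:
        os.close(fd)

# Hypothetical path and payload, purely for illustration.
write_durably("/tmp/durable.txt", b"critical data\n")
```

    This only covers the file's contents; a freshly created file may also need an fsync() on its containing directory to make the directory entry itself durable.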
    • by ananke (8417)
      Recently I've been doing some benchmarks to test iSCSI initiators on Linux. So far [until 2.6.15], XFS is the only filesystem that got damaged after some kernel panics. On 2.6.15 I've damaged JFS almost every time I got a kernel panic, which is very frightening.

      Anyway, for anybody interested, the results are at: []
  • by LordMyren (15499) on Friday January 06, 2006 @04:09PM (#14411421) Homepage
    <blink> Test is flawed! </blink>

    Check out the CPU utilizations; reiserfs is pegged at 100% CPU utilization for ~8 tests. For an FS which describes itself as willing to use more CPU in order to achieve better I/O than the competition, running the benches on an antiquated 500 MHz machine is simply not fair.

    OTOH, untarring and tarring are notably NOT CPU limited, and still pretty lackluster in Reiser's case. Disappointing, very disappointing. I was extremely impressed by the exts; I simply had no idea how consistently well they performed.

    I'd also like to see FreeBSD's UFS with and without softupdates benched.

  • Wow. They must have a super slow filesystem if they're just now getting the results of the tests, since the first half of the article ran in the first half of last year! :) This is supposed to be funny, by the way.

Little known fact about Middle Earth: The Hobbits had a very sophisticated computer network! It was a Tolkien Ring...