
Benchmarking Linux Filesystems Part II

Anonymous Coward writes "Linux Gazette has a new filesystem benchmarking article, this time using the 2.6 kernel and including ReiserFS v4. The second round of benchmarks includes the metrics from both the first filesystem benchmark and the second, presented in two matrices." From the article: "Instead of a Western Digital 250GB and Promise ATA/100 controller, I am now using a Seagate 400GB and Maxtor ATA/133 Promise controller. The physical machine remains the same, there is an additional 664MB of swap, and I am now running Debian Etch. In the previous article, I was running Slackware 9.1 with custom-compiled filesystem utilities. I've added a small section in the beginning that shows the filesystem creation and mount time, and I've also added a graph showing these new benchmarks." We reported on the original benchmarks in the first half of last year.
  • by CastrTroy ( 595695 ) on Friday January 06, 2006 @01:41PM (#14410172)
    I'd like to see how they perform on a 12 GB Disk on a P2 266. You really start to see the differences when working on older hardware.
  • by Conor Turton ( 639827 ) on Friday January 06, 2006 @01:42PM (#14410190)
    One thing this does show is that you need to be very careful to match the filesystem type to the main tasks the PC will be used for. Personally, I see no clear winner, as each has major gains or deficiencies in some areas. One very interesting point was the vast difference in the amount of available space after partitioning and formatting between the different filesystems.
  • by Clover_Kicker ( 20761 ) <clover_kicker@yahoo.com> on Friday January 06, 2006 @02:00PM (#14410329)
    I love the CPU utilization graph for "touch 10,000 files".

    A quick glance shows Reiser4 as much more CPU intensive; you have to look at the scale to realize it used only 0.3% more CPU.

  • somewhat worthless (Score:5, Insightful)

    by aachrisg ( 899192 ) on Friday January 06, 2006 @02:06PM (#14410375)
    His benchmark data is ruined by using a grossly unrealistic piece of hardware: a modern fast hard disk coupled with a CPU that is absurdly slower than anything you can buy.
  • Sample size (Score:2, Insightful)

    by rongage ( 237813 ) on Friday January 06, 2006 @02:13PM (#14410423)

    Am I reading this "benchmark" correctly? Did he base his results on a sample size of 1?

    At the very least, you run multiple times and average the results to give statistically meaningful numbers. I can't think of ANY time where a sample size of 1 was meaningful for anything.

    What would be really interesting is to come up with a reasonable UCL and LCL for each test, and then calculate a Cpk for each test. It's one thing to say "I got these results one time"; it's something much more impressive to say "I can achieve this result ±10%".

    Of course, if a particular benchmark can't even hit a Cpk of 1, then maybe there is room for improvement in the coding of the driver.

    For those of you who haven't done much with statistics, Cpk is a measure of "capability" in a machine or process: it shows how repeatable the measured process is. A higher number indicates a highly targeted, low-deviation process, whereas a low number (1 or less) indicates that your process is incapable of repeatability and/or accuracy.
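
    A minimal sketch of the control-limit and Cpk computation described above, assuming repeated runs of a single benchmark; the timing values and the ±10% spec limits are invented placeholders, not data from the article:

        # Estimate control limits and Cpk from repeated benchmark runs.
        from statistics import mean, stdev

        timings = [48.1, 47.9, 48.6, 48.3, 48.0, 48.4]  # seconds, hypothetical runs

        mu = mean(timings)
        sigma = stdev(timings)

        # Control limits: the band a stable process should stay inside.
        ucl, lcl = mu + 3 * sigma, mu - 3 * sigma

        # Spec limits are a judgment call; here we accept +/-10% around the
        # mean, echoing the "I can achieve this result +-10%" framing above.
        usl, lsl = mu * 1.10, mu * 0.90

        cpk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))
        print(f"mean={mu:.2f}s  UCL={ucl:.2f}s  LCL={lcl:.2f}s  Cpk={cpk:.2f}")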

  • by bhirsch ( 785803 ) on Friday January 06, 2006 @02:15PM (#14410437) Homepage
    It would be nice to see some current benchmarks (a recent 2.6 kernel with XFS, JFS, possibly Reiser4, etc.) done on high-end servers (or at least something with drives a few steps up from the CompUSA weekly special), especially if anyone wants to see Linux succeed in the enterprise.
  • by Anonymous Coward on Friday January 06, 2006 @02:17PM (#14410464)
    Everyone knows Reiser4 uses a lot of CPU, and these guys run the test on a 500MHz machine!!
  • by Raphael ( 18701 ) on Friday January 06, 2006 @02:19PM (#14410476) Homepage Journal
    > One very interesting point was the vast difference in the amount of available space after partitioning and formatting between the different filesystems.

    Unfortunately, that graph is rather misleading. The ext2 and ext3 filesystems keep some percentage of the disk space as "reserved", and only root can write to this reserved area. This is useful if the disk contains /var or other directories holding log files, mail queues, and the like: even if a normal user has filled the disk to 100%, it is still possible for processes owned by root to store some files until an administrator can fix the problem.

    On the other hand, if your filesystem contains only /home or other directories in which users are not competing for disk space with processes owned by root, then it does not make much sense to have a lot of disk space reserved for root. That is why you should think about how the filesystem is going to be used when you create it, and set the amount of reserved space accordingly.

    The default behavior for both ext2 and ext3 is to reserve 5% of the disk space for root. You can see it in the section Creating the Filesystems from the article:

    4883860 blocks (5.00%) reserved for the super user
    You can change this behavior with the -m option, which specifies the percentage of the disk space that is reserved. The article did not mention how the filesystem would have been used in production, but I would guess that -m 0 or maybe -m 1 could have been used in this case. This would have provided a fair comparison, and suddenly you would have seen all filesystems in the same range (close to 373GB available), except maybe for Reiser3.
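
    To put the 5% default in perspective, a quick back-of-the-envelope calculation from the mke2fs output quoted above (a 4 KiB block size is assumed here; the -m options for mke2fs and tune2fs are real):

        # How much space the ext2/ext3 root reserve hides from ordinary users.
        block_size = 4096          # bytes per block, assumed
        reserved_blocks = 4883860  # "(5.00%) reserved for the super user"

        reserved_gib = reserved_blocks * block_size / 2**30
        print(f"~{reserved_gib:.1f} GiB reserved for root")  # ~18.6 GiB

        # Reclaiming it for a /home-style partition (run as root):
        #   tune2fs -m 0 /dev/sdXN   # drop the reserve on an existing fs
        #   mke2fs -m 1 /dev/sdXN    # or reserve only 1% at creation time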
  • by j0ebaker ( 304465 ) <joebaker@dcresearch.com> on Friday January 06, 2006 @02:20PM (#14410491) Homepage Journal
    It would be interesting to see the results of the same tests run against a SCSI drive system, where there is less I/O overhead, to see if the results differ.
    There are other considerations here as well: what about the I/O elevator's tuning options? (See the sketch below.)
    Yes, I'd much rather see this test run against a SCSI drive, or better yet against a RAM drive for pure software performance.

    Cheers, fellow slashdotters!
    -Joe Baker
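
    A minimal sketch of poking at the elevator on a 2.6+ kernel: /sys/block/<dev>/queue/scheduler is a real sysfs file, while "hda" and the scheduler choice are just examples (writing requires root):

        # Inspect, and optionally switch, the I/O elevator for one disk.
        dev = "hda"  # example device name
        path = f"/sys/block/{dev}/queue/scheduler"

        with open(path) as f:
            # The active elevator is shown in brackets,
            # e.g. "noop [anticipatory] deadline cfq" on a 2.6 kernel.
            print(f.read().strip())

        # Selecting a different elevator before re-running a benchmark:
        # with open(path, "w") as f:
        #     f.write("deadline")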
  • Re:I would agree (Score:5, Insightful)

    by Anonymous Coward on Friday January 06, 2006 @02:26PM (#14410546)
    > Ext2/Ext3: Mediocre at almost everything. Distros like Fedora that mandate the initial install ONLY use Ext3 are being stupid. The best fall-back filing systems if you can't find anything better for what you want the partition to do, but should never be used in specialized contexts.

    Huh? Sorry, did you read the same graphs or are you just trolling?

    This article shows that ext2 and ext3 are close to the top performer in most tests and do not have many "worst-case scenarios" (unlike, e.g. Reiser3 and Reiser4).

    If there is anything that you can conclude after reading this study, it is that ext3 is a reasonably good default choice for a filesystem.

  • by Clover_Kicker ( 20761 ) <clover_kicker@yahoo.com> on Friday January 06, 2006 @02:35PM (#14410632)
    > If all you are doing is using samba or netatalk to serve files
    > even 500mhz is overkill.

    Not for Reiser4 :)

    Seriously though, there's nothing wrong with designing a new filesystem to take advantage of modern CPU horsepower as long as everyone understands the system requirements.
  • by phoenix.bam! ( 642635 ) on Friday January 06, 2006 @02:44PM (#14410702)
    Reiser uses much more CPU for filesystem tasks. ReiserFS is a modern filesystem meant to run on modern machines. This machine is only 500MHz, and therefore Reiser performs poorly. Had this machine been a 2GHz (standard now, 4x faster than the test machine), or even a 1GHz (outdated, and still 2x as fast) machine, Reiser would have performed much better.

    If you want to use parts from 1997 to build a computer, Reiser is not for you. 500MHz is at least 8-year-old technology, if I remember correctly.
  • Re:Warning (Score:4, Insightful)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday January 06, 2006 @03:01PM (#14410839) Homepage Journal
    XFS does things that ext? and Reiser can't do. Reiser does things other FSes don't do as well. It's a true 64-bit filesystem and it supports insanely large filesystems, up to 9 million terabytes in 64-bit mode (with a 64-bit kernel). It even provides realtime support, although I guess that's still beta in Linux? It can be defragged and even dumped while live. It has insanely quick crash recovery. And of course, it does other stuff too; check the project page [sgi.com]. XFS may not be the fastest filesystem - it may even be the slowest - but it's got features no other filesystem has. If you need them, XFS is the winner. Hell, if you just trust XFS more than you trust other filesystems, it's the winner. (Sorry, but I wasn't sleeping when Reiser was eating everyone's data, and ext3 handles corruption much more poorly than any of the other journaled options.)
  • Re:I would agree (Score:3, Insightful)

    by smoker2 ( 750216 ) on Friday January 06, 2006 @03:03PM (#14410869) Homepage Journal
    > Ext2/Ext3: Mediocre at almost everything. Distros like Fedora that mandate the initial install ONLY use Ext3 are being stupid. The best fall-back filing systems if you can't find anything better for what you want the partition to do, but should never be used in specialized contexts.
    How the hell did you come up with that opinion?

    Ext3 came 1st or 2nd in 24 out of the 40 tests done. If you were producing an OS for general-purpose computing, would you use a specialist fs or the best-performing general-purpose one?

    You seem to have good words for JFS and XFS, though, and XFS had only 13 1st or 2nd places!

    How do you work out that Ext3 is "mediocre" from those figures?

    (you sound like you run Debian)

  • by Anonymous Coward on Friday January 06, 2006 @03:18PM (#14410999)
    Don't use that software-garbage excuse of "there's more CPU, let's always use it because we can."

    That's why stock Dells and HPs are so much goddamn slower than much worse-specced machines.

    If that's the concept behind Reiser, I can only guess a large portion of the Linux population is retarded.
  • by Westley ( 99238 ) on Friday January 06, 2006 @03:27PM (#14411072) Homepage
    It's one thing to say "Let's use more CPU because we can."

    It's another to say "Let's use more CPU (which is usually relatively idle) in order to relieve the usual bottleneck, which is I/O."

    I don't see what's wrong with that at all. Of course, it's no good if you've got a machine which doesn't represent the "normal" current situation, any more than using a graphics card for "acceleration" makes sense if the graphics card in question is 10 years old but you're using a fast new CPU.

    Jon
  • by hansreiser ( 6963 ) on Friday January 06, 2006 @03:40PM (#14411159) Homepage
    If someone does not know that filesystem benchmarks that take less than a tenth of a second are meaningless, it makes you wonder whether he made errors in other aspects as well. These results are not consistent with the results that we have had. I bet he did not make an effort to ensure that the disk actually had to be read during these benchmarks, or to ensure that he did not copy his file set from the same fs he was measuring (which makes a HUGE difference to performance, and is the mistake every beginner makes). You'll also note that the way he makes his graphs makes 1% differences look huge.
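
    A minimal sketch of one way to meet the cold-cache objection raised here: /proc/sys/vm/drop_caches is a real interface (kernels 2.6.16 and later, root required), while the file path being timed is just an example.

        import os
        import time

        def drop_caches():
            # Flush dirty pages first so they aren't silently dropped from
            # the measurement, then drop page cache, dentries and inodes.
            os.sync()
            with open("/proc/sys/vm/drop_caches", "w") as f:
                f.write("3\n")

        def timed_cold_read(path, bufsize=1 << 20):
            drop_caches()
            start = time.monotonic()
            with open(path, "rb") as f:
                while f.read(bufsize):
                    pass
            return time.monotonic() - start

        print(f"cold read: {timed_cold_read('/mnt/test/bigfile'):.2f}s")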
  • Re:I would agree (Score:3, Insightful)

    by diegocgteleline.es ( 653730 ) on Friday January 06, 2006 @03:49PM (#14411252)
    > Distros like Fedora that mandate the initial install ONLY use Ext3 are being stupid

    It's amazing that such comments are moderated Interesting these days. So, uh, Fedora developers are stupid and you're smarter than them? Please take a look at this [slashdot.org] comment to understand why such decisions aren't so simple. You can tune your car's engine and it'll be faster, right? So why doesn't everybody tune their engine?

    Let me quote a ext3 paper: "The ext2 and ext3 filesystems on Linux are used by a very large number of users. This is due to its reputation of dependability, robustness, backwards and forwards compatibility, rather than that of being the state of the art in filesystem technology."
  • by hackstraw ( 262471 ) * on Friday January 06, 2006 @04:06PM (#14411400)
    I would rather see these benchmarks on a computer less than 5 years old. I would also appreciate an open source version of the tests so they could be reproduced. For ease of reading, I think the article should be on a separate page on the site as well.

    I've got a screaming Dell 1.6 GHz P4 to test with, and here are my results for a couple of tests. It only has ext3 and whatever cheap hard drive came with the box. I'm not sure if DMA is enabled or if I've done any hdparm tuning, but I'm not sure of their test system either:

    my touch 10,000 files: 24.314 seconds; theirs: 48.25

    I used a shell script that called /usr/bin/touch.

    Now if I use a Perl open() call, I get 8.887 seconds.
    Now with a cheesy C program that uses fopen() and fclose(), I get 4.639 seconds.

    my make 10,000 directories: 56.832 seconds; theirs: 49.87

    that is a shell script

    If I use Perl, I get 35.171 seconds.

    The /dev/zero stuff is completely bogus: there's no indication of the block size that was used.

    The kernel-copy tests, to and from a different, slower disk with an unknown filesystem on it, are useless.

    The split tests are not indicative of anything in real life, and they took on the order of 60 to 130 seconds to perform on their 500MHz system, with most being in the 130-second range. I got 16.547 seconds.

    I do not see how any relevant information can be obtained from this article. I'm disappointed in the Linux Gazette and Slashdot for printing this information.
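
    For anyone who wants to reproduce the comparison above, a minimal sketch of the in-process variant (analogous to the Perl open() test; the target directory is an example):

        import os
        import time

        target = "/mnt/test/touchdir"  # hypothetical benchmark directory
        os.makedirs(target, exist_ok=True)

        start = time.monotonic()
        for i in range(10_000):
            # open+close creates an empty file without fork/exec'ing
            # /usr/bin/touch, which is why the shell-script variant is
            # so much slower than the Perl and C versions.
            with open(os.path.join(target, f"f{i}"), "w"):
                pass

        print(f"created 10,000 files in {time.monotonic() - start:.3f}s")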

  • by LordMyren ( 15499 ) on Friday January 06, 2006 @04:09PM (#14411421) Homepage
    <blink> Test is flawed! </blink>

    Check out the CPU utilizations: ReiserFS is pegged at 100% CPU utilization for ~8 tests. For an FS that describes itself as willing to use more CPU in order to achieve better I/O than the competition, running the benches on an antiquated 700MHz machine is simply not fair.

    OTOH, untarring and tarring are notably NOT CPU-limited, and still pretty lackluster in Reiser's case. Disappointing, very disappointing. I was extremely impressed by the exts; I simply had no idea how consistently well they performed.

    I'd also like to see FreeBSD's UFS w/ and w/o softupdates benched.

    Myren
  • by captain_craptacular ( 580116 ) on Friday January 06, 2006 @04:12PM (#14411446)
    So this benchmark on a 500MHz machine will of course show Reiser in a bad light, and moving down to a 266MHz machine would make it even worse.

    If you look at the charts, the "editing" doesn't help either. For example, one CPU-usage chart showed a range starting at 92% and ending at 94%. The Reiser4 bar was 3x as long as the next bar, but guess what: it was using something like 0.7% more CPU (i.e., 93.7% as opposed to 93%). If the scale hadn't been jacked up, you wouldn't have been able to spot the difference at all, but the way they chose to present the data, it looked like a total smackdown.
