New Ext3 vs ReiserFS benchmarks 191

An anonymous reader writes "Saw this new benchmark on the linux-kernel mailing list. Although NAMESYS, the developers of ReiserFS, have many benchmarks on their site, they only have one Ext3 benchmark. The new benchmark tests Ext3 in ordered and writeback mode versus ReiserFS with and without the notail mount option. Better-than-expected results for Ext3. Big difference between ordered and writeback modes."
  • I think I know what writeback is (like with cache?), but can anyone explain ordered mode?
  • Writeback kicking it (Score:3, Informative)

    by jred ( 111898 ) on Friday July 12, 2002 @05:58PM (#3873820) Homepage
    Writeback kicks ordered's ass. They do warn you about it, though:
    However, it is clear that IF your server is stable and not prone to crashing, and/or you have the write cache on your hard drives battery backed, you should strongly consider using the writeback journaling mode of Ext3 versus ordered.
    I didn't see where "notail" made much of a difference on ReiserFS, though.
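For anyone wondering how the modes under discussion are actually selected: they're plain mount options. A sketch of what the relevant /etc/fstab lines might look like (device names and mount points are made up):

```
# hypothetical devices/mount points -- adjust for your own box
/dev/hda2  /data  ext3      data=ordered    1 2  # ext3 default: data blocks written before metadata commits
/dev/hda3  /fast  ext3      data=writeback  1 2  # metadata-only journaling; stale data possible after a crash
/dev/hda4  /pub   reiserfs  notail          0 0  # disable tail packing, trading space for speed
```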
    • FWIW, the issue of whether writing to volatile storage counts as a committed transaction has been kicking around for a long time.

      I remember in the mid-80s, Stratus and Tandem would duel over TPC benchmarks, and while Stratus did respectably on conventional disk-based writes, they did try to get the TPC council to allow writes to their resilient (duplicated), battery-backed memory to count too. I don't think they succeeded then, and IMHO some rather cruddy PC memory system should not be allowed to count now.
      • If you want to be sure that the data is on disk, use fsync().
        • by Anonymous Coward
          Nope. That's the problem. fsync() guarantees that the disk controller hardware is synced with the OS. It does not guarantee that the disk platters hold the data. It probably should, but implementing that is not always possible. Many controllers lie to look faster.
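To make the distinction concrete, here's a minimal Python sketch of the fsync() pattern being discussed (the file name is made up). Note the parent's caveat: os.fsync() pushes data out of the OS, but a drive with a volatile write cache can still lose it.

```python
import os
import tempfile

def durable_write(path, data):
    """Write data and ask the OS to push it to the device."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush OS buffers down to the disk controller...
    finally:
        os.close(fd)
    # ...but if the drive's own write cache is volatile and not
    # battery-backed, the platters may still not hold the data.

path = os.path.join(tempfile.gettempdir(), "fsync_demo.bin")
durable_write(path, b"committed")
```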
    • by Zwack ( 27039 )

But if you look at the NAMESYS benchmark comparing ext2 to ext3 and ReiserFS, then it is clear that for sheer throughput ext2 wins...

      IF Speed is your reason for choosing a Filesystem then writeback wins on almost everything in these examples...

      But using a Journaled Filesystem isn't usually done for Speed... Unless you count speed booting after a crash. It's done to (more or less) guarantee filesystem integrity after a crash. You may lose data, but you only lose writes that never completed.

      So, if you are choosing ext3 with writeback, is it faster than native ext2? I don't know. But it doesn't sound like it is any safer.

      Of course, if you're worried about data integrity, you will have a mirror across multiple striped drives using multiple controllers. And then use a Journaled Filesystem to improve boot time.


      • by cduffy ( 652 )
ext3 with writeback is indeed safer than ext2, inasmuch as all corruption will be with regard to the data -- your metadata is still safe.

        Now, data corruption can be a Very Bad Thing, depending on what you're doing... but in many cases, preventing metadata corruption (and thus being sure that your filesystem is always usable) is Good Enough.
      • Of course, if you're worried about data integrity, you will have a mirror across multiple striped drives using multiple controllers. And then use a Journaled Filesystem to improve boot time.

        This is a misinformed opinion, at best. Your RAID setup will only save data in the case of hardware failure (i.e. one of your disks fails). It will do nothing about incomplete writes. The whole purpose of journaled filesystems is to ensure that writes completed, to minimize filesystem corruption. It just so happens that the way it does this allows for a faster boot, which is an added bonus.

    • or you have the write cache on your hard drives battery backed

      I've seen such an option on big external RAID arrays. Makes sense, lets the write cache be written to disk before the power goes out.

      I'm curious, though, do any hard drives have this feature? Maybe not a full battery, but perhaps a capacitor to store enough juice to write that 8 MB of cache data down to disk before it's gone for good? Or perhaps some sort of bolt-on option for existing internal drives?

      I ask this as I'm an average joe with home-brew and cheap-label servers (I built most, a few are PII Dells and Gateways). My machines are pretty stable, but I only have about 70 minutes of battery backup from my UPS... and there's no way I could justify buying a generator.
Why bother? If you have a UPS, all you need to do is let it alert your servers to the loss of external power, and the servers can begin a clean shutdown sequence, certainly well within your 70-minute range. Most APC UPSes that I know of have a serial cable hookup. If you have more than one server hooked up to one UPS, I'm sure you could devise some way of one server receiving the power-down signal and broadcasting it to all your other machines over the network.
        • by supz ( 77173 )
No need to devise a way of sending out a power-down signal for those with APC UPSes. They have a product named PowerChute [] (and even a Linux version!) that machines connected to a UPS can use to communicate with each other. It has configurable shutdown times, so mission-critical servers can stay up for the longest time possible, while not-so-important ones can be shut down immediately. We use it extensively in my office, and it really stretches the battery life of our UPS.

Also worth noting -- we have our Exchange server begin shutdown almost immediately after the power goes out, as it takes Exchange nearly 15 minutes just to shut down. We are actively looking for an alternative to Exchange.
          • by Sabalon ( 1684 )
There is also apcupsd. This way, you can have one machine that is hooked to the UPS (no need for additional hardware to let multiple machines monitor the UPS). When power goes down, apcupsd then lets the other servers know what is going on (power off, power on, shut down now, etc...). Ports to Unices galore, and winders.

            This all assumes that you have the network on a UPS and with the power out all machines can still talk.

            Pretty nice tool with tons of options. (oddly, with the exception of the what's new pages of the docs, the url isn't listed in the docs.)

            Of course, I like my option - buy a UPS with enough capacity to hold the whole room for about 30 minutes (40KW) and a big ole generator in case things go down for a while.
            • I second the vote for using apcupsd. However, I think it is important for me to relay my experiences with it just to avoid potential problems for those of you uptime zealots (like me).

              A few months ago, I had a short (2 minute) power outage and of course my UPS kicked in and my server stayed online as you might expect. However, when power was restored, the apcupsd scripts were (by default) configured to reboot the server after a return to utility power. Why this is the case, I cannot answer, however I'm sure there is a logical explanation. In my case, I found this very unsettling as it caused my 100+ days of uptime to return to zero whence they came. The scripts were easy to fix, but hopefully this will serve as a warning for those of you who cannot afford the restart.

              On a slightly different note, I'm still not understanding the whole journalling file system issue; I understand the benefits, but are you really crashing that much (which must be hard locks), that you need to do a hard reset and let the journal replay the transactions? Personally, I have a tape backup, and a UPS. Do I really need a journalling file system, other than the obvious advantage of impressing the ladies? At the moment, I'm interested in XFS because of the ACLs and the "intensive disk usage" features SGI has in the IRIX version, and I'm hoping those make it into the "final" Linux version (if there ever will be a "final" version).
I'm curious, how can a script (software) reboot a server that has already halted?
How can a script (software) reboot a server that has already halted?

The system wasn't halted. The UPS kicked in and ran on batteries for a couple minutes, then switched back to mains. The server remained up and running. The apcupsd daemon was set to run a script when the utility power returned, and the script was configured to be "shutdown -r now".

                  At no point during the process was the system halted.
well... the simple reason for a better file system is that shit happens

                On one of our old systems, the network admin asked what a button did as he pushed it. It was the power button. At another time the same guy accidentally dropped a pencil that hit the same power button (actually a rocker switch) again. Someone else was curious as to what the inside of that machine looked like, so they opened the swinging back door of the case, which caused the system to power down (oh that poor TI 1500)

                Power cords get tripped over. UPS's fail. UPS software does odd things. Hardware fails.

                You have backups. Those fail as well, and restores take time. A journaling file system takes a few seconds after an abnormal startup to fix itself.

                Just think of it as yet another layer of protection beyond the UPS and backup tapes. And of course it helps get the ladies :)
Have you considered Samsung Contact [] (formerly HP OpenMail)? As far as Exchange replacements go, it should be a viable alternative. Runs on Solaris, Linux, HP-UX or AIX on the server side and supports pretty much everything Exchange does on the client side (and of course it supports most other email clients).

            Of course, if you dont need a feature for feature match with Exchange there are unlimited cheap alternatives for mail servers.
        • What if the reason for the power failure is that someone tripped over the cord running from the UPS to the PC & pulled it out, or if the Power supply in the PC failed? How about if you were in the computer room & saw smoke & fire pouring out of the server? How about if the UPS failed?

          There are cases where a UPS won't prevent an "unexpected downtime". In these cases, it might be helpful if the drives were able to finish their last write on their own power. It might give you something to boot after you correct the problem.
  • I know that I'm stupid for saying this, but after the past few years, a benchmark isn't sexy unless it has scenes of flying dragons or a copied scene from the Matrix on the screen. I must have sold my soul to the devil for saying that.
  • by Anonymous Coward on Friday July 12, 2002 @06:04PM (#3873861)
If you want journaled ext3 data vs. reiserfs with tails and without tails, check out:

    There are some decent benchmarks there that compare the two as well as extensive NFS tests.
  • ReiserFS loses data (Score:1, Informative)

    by Flarners ( 458839 )
A hash collision in a ReiserFS directory (where two filenames hash out to the same value) causes the older file to BE OVERWRITTEN without so much as a warning. This is a huge design error, and I can't believe they're pushing Reiser as a production-use filesystem. The only way to ensure you never lose data to hash collisions is to use the 'slowest' hash setting; the faster the hash function, the more likely it is to create collisions and lose data. I had a large project lost to a
    • Slashdot cut off my comment! Anyway, you get the idea; don't use ReiserFS unless you don't mind occasionally having files disappear.
    • by delta407 ( 518868 ) <slashdot.lerfjhax@com> on Friday July 12, 2002 @06:10PM (#3873914) Homepage
      You're on crack. Hash collisions incur only a performance hit, not lost data.
      • Tell that to my missing /usr/local tree.
        • Re:Interesting (Score:2, Insightful)

          by Anonymous Coward
          My car is missing. Therefore, UFOs from the center of the earth took it. Bigfoot was involved.

        • Can your /usr/local/ tree be made to go away reproducibly?

To prove your theory, you could take the hash function in reiserfs and replace it with a function that always returns '1'. You would probably have to reformat your partitions for that test, though. The filesystem should still work. If it doesn't, that's a bug.

The chances of there being a bug in reiserfs is about 100%. Same is true of ext3, though.
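The constant-hash thought experiment is easy to model in miniature. Below is a toy directory index (my own sketch, not ReiserFS code) where every name hashes to the same value: lookups degrade to a linear scan, but no entry is overwritten or lost, which is the behavior a correct hash-bucket design should show.

```python
class DirIndex:
    """Toy hash-bucket directory index with chaining."""

    def __init__(self, hash_fn):
        self.hash_fn = hash_fn
        self.buckets = {}  # hash value -> list of (name, inode) pairs

    def insert(self, name, inode):
        # Colliding names simply share a bucket; nothing is clobbered.
        self.buckets.setdefault(self.hash_fn(name), []).append((name, inode))

    def lookup(self, name):
        # Within a bucket we fall back to comparing the actual names.
        for n, inode in self.buckets.get(self.hash_fn(name), []):
            if n == name:
                return inode
        raise KeyError(name)

d = DirIndex(lambda name: 1)  # pathological hash: everything collides
d.insert("local", 1001)
d.insert("share", 1002)       # collides with "local", yet both survive
```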

    • SuSE have been pushing ReiserFS for some time. I've certainly been using it for what seems like ages with no noticeable problems.

I'm 110% sure it's saved more files when I've lost power or when something's hung requiring a hard reset than it's deleted due to hash clashes. What's the likelihood of two files generating the same hash? You talk of increasing likelihood, but don't mention any figures. It's hard to judge without some stats.

      As an aside, why didn't you restore your large project from your backup? What do you mean you didn't have...
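On the "what's the likelihood" question, the standard birthday approximation gives some stats. This is my own back-of-the-envelope sketch, assuming an ideal uniform 32-bit hash per directory (which real hashes only approximate):

```python
import math

def collision_prob(n_files, hash_bits=32):
    """P(at least one collision) among n_files names: ~ 1 - exp(-n^2 / 2N)."""
    buckets = 2.0 ** hash_bits
    return 1.0 - math.exp(-(n_files ** 2) / (2.0 * buckets))

for n in (1000, 100000, 1000000):
    print("%8d files in one directory: p ~ %.6f" % (n, collision_prob(n)))
```

Under those assumptions a thousand-file directory collides with probability on the order of 1 in 10,000, while a hundred-thousand-file directory makes a collision more likely than not.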

    • by RockyMountain ( 12635 ) on Friday July 12, 2002 @06:26PM (#3874008) Homepage
      Can you document the claim that hash collisions cause silent data corruption? Or even that they cause a failure of any sort?

      If this is true, surely it must be documented somewhere, or have been discussed in a credible forum? I did a little searching, and didn't find anything. Please post a URL to elevate your comment from unsubstantiated rumor to informative information.

In most hash-based indexing algorithms I know of, hash collisions incur a performance penalty, but not data loss.
I don't know how accurate this is because it's a bit beyond my technical knowledge. However, I know that following a hash collision while using RFS, my /usr/local directory vanished. So there is some truth to the parent post.
    • by gregor_b_dramkin ( 137110 ) on Friday July 12, 2002 @07:00PM (#3874197) Homepage
... that's why you lost your data. It annoys me to no end when people assume a cause for a problem and begin to state it as fact without verification.

      Is it possible that there is a bug in reiserfs? Sure. I just don't trust anecdotal evidence from some dood on /.
    • A hash collision in a ReiserFS directory (where two filenames hash out to the same value) causes the older file to BE OVERWRITTEN without so much as a warning.

      This is not necessarily a bug if the probability of that happening in real world scenarios is negligible. After all, you risk data loss from many sources.

      Unfortunately, programmers often seem a bit unreasonable about probabilities. They complain about a (say) 1:10^20 chance of losing a file, while at the same time writing the whole file system in C, which basically guarantees a several-fold increase in the probability of undetected software faults compared to alternatives. In fact, the fix for such a remote possibility may not only kill performance, it may actually increase the overall probability of a fault that causes data loss--because the extra code may have bugs.

      So, no, this doesn't bother me. I suspect that if Reiser knows about it and he isn't fixing it, he probably thought about it and decided the probability is too remote. If you disagree, I would like to see a more detailed analysis from you.

    • by Morgaine ( 4316 )
      Let's be scientific about this.

      Provide at least one pair of filepaths which generate a hash collision under whatever scenario you care to specify, so that others can test and verify the resulting effect, even if it's probabilistic and requires billions of reruns to trigger -- no problem.

      If the effect isn't seen by anyone else under any conditions, then the problem doesn't exist. Conversely, if it does happen under some repeatable conditions (even if only extremely rarely) then it *is* a problem, and will be fixed.

      If you want to be constructive about it, take this issue out of mythology and onto firmer ground.
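In that spirit, a birthday search will turn up colliding pairs quickly. The sketch below uses my from-memory Python transcription of ReiserFS's r5 hash -- treat the exact constants as an assumption and verify against the kernel source before drawing conclusions:

```python
import random
import string

def r5(name):
    # Believed shape of the r5 hash (32-bit); constants from memory,
    # not checked against the kernel source.
    a = 0
    for c in name.encode("ascii"):
        a += c << 4
        a += c >> 4
        a = (a * 11) & 0xFFFFFFFF
    return a

def find_collision(trials=300000, seed=5):
    """Draw random 10-char names until two distinct names share a hash."""
    rng = random.Random(seed)
    seen = {}
    for _ in range(trials):
        name = "".join(rng.choices(string.ascii_lowercase, k=10))
        h = r5(name)
        if h in seen and seen[h] != name:
            return seen[h], name
        seen[h] = name
    return None

pair = find_collision()
```

With ~300,000 names drawn against a 32-bit space, the birthday bound makes finding a collision overwhelmingly likely; whether the filesystem then loses data is exactly the testable question.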
  • One thing in these benchmarks surprised me just a bit:
    that reiser would do so well on the heavy-throughput/large file test.

    I've been laboring under the perception that reiser was good for randomly accessing small files, but paid a performance penalty when going after large ones.

    Guess I'm still waiting to prove that no one can be wrong about everything! ;0)
  • my decision (Score:5, Insightful)

    by salmo ( 224137 ) <`mikesalmo' `at' `'> on Friday July 12, 2002 @06:11PM (#3873918) Homepage Journal
My decision isn't based on performance. They both are "fast enough" for me. I used to use ReiserFS a while back and it was great. Then I installed Redhat 7.3 on a machine and used ext3 so I didn't have to mess with anything. Yes, tinkering is fun... but when I feel like it. Sometimes it's nice to have stuff Just Work. Haven't had any problems since, and have had a few random power outages.

    Also I like the idea that I can read the drive with an ext2 driver from an older kernel or from FreeBSD just in case. In case of what? I don't know, but somehow it makes me feel better.
    • Also I like the idea that I can read the drive with an ext2 driver from an older kernel or from FreeBSD just in case. In case of what? I don't know, but somehow it makes me feel better.

      ...How about in case you want to make a disk image with a tool like DriveImage [] that supports ext2, and therefore, in a round-about way, ext3?
      Hard disk crash? no problem -- drop in a new drive and the cd with your partition image and you're up in 15 minutes.
Note: I'm not affiliated with PowerQuest -- I just buy their software when I've got money left over from buying a book of the new 37-cent US first-class stamps...
    • Re:my decision (Score:2, Interesting)

      by big tex ( 15917 )
      "Just Works", at least in this case, is partially dependent on distro.
      I run SuSE, and installed ReiserFS (version 7.1? 7.2? Sometime around there.) and it "Just Works."
      I don't know if it is faster, I've never noticed the difference on my P2-400 home machine.
      Got to test it out the other day when the cat sat on the surge protector switch - rebooted like nothing happened. sweeeet.

  • Offtopic, but seems to me that the picture that gurulabs is using as background for their web page is ripped from the cover artwork of the album "Rally of Love" by the Finnish band 22-Pistepirkko. Wonder if they have permission for that?

    Of course, could be that the album cover is a copy of something that is in the public domain...

  • so what's the point of running ext3 in writeback if (as the faq says) it's exactly equivalent to ext2 "with a very fast fsck"? So is the _only_ gain the fsck time?
    • Yes.
      But for some people, that appears to be enough...
    • so what's the point of running ext3 in writeback if (as the faq says) it's exactly equivalent to ext2 "with a very fast fsck"?

      Consider a large tmp volume.

      Anywhere where the consequences of finding stale data in a file are no worse than having the data simply missing after a crash. Even a src directory if you do a lot of big makes (since you're best off with make clean ; make after a crash anyway). Just be sure to sync after writing out a source file.

      However, as long as performance is adequate, probably better safe than sorry when it comes to filesystems.

    • so what's the point of running ext3 in writeback if (as the faq says) it's exactly equivalent to ext2 "with a very fast fsck"? So is the _only_ gain the fsck time?

Well, ext3 with data=writeback is equivalent to how reiserfs has always operated (i.e. if you crash, you can lose data in files that were being written to). Using data=ordered is an extra benefit that doesn't have any noticeable performance hit unless you are thrashing the disk and RAM in a benchmark. FYI, there are now beta patches for reiserfs that implement data=ordered.

The fsck time alone can be a big deal if you have to wait 8 hours while your 1TB storage array is fscking (8 hours is a guess; I don't have that much disk...)

    • So what's the point of running ext2 if it's exactly equivalent to ext3/writeback but with very slow fsck?
      • Because the ext2 code is more mature than the ext3 code. I also read that the ext2 code is currently much better suited to SMP, but ext3 hasn't been worked over to work well with multiple processes/processors.
    • It's a bit better: redoing transactions in the journal will never fail if the hard disk hardware is intact. Fsck can get f*cked up, and by then all your data is, well, up to manual recovery.
  • I would have wanted to also see a non-journalling filesystem compared against these. Since I'm not currently using a journalled filesystem, it would be nice to see the difference between what I use now (ext2) and the journalled fs's.
...of these guys. They saved the benchmark graphs as JPEG images when even a passing glance suggests PNG or GIF would have been the better choice.
...of these guys. They saved the benchmark graphs as JPEG images when even a passing glance suggests PNG or GIF would have been the better choice.

      GIFs are not user friendly in terms of licensing/patents. Lots of people simply refuse to use them for that reason. Especially lots of GNU people.

      PNGs are unsupported on a whole range of Netscape browsers.

      JPEGs are a lot closer to universal than GIFs or PNGs are.
The GIF Unisys patent has, IIRC, passed and is no longer an issue. Otherwise, all true. Notice, though: what image format is the Slashdot logo?
The GIF Unisys patent has, IIRC, passed and is no longer an issue. Otherwise, all true. Notice, though: what image format is the Slashdot logo?

          The GNU disagrees with you [].

Also: Slashdot (the founders/owners/editors) is notorious for saying one thing and doing another. Witness the virulent anti-DMCA stance, yet notice also how they support the very companies who forced it upon us (aka Sony). Witness their yammering about IE/MS not following standards when in fact their own HTML on their own site is grossly out of established standards.

So yeah, it's in GIF - but it doesn't surprise me.
          • Also: Slashdot (the founders/owners/editors) is notorious for saying one thing and doing another. Witness the virulent anti-DMCA stance, yet, notice also how they support the very companies who forced it upon us (aka Sony). Witness their yammering about IE/MS not following standards when in fact their own HTML on thier own site is grossly out of established standards.

Completely true. I've filed a bug on the Slashdot bug report page on SourceForge to add some semantic tags to the ones we are allowed to use. I'd like to use , , etc. The bug was deleted as quickly as possible, with no explanation.

Besides, not only does the HTML not validate, but Slashdot has also blocked [] the W3C validator! That's very stupid, as anyone can just download the page and validate it by uploading it to the validator. Here is the validation result [].

            • So true man, so true.

Back when they started subscriptions I emailed Taco and told him I was subscribing conditionally, and expected them to clean up their act - proofread submissions, validated links, proper HTML, etc.

              He responded that they were looking into all those things.

I added two $5 blocks to my account, and after none of the things I mentioned happened, I dropped my subscription.

              It's too bad - with just a tiny bit more effort they could turn a popular nerd-friendly site into a popular, successful, respected, nerd-friendly news site.
  • XFS? (Score:3, Interesting)

    by Jennifer Ever ( 523473 ) on Friday July 12, 2002 @06:38PM (#3874078) Homepage
    Any benchmarks on XFS vs. ext3/ReiserFS?
I'm running XFS on a couple or three systems here at home with Linux From Scratch installs... and it's very, very nice. I remember seeing an article linked on linuxtoday.com a while back about XFS; I believe the only downfall they said it has is that it's a bit slower than others when deleting files.

I'd personally use XFS over any of the others any day, mainly since it's fsck-free and is a file system that is known to work well (it's been used/owned by SGI, ya know).
I'm using soft updates on my BSD system. It's fast, stable, no fscking after a dirty reboot. Anyone know of benchmarks comparing this to ext3 or Reiser?
    • by Anonymous Coward
      If you are using soft updates and not running fsck after a dirty reboot, then you don't understand soft updates. You are also flirting with loss of data.

      Here is what you are missing. Soft updates is a method of ensuring that disk metadata is recoverably consistent without the normal speed penalty imposed by synchronous mounting. The only guarantee that softupdates makes is that your file system can be recovered to a consistent state by running fsck. Soft updates is designed to aid the running of fsck, but does not eliminate the need.

Better get out your Palm and add running fsck to your "to-do" list.

  • Why always Linux? (Score:2, Interesting)

    by evilviper ( 135110 )
    Why doesn't anyone compare UFS/FFS w/softupdates enabled to the Linux filesystems?

Better yet, why did EXT get to be the de facto Linux filesystem, rather than UFS? UFS outperforms it, and supports much larger files/filesystems.

    A comparison of UFS from a platform other than FreeBSD might be in order.
    • > why did EXT get to be the defacto Linux filesystem, rather than UFS?

My understanding of the situation is that it was because, until softupdates were implemented, UFS was painful. Now, had softupdates been implemented, say, 7-10 YEARS ago when EXT became the Linux de facto filesystem, there might have been a chance.

      On the flip side, seeing a good Linux implementation of a BSDish UFS with softupdates would be very nice.

      - RustyTaco
EVEN IF ext2 is considerably faster than UFS (which I doubt...), that wouldn't change the fact that UFS is much more stable (I've lost several ext2 fs's). That's besides the fact that UFS supports much larger files and filesystems.
Just today I was working on getting some molecular dynamics code to work on a DEC PWS 500au. This code writes some large (500 MB-3 GB) files to the disk. On a fresh stripped-down (~400 MB) install of RedHat 7.1 using ext2, bonnie showed throughputs of about 20 MB/s for sequential reads/writes of a 512 MB file.

      On a fresh install of FreeBSD 4.6 using UFS, bonnie reported more than 30 MB/s on the same machine.

      I know this isn't really what you were looking for but it surprised me that there was that much of a difference.
      • Re:Why always Linux? (Score:2, Interesting)

        by m.dillon ( 147925 )

        Just for the hell of it I ran the same benchmarks on one of my test boxes (FreeBSD running -current). The performance basically comes down to how much write latency you are willing to endure... the longer the latency, the better the benchmark results for the first two tests.

So, for example, with the (conservative) system defaults I only got around 250 trans/sec for mixed creations with the first postmark test, because the system doesn't allow more than around 18 MB of dirty buffers to build up before it starts forcing the data out, and also does not allow large sequential blocks of dirty data to sit around. When I bumped up the allowance to 80 MB and turned off full-block write_behind, the trans rate went up to 2776/sec. I got similar characteristics for the second test as well. Unfortunately I have only one 7200 rpm hard drive on this box, so I couldn't repeat the third test in any meaningful way (which is a measure mostly of disk bandwidth).

In any case, the point is clear, and the authors even mention it by suggesting that the ext3 write-back mode should only be used with NVRAM. Still, I don't think they realize that their RedHat box likely isn't even *writing* the data to the disk/NVRAM until it absolutely has to, so arbitrarily delaying writes for what is supposed to be a mail system is not a good evaluation of performance. Postmark does not fsync() any of the operations it tests, whereas any real mail system worth its salt does, and even with three drives striped together this would put a big crimp on the reported numbers unless you have a whole lot of NVRAM in the RAID controller.

        I do not believe RedHat does the write-behind optimization that FreeBSD does. This optimization exists specifically to maximize sequential performance without blowing up system caches (vs just accumulating dirty buffers). But while this optimization is good in most production situations it also typically screws up non-sequential benchmark numbers by actually doing I/O to the drive when the benchmark results depend on I/O not having been done :-).

        Last thought. Note that the FreeBSD 4.6 release has a performance issue with non-truncated file overwrites (not appends, but the 'rewrite without truncation' type of operation). This was fixed post-release in -stable.


  • by SwellJoe ( 100612 ) on Friday July 12, 2002 @07:15PM (#3874269) Homepage
Yes, folks, some filesystems are faster than others for some types of files.

    We benchmark ReiserFS versus all other Linux filesystems about once every 6 months or so, and the last one from about 3 months ago still places Reiser in the "significantly faster" category for our workloads, specifically web caching with Squid.

    ext3 is a nice filesystem, and I use it on my home machine and my laptop. But for some high performance environments, ReiserFS is still superior by a large margin. It is also worth mentioning that I could crash a machine running ext3 at will the last time we ran some Squid benchmarks (this was on 2.4.9-31 kernel RPM from Red Hat, so things have probably been fixed by now).

    All that said, I'll be giving ext3 vs. ReiserFS another run real soon now, since there does seem to be some serious performance and stability work going into ext3.

  • by HipPriest ( 4021 )
    I like reiserfs because I can trust it to perform well on any file system load. I can put it on a server and know it will be fast and efficient regardless of what the users do. Ext3 gives ext2 journaling, but does not add efficient large directories or small files, two features that reiserfs has.

    Sure ext3 may benchmark slightly faster in certain scenarios. But unless you know ahead of time that those are the only scenarios you are going to put on the file system, I recommend reiserfs.
  • I can't say much about ReiserFS. We use it on a server in one of the computer labs I admin at school, but that's the extent of my experience.

    But ext3.. I've been using it since the day RH7.3 was released, during which time I'll bet power to my machine has been cut at least 150 times (we had a bad circuit breaker that would randomly flip. I replaced it a few days ago). Often power was repeatedly lost many times in a short period of time (if that would matter), and in the middle of big disk write operations.

    Every single time I have been able to immediately reboot without any apparent data loss (except for the data being written at that very second) or filesystem corruption (a couple of times I forced a check just to make sure nothing was wrong, and nothing ever was).

    I can't testify to the relative quality of ext3 compared to ReiserFS, but I can certainly say I have been quite pleased with the stability of ext3.

  • Hell, you can get blazing speed out of FAT, but do you want to use it? EXT3 turned me off the second I found out its journaling was a 'bolt-on' addition. (Metadata is kept in a private file... very ugly.)

    ReiserFS has eaten more megabytes than I would have liked... but that was 2 years ago. Comparing Reiser, a mature, next-generation FS, to EXT3, a revamp which isn't even done yet, is a bad idea.
    • I understand your point. But their point was (paraphrased):
      "We need to choose a file system. Let's try to experimentally determine which of our two prime contenders is best."

      You may feel that their selection of contenders is incorrect, but to select between them based on experiment is called "the experimental method" (sometimes mistakenly "the scientific method"). This is the basis of science, engineering, and technology. I.e.: don't assume ahead of time that you know the right answer; check.

      If they didn't find the problems that you expected, then perhaps you need to examine why. But a hand-waving "explanation" doesn't explain very much, so I don't even really know what problems you think they should have found. FWIW, I haven't noticed any instability problems with ext3.

  • by Jeremi ( 14640 ) on Friday July 12, 2002 @08:47PM (#3874626) Homepage
    Does anyone have info on which of these file systems might be the better one for glitch-free playback of multitrack uncompressed audio? (I'm thinking of up to 16 simultaneous streams, so efficient throughput would be the priority -- BeOS's BFS was optimized for this sort of thing, but I don't know who in Linux-land has been focused on that aspect of performance)
  • I use both (Score:3, Insightful)

    by JebusIsLord ( 566856 ) on Friday July 12, 2002 @09:14PM (#3874716)
    I use ext3 in ordered mode for my "/" and "/usr" partitions for its journaling, and reiserfs with -notail for my /tmp and /pub partitions (pub is an FTP/SMB file server, lots of activity). I think this is a good compromise between performance and non-corruptibility.
    • Very clever. Except that ext3 is less stable than ReiserFS.
      • um, any explanation, or is that a troll? ext3 does metadata journaling (and full data journaling in data=journal mode) and is forwards/backwards compatible with ext2 - what makes it unstable?
      • I haven't experienced any problems with ext3, and I've used it (light loads only) ever since it was a Red Hat standard file system.

        OTOH, a year (I think) earlier, when Mandrake released a Reiser file system option, I tried it, had disk corruption, and couldn't find any tools that helped recovery.

        Now these are single data points, so you shouldn't take them too seriously. Also, around the same time that I had file corruption under Reiser, I also had an ext2 file system become corrupt. I even know that the problem was caused by fsck. (I was running from a secondary hard disk. I think that this may have been a kernel problem.) The point is, I was able to recover from the ext2 file system corruption, but was unable to recover from the Reiser file system corruption.

        So I didn't find either system to be more reliable than the other -- but when things did go wrong, only the ext2 file system was recoverable.

        Again, let me stress, this was under light use. The system was one that I was using for development and experimentation, not one that I did serving from or kept serious data on. So usage patterns wouldn't match a production machine.
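    The ordered/notail split described at the top of this thread comes down to a few mount options; here's a sketch of the corresponding /etc/fstab entries, with made-up device names and mount points:

```shell
# /etc/fstab sketch -- devices and mount points are examples only.
# data=ordered is ext3's default mode, spelled out here for clarity.
/dev/hda1  /     ext3      defaults,data=ordered  1 1
/dev/hda2  /usr  ext3      defaults,data=ordered  1 2
/dev/hda5  /tmp  reiserfs  notail                 0 0
/dev/hda6  /pub  reiserfs  notail                 0 0
```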

  • ...what do all those angry spacemen have to do with any of this?
  • I don't like the fact that ext3 is now included as a module. The default filesystem driver should be compiled as part of the kernel.

    SGI's version of Red Hat is far preferable to Red Hat's own release for this reason.

    Now, I must create and maintain an initrd on my IDE system (which was never required before), and I've also been in a crazy situation where attempting to mount an installed filesystem as ext3 caused an Oops, but changing fstab to ext2 was fine.

    Down with Red Hat's use of ext3 as a module! Red Hat has never handled journaling in a reasonable manner.
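    For anyone living with the modular ext3 in the meantime, the workaround is to regenerate the initrd whenever the kernel changes, so the journaling modules are available before the root filesystem mounts; a sketch (kernel version and paths are illustrative only):

```shell
# Rebuild the initrd so the jbd and ext3 modules load at boot time,
# before the root filesystem is mounted; the version is an example.
KVER=2.4.18-3
mkinitrd -f /boot/initrd-$KVER.img $KVER
# then point lilo.conf or grub.conf at the new image (and re-run lilo)
```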
