
Kernel Hackers On Ext3/4 After 2.6.29 Release

microbee writes "Following the Linux kernel 2.6.29 release, several famous kernel hackers have raised complaints about what appears to be a long-standing performance problem related to ext3. Alan Cox, Ingo Molnar, Andrew Morton, Andi Kleen, Theodore Ts'o, and of course Linus Torvalds have all participated. The thread may shed some light on the status of Linux filesystems. For example, Linus Torvalds commented on the corruption caused by writeback mode, calling it 'idiotic.'"

  • by rootnl ( 644552 ) on Wednesday March 25, 2009 @07:23AM (#27327795)

    The server is taking too long to respond; please wait a minute or 2 and try again.

    Mmmh, must be a big problem

  • Idiotic (Score:5, Informative)

    by baadger ( 764884 ) on Wednesday March 25, 2009 @07:27AM (#27327829)
  • by javilon ( 99157 ) on Wednesday March 25, 2009 @07:28AM (#27327843) Homepage

    this is what I get from http://lkml.org/lkml/2009/3/24/460 [lkml.org]:

    "The server is taking too long to respond; please wait a minute or 2 and try again."

    Considering that there is only one comment on this Slashdot thread so far, it seems most people will comment without actually reading TFA.

    Like me... :-)

    • Re: (Score:3, Funny)

      I actually read it, including the emails from Linus; a really good read. His performance was, as usual, quite outstanding.

      • by linuxrocks123 ( 905424 ) on Wednesday March 25, 2009 @09:30AM (#27329391) Homepage Journal

        Actually, Linus was, as he sometimes is, completely clueless. He's unaware of the fact that filesystem journaling was *NEVER* intended to give better data integrity guarantees than an ext2-crash-fsck cycle and that the only reason for journaling was to alleviate the delay caused by fscking. All the filesystem can normally promise in the event of a crash is that the metadata will describe a valid filesystem somewhere between the last returned synchronization call and the state at the event of the crash. If you need more than that -- and you really, probably don't -- you have to do special things, such as running an OS that never, ever, ever crashes and putting a special capacitor in the system so the OS can flush everything to disk before the computer loses power in an outage.

        • by AigariusDebian ( 721386 ) <`gro.naibed' `ta' `suiragia'> on Wednesday March 25, 2009 @09:45AM (#27329577) Homepage

          On-disk state must always be consistent. That was the point of journaling, so that you do not have to do an fsck to get to a consistent state. You write to the journal what you are planning to do, then you do it, then you activate it and mark it done in the journal. At any point in time, if power is lost, the filesystem is in a consistent state - either the state before the operation or the state after the operation. You might get some half-written blocks, but that is perfectly fine, because they are not referenced in the directory structure until the final activation step is written to disk, and those half-written blocks are still considered empty by the filesystem.
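
          A minimal sketch of that ordering in C, assuming hypothetical journal and device file descriptors and a made-up journaled_update() helper; it only illustrates the write-then-commit sequence, not ext3's actual implementation:

          #define _XOPEN_SOURCE 500
          #include <unistd.h>

          /* Hypothetical write-ahead ordering; names and layout are invented. */
          int journaled_update(int jrnl_fd, int dev_fd,
                               const void *intent, size_t intent_len,
                               const void *data, size_t data_len, off_t data_off)
          {
              /* 1. Describe the pending change in the journal and make it durable. */
              if (write(jrnl_fd, intent, intent_len) != (ssize_t)intent_len) return -1;
              if (fsync(jrnl_fd) != 0) return -1;

              /* 2. Perform the actual update on the main area of the disk. */
              if (pwrite(dev_fd, data, data_len, data_off) != (ssize_t)data_len) return -1;
              if (fsync(dev_fd) != 0) return -1;

              /* 3. "Activate": append a commit record so replay knows the step completed. */
              const char commit = 1;
              if (write(jrnl_fd, &commit, 1) != 1) return -1;
              return fsync(jrnl_fd);
          }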

        • All the filesystem can normally promise in the event of a crash is that the metadata will describe a valid filesystem somewhere between the last returned synchronization call and the state at the event of the crash. If you need more than that -- and you really, probably don't -- you have to do special things, such as running an OS that never, ever, ever crashes and putting a special capacitor in the system so the OS can flush everything to disk before the computer loses power in an outage.

          What about ZFS [wikipedia.org]? Doesn't ZFS have a bunch of checksumming and hardware failure tolerance functionality which you "probably need"?

        • Re: (Score:3, Informative)

          by gclef ( 96311 )

          Actually, he has a valid point: the user doesn't give a damn about whether their disk's metadata is consistent. They care about their actual data. If a filesystem is sacrificing user data consistency in favor of metadata consistency, then it's made the wrong tradeoff.

        • by Anonymous Coward on Wednesday March 25, 2009 @10:39AM (#27330279)

          No, you're the one who's clueless.

          The issue (as Linus said) isn't that the journalling is providing data integrity, it's that doing the journalling the wrong way causes *MORE* data loss.

          Basically, you're sacrificing data integrity for speed, when you don't need to.

          Perhaps you should work on your reading comprehension.

        • by WebCowboy ( 196209 ) on Wednesday March 25, 2009 @12:22PM (#27331963)

          Actually, Linus was, as he sometimes is, completely clueless. He's unaware of the fact that filesystem journaling was *NEVER* intended to give better data integrity guarantees than an ext2-crash-fsck cycle

          Linus is not clueless in this case. I think it is a case of you misinterpreting the issue he was discussing.

          Journaling is, as you say NOT about data integrity/prevention of data loss. That is what RAID and UPSes are for. However, it IS about data CONSISTENCY. Even if a file is overwritten, truncated or otherwise corrupted in a system failure (i.e. loss of data integrity) the journal is supposed to accurately describe things like "file X is Y bytes in length and resides in blocks 1,2,3...." (data/metadata consistency). Why would you update that information before you are sure the data was actually changed? A consistent journal is the WHOLE REASON why you can "alleviate the delay caused by fscking".

          Linus rightly pointed out, with a degree of tact that Theo de Raadt would be proud of, that writing meta-data before the actual data is committed to disk is a colossally stupid idea. If the journal doesn't accurately describe the actual data on the drive then what is the point of the journal? In fact, it can be LESS than useless if you implicitly trust the inconsistent journal and have borked data that is never brought to your attention.

  • by Puls4r ( 724907 ) on Wednesday March 25, 2009 @07:43AM (#27328005)
    The server is running linux.
  • by Anonymous Coward on Wednesday March 25, 2009 @07:47AM (#27328043)

    Quote from Linus:

    "...the idiotic ext3 writeback behavior. It literally does everything the wrong way around - writing data later than the metadata that points to it. Whoever came up with that solution was a moron. No ifs, buts, or maybes about it."

    In the interests of fairness... it should be fairly easy to track down the person or group of people who did this. Code commits in the Linux world seem to be pretty well documented.

      How about ASKING them rather than calling them morons?

    (note: they may very well BE morons, but at least give them a chance to respond before being pilloried by Linus)

    TDz.

    • Re: (Score:3, Informative)

      Most likely Ted Ts'o, based on the git commit logs [kernel.org]. I say most likely because someone more familiar with the kernel git repo than I am should probably confirm or deny this statement.

    • by Anonymous Coward on Wednesday March 25, 2009 @08:09AM (#27328307)

      Torvalds knows exactly who it is, and most people following the discussion will probably know it too.
      Also, there has been a fairly public discussion, including a statement by the person responsible.

      Not saying the name is Torvalds' way of letting them save face. Similar to a parent of two children saying, "I don't know who made this mess, but when I come back, it had better be cleaned up."

      Yes, Mr. Torvalds is fairly outspoken.

      • Re: (Score:3, Interesting)

        by gbjbaanb ( 229885 )

        Hm. More like a parent of two children ranting at them without taking time to think first. Calling them morons is just going to make them grow up dysfunctional at best. No wonder the world has a dim view of the "geek" community.

        It seems to me that, as usual, the issue is not as clear cut as it first appears [slashdot.org]

        • Ahh... That link explains a lot. However, I have a different parenting strategy. If the kid does something wrong, let him know it. If he does something good, let him know it too. Calling them a moron is OK, as long as it's balanced out with "genius" every now and then. Of course, don't actually use the word if the kid really is a moron. As Linus uses it, it should only indicate a temporary lapse of judgment in an otherwise intelligent person.
        • The best way to get a person to improve is a mix of positive and negative feedback. Working systems are the positive feedback; fellow programmers commenting on the dumb points of the design are the negative.

          And you need both at all times, regardless of whatever politically correct view on education is currently in fashion.

      • by coryking ( 104614 ) * on Wednesday March 25, 2009 @09:38AM (#27329491) Homepage Journal

        Not saying the name is Torvalds' way of letting them save face

        Is the person responsible going to pull a classic political step-down where they resign "in order to spend more time with their family"?

        Maybe it was Hans Reiser? Sure the guy is locked up in San Quentin, but nobody knows how to hack a filesystem to bits better than Reiser. Bada ba ching! Thank you, thank you... I'll be here all night.

      • Re: (Score:3, Interesting)

        by mortonda ( 5175 )

        Torvalds knows exactly who it is, and most people following the discussion will probably know it too....

        Yes, Mr. Torvalds is fairly outspoken.

        Yes, and the folks in that conversation are very thick-skinned and are used to such statements; it's just the way they communicate. Having Linus call you a moron is nothing (and he's probably right). ;)

        How many times have I looked at my own code and asked, "What MORON came up with this junk?"

    • by Skuto ( 171945 ) on Wednesday March 25, 2009 @08:17AM (#27328415) Homepage

      Well, some Linux filesystem developers (and some fanboys) have been chastising other (higher-performance) filesystems for not providing the guarantees that ext3 ordered mode provides.

      Application developers hence were indirectly educated to not use fsync(), because apparently a filesystem giving anything other than the ext3 ordered mode guarantees is just unreasonable, and ext3 fsync() performance really sucks. (The reason why you don't actually *want* what fsync implies has been explained in the previous ext4 data-loss posts).

      Some of those developers are now complaining that their "new" filesystem (designed to do away with the bad performance of the old one) is disliked by users who are losing data due to applications being encouraged to be written in a bad way, and telling the developers that they now should add fsync() anyway (instead of fixing the actual problem with the filesystem).

      Moreover, they are complaining that the application developers are "weird" for expecting to be able to write many files to the filesystem without having them *needlessly* corrupted. IMAGINE THAT!

      As an aside, the "next generation" btrfs, which was supposed to solve all these problems, has ordered mode by default, but it's an ordered mode that will erase your data in exactly the same way as ext4 does.

      Honestly, the state of filesystems in Linux is SO f***d that just blaming whoever added writeback mode is irrelevant.

      • by Ecuador ( 740021 ) on Wednesday March 25, 2009 @09:03AM (#27329069) Homepage

        Yep, we urgently need some kind of killer FS for Linux...

        Oh, wait...

      • Re: (Score:3, Informative)

        Honestly, the state of filesystems in Linux is SO f***d that just blaming whoever added writeback mode is irrelevant.

        I agree that the who-dun-it part is irrelevant. I disagree on the "SO f***d" part. We have three filesystems that write the journal prior to the data. Basically, we know the issue, and a similar fix can be shared amongst the three affected filesystems. We've had far more "f***d" situations than this (think etherbrick-1000) where hardware was being destroyed without a good understanding of what was happening. Everything will work out as it seems to have everyone's attention.

        BBH

        • Re: (Score:3, Insightful)

          by Skuto ( 171945 )

          I agree that the who-dun-it part is irrelevant. I disagree on the "SO f***d" part. We have three filesystems that write the journal prior to the data. Basically, we know the issue, and a similar fix can be shared amongst the three affected filesystems.

          I would be very surprised if the fix can be shared between the filesystems. At least the most serious among those involved, XFS, sits on a complete intermediate compatibility layer that makes Linux look like IRIX.

          Linux filesystems are seriously in a bad state. You simply cannot pick a good one. Either you get one that does not actively kill your data (ext3 ordered/journal) or you pick one that actually gives decent performance (anything besides ext3).

          Obviously, we should have both. It's not like that is im

        • by Kjella ( 173770 )

          Would you care to make an educated guess on how many run one of said three filesystems - particularly ext3, compared to using an etherbrick-1000? Scale matters, even if it sucks equally much if *your* data was eaten by a one-in-a-billion freak bug or a common one.

      • by SpinyNorman ( 33776 ) on Wednesday March 25, 2009 @09:19AM (#27329273)

        fsync() (sync all pending driver buffers to disk) certainly has a major performance cost, but sometimes you do want to know that your data actually made it to disk - that's an entirely different issue from journalling and data/meta-data order of writes which is about making sure the file system is recoverable to some consistent state in the event of a crash.

        I think sometimes programmers do fsync() when they really want fflush() (flush library buffers to driver) which is about program behavior ("I want this data written to disk real-soon-now", not hanging around in the library buffer indefinitely) rather than a data-on-disk guarantee.

        IMO telling programmers to flatly avoid fsync is almost as bad as having a borked metadata/data write order - programmers should be educated about what fsync does and when they really want/need it and when they don't. I'll also bet that if the file systems supported transactions (all-or-nothing journalling of a sequence of writes to disk), maybe via an ioctl(), many people would be using that instead.
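
        As a tiny C sketch of that fflush()/fsync() distinction (save_record() is a hypothetical helper, not something from the thread; drop the fsync() call when a durability guarantee is not actually needed):

        #include <stdio.h>
        #include <unistd.h>

        /* fflush() only moves data from stdio's user-space buffer into the
         * kernel's page cache; fsync() asks the kernel to push the file's
         * dirty pages out to the disk itself. */
        int save_record(FILE *fp, const char *line)
        {
            if (fputs(line, fp) == EOF)
                return -1;
            if (fflush(fp) != 0)            /* library buffer -> kernel */
                return -1;
            if (fsync(fileno(fp)) != 0)     /* kernel cache -> disk */
                return -1;
            return 0;
        }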

        • by Rich0 ( 548339 ) on Wednesday March 25, 2009 @09:49AM (#27329645) Homepage

          I agree. What we need is a mechanism for an application to indicate to the OS what kind of data is being written (in terms of criticality/persistence/etc). If it is the GIMP swap file, chances are you can optimize differently for performance than if it is a file containing InnoDB tables.

          Right now app developers are having to be concerned with low-level assumptions about how data is being written at the cache level, and that is not appropriate.

          I got burned by this when my MythTV backend kept losing chunks of video when the disk was busy. Turns out the app developers had a tiny buffer in RAM, which they'd write out to disk and then fsync every few seconds. So, if two videos were being recorded, the disk was constantly thrashing between two huge video files while also doing whatever else the system was supposed to be doing. When I got rid of the fsyncs and upped the buffer a little, all the issues went away. When I record video to disk, I don't care if, when the system goes down, I lose the last 20 seconds in addition to the next 5 minutes of the show during the reboot. This is just bad app design, but it highlights the problems when applications start messing with low-level details like the cache.

          Linux filesystems just aren't optimal. I think that everybody is more interested in experimenting with new concepts in file storage, and they're not as interested in just getting files reliably stored to disk. Sure, most of this is volunteer-driven, so I can't exactly put a gun to somebody's head to tell them that no, they need to do the boring work before investing in new ideas. However, it would be nice if things "just worked".

          We need a gradual level of tiers ranging from a database that does its own journaling and needs to know that data is fully written to disk to an application swapfile that if it never hits the disk isn't a big deal (granted, such an app should just use kernel swap, but that is another issue). The OS can then decide how to prioritize actual disk IO so that in the event of a crash chances are the highest priority data is saved and nothing is actually corrupted.

          And I agree completely regarding transaction support. That would really help.

          • Use fadvise (Score:3, Informative)

            by Chemisor ( 97276 )

            > We need a gradual level of tiers ranging from a database that does its own journaling
            > and needs to know that data is fully written to disk to an application swapfile that if
            > it never hits the disk isn't a big deal (granted, such an app should just use kernel swap,
            > but that is another issue).

            Actually there already is a syscall for telling the kernel how the file will be used.

            posix_fadvise (int fd, off_t offset, off_t len, int advice)

            POSIX_FADV_DONTNEED sounds like what you would use for your
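
            For instance, a hedged sketch of how the swap-file case above might use it (mark_scratch_data() is a made-up wrapper, and the hint is not a substitute for fsync() where durability is needed):

            #define _XOPEN_SOURCE 600
            #include <fcntl.h>

            /* Tell the kernel we won't re-read this range soon, so it may drop
             * the cached pages. It is only a hint, not a durability guarantee. */
            static void mark_scratch_data(int fd, off_t offset, off_t len)
            {
                (void)posix_fadvise(fd, offset, len, POSIX_FADV_DONTNEED);  /* len 0 = to end of file */
            }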

        • by Skuto ( 171945 ) on Wednesday March 25, 2009 @10:08AM (#27329873) Homepage

          fsync() (sync all pending driver buffers to disk) certainly has a major performance cost, but sometimes you do want to know that your data actually made it to disk - that's an entirely different issue from journalling and data/meta-data order of writes which is about making sure the file system is recoverable to some consistent state in the event of a crash.

          The two issues are very closely related, not "an entirely different issue". What the apps want is not "put this data on the disk, NOW", but "put this data on the disk sometime, but do NOT kill the old data until that is done".

          Applications don't want to be sure that the new version is on disk. They want to be sure that SOME version is on disk after a crash. This is exactly what some people can't seem to understand.

          fsync() ensures the former at a huge performance cost. rename() + ext3 ordered mode gives you the latter. The problem is that ext4 breaks this BECAUSE of the journal ordering. The "consistent state" is broken for application data.

          I'll also bet that if the file systems supported transactions (all-or-nothing journalling of a sequence of writes to disk), maybe via an ioctl(), that many people would be using that instead.

          Yes. But they are assuming this exists and the API is called rename() :)
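
          A hedged C sketch of that rename()-as-transaction pattern (replace_file() and the ".tmp" suffix are illustrative, not from the thread; the fsync() here is exactly the call that ext3 ordered mode let applications skip):

          #define _XOPEN_SOURCE 500
          #include <fcntl.h>
          #include <stdio.h>
          #include <unistd.h>

          /* Write the new contents somewhere else, then atomically swap names.
           * The old version is never touched, so after a crash SOME complete
           * version of the file still exists. */
          int replace_file(const char *path, const void *buf, size_t len)
          {
              char tmp[4096];
              if (snprintf(tmp, sizeof tmp, "%s.tmp", path) >= (int)sizeof tmp)
                  return -1;

              int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
              if (fd < 0) return -1;
              if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
                  close(fd);
                  unlink(tmp);
                  return -1;
              }
              close(fd);
              return rename(tmp, path);   /* atomic: readers see old or new, never a mix */
          }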

    • by red_dragon ( 1761 ) on Wednesday March 25, 2009 @08:25AM (#27328501) Homepage

      they may very well BE morons, but at least give them a chance to respond before being pilloried by Linus

      He's following Ext3 writeback semantics. You'll have to wait for a patch to fix his behaviour.

    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Wednesday March 25, 2009 @08:35AM (#27328649)
      Comment removed based on user account deletion
    • by Colin Smith ( 2679 ) on Wednesday March 25, 2009 @08:41AM (#27328739)

      Doesn't ext3 work in exactly the way mentioned? AIUI ordered data mode is the default.

      from the FAQ: http://batleth.sapienti-sat.org/projects/FAQs/ext3-faq.html [sapienti-sat.org]

      "mount -o data=ordered"
                      Only journals metadata changes, but data updates are flushed to
                      disk before any transactions commit. Data writes are not atomic
                      but this mode still guarantees that after a crash, files will
                      never contain stale data blocks from old files.

      "mount -o data=writeback"
                      Only journals metadata changes, and data updates are entirely
                      left to the normal "sync" process. After a crash, files may
                      contain stale data blocks from old files: this mode is
                      exactly equivalent to running ext2 with a very fast fsck on reboot.

      So, switching writeback mode to write the data first would simply be using ordered data mode, which is the default...
       

      • Re: (Score:3, Insightful)

        by Skuto ( 171945 )

        So, switching writeback mode to write the data first would simply be using ordered data mode, which is the default...

        The thread starts with someone having serious performance problems exactly because ext3 ordered mode is so slow in some circumstances...

        Like when you fsync().

    • I think it's safe to say that anyone capable of writing a filesystem module at all is far above the "moron" level on the human intelligence scale. Furthermore, anyone willing to volunteer their time by writing such software and donating it to the ungrateful world should be thanked, mistakes or not.

      Linus seems to have the wrong temperament for managing a project of humans.

  • by pla ( 258480 ) on Wednesday March 25, 2009 @07:47AM (#27328051) Journal
    FTA: "if you write your data _first_, you're never going to see corruption at all"

    Agreed, but I think this still misses the point - Computers go down unexpectedly. Period.

    Once upon a time, we all seemed to understand that, and considered writeback behavior (on the rare occasions it was available) a dangerous option only for use on non-production systems with a good UPS attached. And now? We have writeback FS caching silently enabled by default, sometimes without even a way to disable it!

    Yes, it gives a huge performance boost... But performance without reliability means absolutely nothing. Eventually every computer will go down without enough warning to flush the write buffers.
    • by Skuto ( 171945 ) on Wednesday March 25, 2009 @07:59AM (#27328177) Homepage

      You are confusing writeback caching with ext3/4's writeback option, which is simply something different.

      The problem with all the ext3/ext4 discussions has been the ORDER in which things get written, not whether they are cached or not. (Hence the existence of an "ordered" mode.)

      You want new data written first, and the references to that new data updated later, and most definitely NOT the other way around.

      Linus seems to understand this much better than the people writing the filesystems, which is quite ironic.

      • by AlterRNow ( 1215236 ) on Wednesday March 25, 2009 @08:08AM (#27328297)

        Am I right in believing that the new data is written elsewhere and then the metadata is updated in place to point to the new data? I don't know much about filesystems...

        • Re: (Score:2, Informative)

          by AvitarX ( 172628 )

          It is by default, using the ordered journal type in Ext3.

          It is not the behaviour in Ext4 yet, and for now it may not become the default, but rather an option to be set at mount time.

          Currently in Ext4, the metadata in the journal is updated first, then the data is written.

          When software assumes that it can send commands and have them take effect in the order sent, this becomes problematic, because without costly immediate writes there is a risk of losing very old data: the file's metadata gets updated but the data is never written.

        • by Spazmania ( 174582 ) on Wednesday March 25, 2009 @09:15AM (#27329231) Homepage

          Here's what Linus had to say, and I think he hit the nail on the head:

          The point is, if you write your metadata earlier (say, every 5 sec) and
          the real data later (say, every 30 sec), you're actually MORE LIKELY to
          see corrupt files than if you try to write them together.

          And if you write your data _first_, you're never going to see corruption
          at all.

          This is why I absolutely _detest_ the idiotic ext3 writeback behavior. It
          literally does everything the wrong way around - writing data later than
          the metadata that points to it. Whoever came up with that solution was a
          moron. No ifs, buts, or maybes about it.

          • Yes, my initial comment was to ask whether the writing was done to different (free?) blocks and not by over-writing the old blocks (which to me sounds very, very bad).

            Is this what happens?

            1) Write new data to free blocks
            2) Update metadata to point to newly written blocks
            3) Mark old blocks as free

            And I guess with ext4 it is 2, 1, 3?
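
            In memory, the same ordering rule looks like this minimal C sketch (cow_update() and the structs are made up for illustration; a real filesystem also has to force each step to disk in this order):

            #include <stdlib.h>
            #include <string.h>

            struct block { char bytes[4096]; };
            struct inode { struct block *data; };   /* stand-in for file metadata */

            /* 1) write the new block, 2) repoint the metadata, 3) free the old
             * block. Doing 2) before 1) is the writeback-mode hazard: the pointer
             * briefly references uninitialized or stale contents. */
            int cow_update(struct inode *ino, const void *newbytes, size_t len)
            {
                if (len > sizeof(struct block))
                    return -1;

                struct block *nb = malloc(sizeof *nb);  /* 1. pick a free block...      */
                if (nb == NULL)
                    return -1;
                memcpy(nb->bytes, newbytes, len);       /*    ...and write the new data */

                struct block *old = ino->data;
                ino->data = nb;                         /* 2. update the metadata       */
                free(old);                              /* 3. release the old block     */
                return 0;
            }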

      • by Anonymous Coward on Wednesday March 25, 2009 @08:11AM (#27328345)

        Yes! This is the whole point. I am not a filesystem guy either. I don't even know that much about filesystems. But imagine you write a program with some common data storage. Imagine part of that common data is a pointer to some kind of matrix or whatever. Does anybody think it is a good idea to set that pointer first, and then initialize the data later?

        Sure, a really robust program should be able to somehow recover from corrupt data. But that doesn't mean you can just switch your brain off when writing the data.

        • To be fair, in volatile memory, unless you have multi-threading and want non-blocking semantics (as if anybody actually did that), it makes no difference.
      • by mysidia ( 191772 ) on Wednesday March 25, 2009 @08:12AM (#27328359)

        This is a potential problem when you are overwriting existing bytes or removing data.

        In that case, you've removed or overwritten the data on disk, but now the metadata is invalid.

        i.e. You truncated a file to 0 bytes, and wrote the data.

        You started re-using those bytes for a new file that another process is creating.

        Suddenly you are in a state where your metadata on disk is inconsistent, and you crash before that write completes.

        Now you boot back up. You're on ext3, so you only journal metadata, so that's the only thing you can revert; unfortunately, there's really nothing to roll back, since you haven't written any metadata yet.

        Instead of having a 0-byte file, you have a file that appears to be the size it was before you truncated it, but the contents are silently corrupt and contain other program B's data.

        • by Hatta ( 162192 ) on Wednesday March 25, 2009 @08:55AM (#27328947) Journal

          In that case, you've removed or overwritten the data on disk, but now the metadata is invalid.

          i.e. You truncated a file to 0 bytes, and wrote the data.

          Why on earth would you do that? Write the new data, update the metadata, THEN remove the old file.

        • Why can't the filesystem just update data and metadata in given order for a particular file? For example, if you truncate a file and then write to it, the following should happen:
          1. Metadata for `foo' is updated (length=0)
          2. New data for `foo' is written elsewhere
          3. Metadata for `foo' is updated (contents=new_data)

          If, on the other hand, you're doing the create-write-close-rename trick to get an "atomic file replace", then the following should happen:

          1. Metadata for `foo.new' is created (length=0)
          2. New data for `foo.
        • It's also an easily solved problem:

          After a truncate(), you lock the freed blocks against writes until after you've written the updated metadata for the file. Until then, anything you write to the file will have to be allocated elsewhere on the disk. But then that's part of what the reserve slack is for: to increase the probability that there is somewhere else on the disk that you can write it.

        • Re: (Score:3, Informative)

          by Rich0 ( 548339 )

          This is more of a response to the 5 other replies to this comment - but rather than post it 5 times I'll just stick it here...

          What everybody else has proposed is the obvious solution, which is essentially copy-on-write. When you modify a block, you write a new block and then deallocate the old block. This is the way ZFS works, and it will also be used in btrfs. Aside from the obvious reliability improvement, it also can allow better optimization in RAID-5 configurations, as if you always flush an entire

      • Re: (Score:3, Insightful)

        Linus seems to understand this much better than the people writing the filesystems, which is quite ironic.

        It's common sense! Duh. Write data first, pointers to data second. If the system goes down, you're far less likely to lose anything. That's obvious. Those who think this is somehow not obvious don't have the right mentality to be writing kernel code.

        I think the problem is Ted Ts'o has had a slight 'works for me' attitude about it:

        All I can tell you is that *I* don't run into them, even when I was
        using ext3 and before I got an SSD in my laptop. I don't understand
        why; maybe because I don't get really nic

      • Re: (Score:3, Funny)

        by hey ( 83763 )

        Well, it's not ironic. It would be ironic if the ext3/4 authors lost their code in a crash because of the order the data was written in.

      • Linus seems to understand this much better than the people writing the filesystems, which is quite ironic.

        You specifically have to choose writeback mode in the full knowledge that the datablocks will almost certainly be written after the metadata journal.

        I think Ted Ts'o etc. are probably perfectly aware of how it works.

        Frankly I think Linus is trolling.

         

        • Re: (Score:3, Funny)

          by Skuto ( 171945 )

          You specifically have to choose writeback mode in the full knowledge that the datablocks will almost certainly be written after the metadata journal.

          I think Ted Ts'o etc. are probably perfectly aware of how it works.

          Except that ext4 loses data in ordered mode for exactly the same reason, and we had a big fuss about that the last few weeks, because *someone* (cough) said that it's the application developers fault for not fsync()-ing.

        • ext4 by default had the equivalent of ext3 writeback mode on.

  • by Per Wigren ( 5315 ) on Wednesday March 25, 2009 @07:58AM (#27328173) Homepage

    If I were to set up a new home spare-parts server using software RAID-5 and LVM today, using kernel 2.6.28 or 2.6.29, and I really care about not losing important data in case of a power outage or system crash but still want reasonable performance (not running with -o sync), what would be my best choice of filesystem (EXT4 or XFS), mkfs, and mount options?

    • Re: (Score:2, Interesting)

      by AvitarX ( 172628 )

      Ext3 with an ordered (default) style journal.

      I believe XFS has a similar option, and Ext4 will with the next kernel, but for a home type system Ext3 should meet all of your needs, and Linux utilities still know it best.

      Of course you should probably use RAID-10 too, with data disk space so cheap it is well worth it. Using the "far" disk layout, you get very fast reads, and though it penalizes writes (vs RAID 0) in theory, the benchmarks I have seen show that penalty to be smaller than the theory.

      as for mkfs

    • by remmelt ( 837671 ) on Wednesday March 25, 2009 @08:36AM (#27328665) Homepage

      You could also look into Sun's RAID-z:
      http://en.wikipedia.org/wiki/Non-standard_RAID_levels#RAID-Z [wikipedia.org]

    • by Blackknight ( 25168 ) on Wednesday March 25, 2009 @08:37AM (#27328689) Homepage

      Solaris 10 with ZFS, if you actually care about your data.

      • by JayAEU ( 33022 )

        If I recall correctly, BtrFS also does checksumming of individual files and has become available in the latest kernel as well, so it's easier to use with Linux.

        I wouldn't use it on a server just yet, since there might still be some changes to the on-disk format.

    • Re: (Score:3, Informative)

      With LVM, you can easily try out the various filesystems (don't forget JFS!). Personally, I've found Linux XFS to corrupt itself beyond repair, so I use ext3.
    • by mmontour ( 2208 ) <mail@mmontour.net> on Wednesday March 25, 2009 @08:53AM (#27328927)

      My advice:

      - Make regular backups; you'll need them eventually. Keep some off-site.
      - ext3 filesystem, default "data=ordered" journal
      - Disable the on-drive write-cache with 'hdparm'
      - "dirsync" mount option
      - Consider a "relatime" or "noatime" mount option to increase performance (depending on whether or not you use applications that care about atime)
      - If you don't want the performance hit from disabling the on-drive write-cache, add a UPS and set up software to shut down your system cleanly when the power fails. You are still vulnerable to power-supply failures etc. even if you have a UPS.
      - Schedule regular "smartctl" scans to detect low-level drive failures
      - Schedule regular RAID parity checks (triggered through a "/sys/.../sync_action" node) to look for inconsistencies. I have a software-RAID1 mirror and I've found problems here a few times (one of which was that 'grub' had written to only one of the disks of the md device for my /boot partition).
      - Periodically compare the current filesystem contents against one of your old backups. Make sure that the only files that are different are ones that you expected to be different.

      If you decide to use ext4 or XFS most of the above points will still apply. I don't have any experience with ext4 yet so I can't say how well it compares to ext3 in terms of data-preservation.

    • by mikeee ( 137160 )

      Why not try "-o sync"?

      Honestly, if it's a spare-part-server running on a typical home LAN, and is read-mostly, odds are reasonable you won't notice the difference.

      If it is too slow, then you can always go back and screw around with this other nonsense.

  • Geez... (Score:3, Funny)

    by hesaigo999ca ( 786966 ) on Wednesday March 25, 2009 @08:06AM (#27328271) Homepage Journal

    Tell us what you really think there Linus.

    ~I went home today knowing I made someone cry!~

  • ZFS (Score:4, Informative)

    by chudnall ( 514856 ) on Wednesday March 25, 2009 @08:57AM (#27328995) Homepage Journal
    Linux seriously needs to find a workaround to its licensing squabbles [blogspot.com] and find a way to get a rock-solid ZFS in the kernel. Right now, ZFS on OpenSolaris [opensolaris.org] is simply wonderful, and this is what I am deploying for file service at all my customer sites now. The scary thing about filesystem corruption is that it is often silent and can go on for a long time, until your system crashes and you find that all of your backups are also crap. I've replaced a couple of Linux servers (and more than a couple of Windows servers) after filesystem and disk corruption compounded by naive RAID implementations (RAID[1-5] without end-to-end checksumming can make your data *less* safe), and my customers couldn't be happier. Having hourly snapshots [dzone.com] and a fast in-kernel CIFS server fully integrated with ZFS ACLs [sun.com] (and with support for NTFS-style mixed-case naming) is just icing on the cake. Now if only I could have an OpenSolaris desktop with all the nice Linux userland apps available. Oh wait, I can! [nexenta.org]
    • Re: (Score:2, Funny)

      by Anonymous Coward

      Linux seriously needs to find a workaround to its licensing squabbles and find a way to get a rock-solid ZFS in the kernel.

      You must have missed Linus's memo:

      To: Samuel J. Palmisano, IBM Business Guy
      From: Linus Torvalds, Super Genius

      Dear Sam:
      As you know, I've been trying to get a decent file system into Linux for a while. Let's face it, none of these johnny-come-lately open-source arseholes can write a file system to save their life; the last one to have a chance was Reiser, and I really don't want him han

    • Comment removed based on user account deletion
    • Re:ZFS (Score:4, Insightful)

      by Mr.Ned ( 79679 ) on Wednesday March 25, 2009 @03:55PM (#27335051)

      FreeBSD has ZFS. My understanding is while ZFS is a good filesystem, it isn't without issues. It doesn't work well on 32-bit architectures because of the memory requirements, isn't reliable enough to host a swap partition, and can't be used as a boot partition when part of a pool. Here's FreeBSD's rundown of known problems: http://wiki.freebsd.org/ZFSKnownProblems [freebsd.org].

      On the other hand, the new filesystems in the Linux kernel - ext4 and btrfs - are taking the lessons learned from ZFS. I'm excited about next-generation filesystems, and I don't think ZFS is the only way to go.

  • by ivoras ( 455934 ) <ivoras&fer,hr> on Wednesday March 25, 2009 @09:14AM (#27329215) Homepage

    Somebody's going to mention it, so here it is: there was a BSD Unix research project that ended up as the soft-updates implementation (currently present in all modern free BSDs). It deals precisely with the ordering of metadata and data writes. The paper is here: http://www.ece.cmu.edu/~ganger/papers/softupdates.pdf [cmu.edu]. Regardless of what Linus says, soft-updates with strong ordering also do metadata updates before data updates, and also keep track of ordering *within* metadata. It has proven to be very resilient (up to hardware problems).

    Here's an excerpt:

    We refer to this requirement as an update dependency, because safely writing the directory entry depends on first writing the inode. The ordering constraints map onto three simple rules: (1) Never point to a structure before it has been initialized (e.g., an inode must be initialized before a directory entry references it). (2) Never reuse a resource before nullifying all previous pointers to it (e.g., an inode's pointer to a data block must be nullified before that disk block may be reallocated for a new inode). (3) Never reset the last pointer to a live resource before a new pointer has been set (e.g., when renaming a file, do not remove the old name for an inode until after the new name has been written). The metadata update problem can be addressed with several mechanisms. The remainder of this section discusses previous approaches and the characteristics of an ideal solution.

    There's some quote about this... something about those who don't know unix and about reinventing stuff, right :P ?

    • by LizardKing ( 5245 ) on Wednesday March 25, 2009 @10:14AM (#27329943)

      It has proven to be very resilient (up to hardware problems).

      No it hasn't, which is why it has been removed from NetBSD and replaced by a journaled filesystem. I've also heard grumblings from OpenBSD people about corrupted filesystems with softdep enabled.

    • Re: (Score:3, Insightful)

      by SpinyNorman ( 33776 )

      (1) Never point to a structure before it has been initialized

      Which surely includes writing data before meta-data (and write the data someplace other than where the old meta-data is pointing), which is what Linus was saying.

  • Fix it (Score:5, Funny)

    by Frankie70 ( 803801 ) on Wednesday March 25, 2009 @11:27AM (#27330989)

    Maybe Linus should just fix it instead of whining about it. It's open source, dammit.
