Ext4 Data Losses Explained, Worked Around

ddfall writes "H-Online has a follow-up on the Ext4 file system: last week's news about data loss with the Linux Ext4 file system is explained, and Ted Ts'o has provided new solutions to allow Ext4 to behave more like Ext3."
This discussion has been archived. No new comments can be posted.

  • by Em Emalb ( 452530 ) <ememalbNO@SPAMgmail.com> on Thursday March 19, 2009 @12:50PM (#27258731) Homepage Journal

    User: My data, it's gone!
    EXT4:"Ext4 developer Ted Ts'o stresses in his answer to the bug report that Ext4 behaves precisely as demanded by the POSIX standard for file operations."

    Solution: WORKS AS DESIGNED

    • Re:LOL: Bug Report (Score:5, Insightful)

      by Z00L00K ( 682162 ) on Thursday March 19, 2009 @12:58PM (#27258863) Homepage Journal

      This is the problem with new features - users have problems with them until they fully understand and appreciate the advantages and disadvantages.

      And also consider - ext4 is relatively new, so it will improve over time. If you want stability stick to ext3 or ext2. If you want a really stupid filesystem go FAT and prepare for a patent attack.

      • Re:LOL: Bug Report (Score:5, Insightful)

        by von_rick ( 944421 ) on Thursday March 19, 2009 @01:15PM (#27259141) Homepage

        And also consider - ext4 is relatively new, so it will improve over time. If you want stability stick to ext3 or ext2.

        QFT

        The filesystem was first released towards the end of December 2008. The Linux distros that incorporated it offered it as an option, but the default for /root and /home was always EXT3.

        In addition, this problem is not a week old as the article states. People have been discussing it on forums since mid-January, when the EXT4 benchmarks were published and several people decided to try it out to see how it fared. I have been using EXT4 for my /root partition since January. Fortunately I haven't had any data loss, but if I do end up losing some data, I'd understand, since I have been using a brand-new file system which has not been thoroughly tested by users, nor used on any servers that I know of.

        • Re: (Score:3, Insightful)

          You have a separate partition for /root ? How large can the home folder of the root user be?

      • Re:LOL: Bug Report (Score:5, Insightful)

        by try_anything ( 880404 ) on Thursday March 19, 2009 @01:48PM (#27259617)

        This is the problem with new features - users have problems with them until they fully understand and appreciate the advantages and disadvantages.

        Advantages: Filesystem benchmarks improve. Real performance... I guess that improves, too. Does anybody know?

        Disadvantages: You risk data loss with 95% of the apps you use on a daily basis. This will persist until the apps are rewritten to force data commits at appropriate times, but hopefully not frequently enough to eat up all the performance improvements and more.

        Ext4 might be great for servers (where crucial data is stored in databases, which are presumably written by storage experts who read the Posix spec), but what is the rationale for using it on the desktop? Ext4 has been coming for years, and everyone assumed it was the natural successor to ext3 for *all* contexts where ext3 is used, including desktops. I hope distros don't start using or recommending ext4 by default until they figure out how to configure it for safe usage on the desktop. (That will happen long before the apps are rewritten.) Filesystem benchmarks be damned.

        • Re:LOL: Bug Report (Score:4, Interesting)

          by causality ( 777677 ) on Thursday March 19, 2009 @02:04PM (#27259871)

          Disadvantages: You risk data loss with 95% of the apps you use on a daily basis. This will persist until the apps are rewritten to force data commits at appropriate times, but hopefully not frequently enough to eat up all the performance improvements and more.

          For those of us who are not so familiar with the data loss issues surrounding EXT4, can someone please explain this? The first question that came to mind when I read that is "why would the average application need to concern itself with filesystem details?" I.e. if I ask OpenOffice to save a file, it should do that the exact same way whether I ask it to save that file to an ext2 partition, an ext3 partition, a reiserfs partition, etc. What would make ext4 an exception? Isn't abstraction of lower-level filesystem details a good thing?

          • Re:LOL: Bug Report (Score:5, Interesting)

            by swillden ( 191260 ) <shawn-ds@willden.org> on Thursday March 19, 2009 @02:24PM (#27260213) Journal

            The first question that came to mind when I read that is "why would the average application need to concern itself with filesystem details?"

            They don't. Applications just need to concern themselves with the details of the APIs they use, and the guarantees those APIs do or don't provide.

            The POSIX file APIs specify quite clearly that there is no guarantee that your data is on the disk until you call fsync(). The problem is with applications that assumed they could ignore what the specification said just because it always seemed to work okay on the file systems they tested with.
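
            A minimal sketch of that contract in C (illustrative only: save_file is a made-up helper and error handling is trimmed):

                /* Data is only known to be on stable storage once fsync()
                   returns successfully; close() alone promises nothing. */
                #include <fcntl.h>
                #include <string.h>
                #include <unistd.h>

                int save_file(const char *path, const char *buf)
                {
                    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
                    if (fd < 0)
                        return -1;
                    if (write(fd, buf, strlen(buf)) < 0 || /* may sit in the page cache */
                        fsync(fd) < 0) {                   /* force it to stable storage */
                        close(fd);
                        return -1;
                    }
                    return close(fd);
                }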

            • Re:LOL: Bug Report (Score:5, Insightful)

              by causality ( 777677 ) on Thursday March 19, 2009 @02:46PM (#27260521)

              The first question that came to mind when I read that is "why would the average application need to concern itself with filesystem details?"

              They don't. Applications just need to concern themselves with the details of the APIs they use, and the guarantees those APIs do or don't provide.

              The POSIX file APIs specify quite clearly that there is no guarantee that your data is on the disk until you call fsync(). The problem is with applications that assumed they could ignore what the specification said just because it always seemed to work okay on the file systems they tested with.

              Thanks for explaining that. In that case, I salute Mr. Ts'o and others for telling the truth and not caving in to pressure when they are in fact correctly following the specification. Too often people who are correct don't have the fortitude to value that more than immediate convenience, so this is a refreshing thing to see. Perhaps this will become the sort of history with which developers are expected to be familiar.

              I imagine it will take a lot of work, but at least with Free Software this can be fixed. That's definitely what should happen, anyway. There are times when things just go wrong no matter how correct your effort was; in those cases, it makes sense to just deal with the problem in the most hassle-free manner possible. This, however, is not one of those times. Thinking that you can selectively adhere to a standard and still claim that you are compliant with it is just the sort of thing that really should cause problems. Correcting the applications that made faulty assumptions is therefore the right way to deal with this, daunting and inconvenient though that may be.

              Removing this delayed-allocation feature from ext4 or placing limits on it that are not required by the POSIX standard is definitely the wrong way to deal with this. To do so would surely invite more of the same. It would only encourage developers to believe that the standards aren't really important, that they'll just be "bailed out" if they fail to implement them. You don't need any sort of programming or system design expertise to understand that, just an understanding of how human beings operate and what they do with precedents that are set.

            • Re:LOL: Bug Report (Score:5, Insightful)

              by AigariusDebian ( 721386 ) <`gro.naibed' `ta' `suiragia'> on Thursday March 19, 2009 @02:53PM (#27260613) Homepage

              1) Modern filesystems are expected to behave better than POSIX demands.

              2) POSIX does not cover what should happen in a system crash at all.

              3) The issue is not about saving data, but the atomicity of updates so that either the new data or the old data would be saved at all times.

              4) fsync is not a solution, because it forces the operation to complete *now*, which is bad for write performance, cache coherence, laptop battery life, SSD wear and a bunch of other things.

              We don't need reliable data-on-disk-now, we need reliable old-or-new data without using a sledgehammer of fsync.

              • Bollocks (Score:3, Interesting)

                by Colin Smith ( 2679 )

                A filesystem is not a Database Management System. Its purpose is to store files. If you want transactions, use a DBMS. There are plenty out there which use fsync correctly. Try SQLite.

                 

              • Re:LOL: Bug Report (Score:5, Insightful)

                by blazerw ( 47739 ) on Thursday March 19, 2009 @04:31PM (#27261947)

                1) Modern filesystems are expected to behave better than POSIX demands.

                2) POSIX does not cover what should happen in a system crash at all.

                3) The issue is not about saving data, but the atomicity of updates so that either the new data or the old data would be saved at all times.

                4) fsync is not a solution, because it forces the operation to complete *now*, which is bad for write performance, cache coherence, laptop battery life, SSD wear and a bunch of other things.

                We don't need reliable data-on-disk-now, we need reliable old-or-new data without using a sledgehammer of fsync.

                1. POSIX is an API. It tries not to force the filesystem into being anything at all. So, for instance, you can write a filesystem that waits to do writes more efficiently to cut down on the wear of SSDs.
                2. Ext3 has a max 5 second delay. That means this bug exists in Ext3 as well.
                3. If you have important data that if not written to the hard drive will cause catastrophic failure, then you use the part of the API that forces that write.
                4. Atomicity does not guarantee the filesystem be synchronized with cache. It means that during the update no other process can alter the affected file and that after the update the change will be seen by all other processes.

                We don't need a filesystem that sledgehammers each and every byte of data to the hard drive just in case there is a crash. What we DO need is a filesystem that can flexibly handle important data when told it is important, and less important data very efficiently.

                What you are asking is that the filesystem be some kind of sentient, all-knowing being that can tell whether data is important, writing important data immediately and non-important data efficiently. I think it is a little better to have the application be the one that knows when it's dealing with important data.

                • Re:LOL: Bug Report (Score:4, Informative)

                  by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Thursday March 19, 2009 @05:08PM (#27262365) Homepage

                  3. If you have important data that if not written to the hard drive will cause catastrophic failure, then you use the part of the API that forces that write.

                  You completely missed the point. The new data isn't important; it could be lost and nobody would care. The troublesome part is that you lose the old data too. If you lost the last 5 minutes of changes in your KDE config, that would be a non-issue; what actually happens is that you lose not just the last few changes but your complete config. It ends up as 0-byte files, which is a state that the filesystem never had.

                • Re:LOL: Bug Report (Score:5, Insightful)

                  by spitzak ( 4019 ) on Thursday March 19, 2009 @05:30PM (#27262591) Homepage

                  You don't understand the problem.

                  You are wrong when you say EXT3 has this problem. It does not have it. If the EXT3 system crashes during those 5 seconds, you either get the old file or the new one. For EXT4, if it crashes, you can get a zero-length file, with both the old and new data lost.

                  The long delay is irrelevant and is confusing people about this bug. In fact the long delay is very nice in EXT4 as it means it is much more efficient and will use less power. I don't really mind if a crash during this time means I lose the new version of a file. But PLEASE don't lose the old one as well!!! That is inexcusable, and I don't care if the delay is .1 second.

            • Re:LOL: Bug Report (Score:5, Informative)

              by zenyu ( 248067 ) on Thursday March 19, 2009 @03:47PM (#27261367)

              They don't. Applications just need to concern themselves with the details of of the APIs they use, and the guarantees those APIs do or don't provide.

              Yup, and the problem has existed with KDE startup for years. I remember the startup files getting trashed when Mandrake first came out and I tried KDE for long enough to get hooked, and it's happened to me a few times a year ever since, with every filesystem I've used. I just make my own backups of the .kde directory and fix this manually when it happens. I'm pretty good at this restore by now. Hopefully this bug in KDE will get fixed now that it is causing the KDE project such great embarrassment. I had a silent wish that Ts'o would increase the default commit interval to 10 minutes when the first defenders of the KDE bug started squawking, but he was too gracious for that.

              PS I use a lot of experimental graphics drivers for work, hence lockups during startup are common enough that I probably see this KDE bug more than most KDE users. But they really violate every rule of using config files: 1st, open with the minimum permission needed (in this case read-only) unless a write is absolutely necessary. 2nd, only update a file when it needs updating. 3rd, when updating a config file, make a copy, commit it to disk, and then replace the original, making sure file permissions and ownership are unchanged; then commit the rename if necessary.
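
              Rule 3 might look something like this in C (a sketch; update_config is a made-up name, and real code would also restore the original's permissions and ownership on the copy):

                  /* Write a temporary copy, commit it to disk with fsync(),
                     then atomically replace the original via rename(). */
                  #include <fcntl.h>
                  #include <stdio.h>
                  #include <string.h>
                  #include <unistd.h>

                  int update_config(const char *path, const char *contents)
                  {
                      char tmp[4096];
                      snprintf(tmp, sizeof tmp, "%s.tmp", path);

                      int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0600);
                      if (fd < 0)
                          return -1;
                      if (write(fd, contents, strlen(contents)) < 0 ||
                          fsync(fd) < 0) {          /* commit the copy... */
                          close(fd);
                          return -1;
                      }
                      if (close(fd) < 0)
                          return -1;
                      return rename(tmp, path);     /* ...then swap it in */
                  }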

              PS2 Those computer users saying an fsync will kill performance need to have a cluebat applied to them by the nearest programmer. 1st, there will be no fsyncs of config files at startup once the KDE startup is fixed. 2nd, fsyncs on modern filesystems are pretty fast (ext3 is the rare exception to that norm); this will not be noticeable when you apply a settings change. 3rd, these types of programming errors are not the norm; I've graded first- and second-year computer science classes, and each of the three major mistakes made here would have lost you 20-30% of your score for the assignment.

              • Re:LOL: Bug Report (Score:5, Insightful)

                by somenickname ( 1270442 ) on Thursday March 19, 2009 @05:17PM (#27262477)

                fsyncs have nasty side effects besides performance. For example, in Firefox 3, places.sqlite is fsynced after every page is loaded. For a laptop user, this behavior is unacceptable, as it prevents the disks from staying spun down (not to mention the infuriating whine of the disk spinning up after every, or nearly every, page load). The use of fsync in Firefox 3 has actually caused some people (myself included) to mount ~/.mozilla as tmpfs and just write a cron job to write changed files back to disk once every 10 minutes.

                So, while I'm all for applications using fsync when it's really needed, the last thing I'd like to see is every application on the planet sprinkling its code with fsync "just to be sure".

              • Re:LOL: Bug Report (Score:5, Interesting)

                by Cassini2 ( 956052 ) on Thursday March 19, 2009 @05:22PM (#27262517)

                PS2 Those computer users saying an fsync will kill performance need to get cluebat applied to them by the nearest programmer.
                1st. There will be no fsyncs of config files at startup once the KDE startup is fixed.

                KDE isn't fixed right now. Additionally, KDE is not the only application that generates lots of write activity. I work with real-time systems, and write performance on data collection systems is important.

                2nd. fsyncs on modern filesystems are pretty fast, ext3 is the rare exception to that norm; this will be non-noticable when you apply a settings change.

                I did some benchmarks on the ext3 file system, the ext4 file system without the patch, and the ext4 file system with the patch. Code that followed the open(), write(), close() sequence was 76% faster than the code with fsync(). Code that followed the open(), write(), close(), rename() sequence was 28% faster than code that followed the open(), write(), fsync(), close(), rename() sequence. Additionally, the benchmarks were not significantly affected by which file system was used (ext3, ext4, or ext4 patched). You can look up the spreadsheet and the discussion at the launchpad discussion. [launchpad.net]

                3rd. These types of programming errors are not the norm; I've graded first and second year computer science classes and each of the three major mistakes made would have lost you 20-30% of your score for the assignment.

                Major Linux file backup utilities like tar, gzip, and rsync don't use fsync as part of normal operations. The only one of the three that uses fsync, tar, only uses it when verifying that data is physically written to disk. In that situation, it writes the data, calls fsync, calls ioctl(FDFLUSH), and then reads the data back. Strictly speaking, that is the only way to make sure the file is written to disk and is readable.

                Finally, as Theodore Ts'o has pointed out, if you really want to make sure the file is saved to disk, you also have to fsync() the directory. I have never seen anyone do that as part of a normal file save. Most C programming textbooks simply give fopen, fwrite, fclose as the recommended way to save files. Calling fsync this often is unusual for most C programmers.
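
                For reference, syncing the directory looks something like this (a sketch; sync_dir is a made-up helper, and O_DIRECTORY is assumed to be available):

                    /* After renaming a file, fsync the containing directory
                       so the new directory entry itself reaches the disk. */
                    #include <fcntl.h>
                    #include <unistd.h>

                    int sync_dir(const char *dirpath)
                    {
                        int fd = open(dirpath, O_RDONLY | O_DIRECTORY);
                        if (fd < 0)
                            return -1;
                        int rc = fsync(fd);
                        close(fd);
                        return rc;
                    }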

                I would hate to be in your programming class. You're enforcing programming standards that aren't followed by key Linux utilities, aren't in most textbooks, and aren't portable to non-Linux file systems.

                If you require your students to fsync() the file and the directory, as part of a normal assignment, you are requiring them to do things that aren't done by any Linux utility out there. Further, if you are that paranoid, you better follow the example from the tar utility, and after the fsync completes, read all the data back to verify it was successfully written.

          • Re: (Score:3, Informative)

            by PitaBred ( 632671 )
            Basically, the spec was written one way, but the actual behavior was slightly different. Even though the standard didn't guarantee that something would be written, most filesystems did it anyway. When EXT4 didn't write things immediately, to improve performance, the applications that depended on filesystems writing data ASAP (even though it wasn't required behavior) started risking data loss in the event of a crash before the data was explicitly written.
            The mechanism (fsync) has been around for ages, it's just tha
          • Re: (Score:3, Informative)

            by MikeBabcock ( 65886 )

            You don't risk any data loss, ever, if you shut down your system properly. The system will sync the data to disk as expected and everything will be peachy. You risk data loss if you lose power or otherwise shut down at an inopportune time and the data hasn't been sync'd to disk yet.

            That is to say, 99% of people who use their computers properly won't have a problem.

            Also note, the software you use should be doing something like:

            loop: write some data, write some more data, finish writing data, fsync the data

            • Re: (Score:3, Informative)

              by spitzak ( 4019 )

              ARRGH! This has nothing to do with the data being written "soon".

              The problem with EXT4 is that people expect the data to be written before the rename!

              Fsync() is not the solution. We don't want it written now. It is ok if the data and rename are delayed until next week, as long as the rename happens after the data is in the file!

  • by morgan_greywolf ( 835522 ) on Thursday March 19, 2009 @12:52PM (#27258767) Homepage Journal

    FTFA, this is the problem:

    Ext4, on the other hand, has another mechanism: delayed block allocation. After a file has been closed, up to a minute may elapse before data blocks on the disk are actually allocated. Delayed block allocation allows the filing system to optimise its write processes, but at the price that the metadata of a newly created file will display a size of 0 bytes and occupy no data blocks until the delayed allocation takes place. If the system crashes during this time, the rename() operation may already be committed in the journal, even though the new file still contains no data. The result is that after a crash the file is empty: both the old and the new data have been lost.

    And now my question: Why did the Ext4 developers make the same mistakes Reiser and XFS both made (and later corrected) years ago? Before you get to write any filesystem code, you should have to study how other people have done it, including all the change history. Seriously.

    Those who fail to learn the lessons of [change] history are doomed to repeat it.

    • Re: (Score:2, Insightful)

      Speaking as someone who has developed commercial OS code (OS/2), I always assumed that the person before me understood what they were doing, because if you didn't, you'd spend all your time researching how the 'wheel' was invented. Also, aside from this very rare occurrence, it is pretty arrogant to think that your predecessors are incompetent or, to be generous, ignorant.

      This problem is just something that slipped through the cracks and I'm sure the originator of this bug is kicking himself in the

    • Re: (Score:3, Insightful)

      by dotancohen ( 1015143 )

      Before you get to write any filesystem code, you should have to study how other people have done it...

      No. Being innovative means being original, and that means taking new and different paths. Once you have seen somebody else's path, it is difficult to go out on your own original path. That is why there are alpha and beta stages to a project, so that watchful eyes can find the mistakes that you will undoubtedly make, even those that have been made before you.

      • Making the same mistakes someone else made is NOT being innovative, it's being stupid or ignorant... or a number of other predicate adjectives.

        Innovation is using something in a new way, not making the same mistake in a new way. That's still considered a mistake, and if it can be shown that you should have known about the mistake from someone else making it, you're still "making the same mistake" and not "innovating." Not to say you're not going to make mistakes and not know everything, but it's still a

    • No kidding (Score:5, Insightful)

      by Sycraft-fu ( 314770 ) on Thursday March 19, 2009 @01:36PM (#27259445)

      All the stuff with Ext4 strikes me as amazingly arrogant and ignorant of the past. The issue that FS authors (well, authors of any system programs, tools, etc.) need to understand is that your tool being usable is the #1 important thing. In the case of a file system, that means it reliably stores data on the drive. So, if you do something that really screws that over, you probably did it wrong. Doesn't matter if you fully documented it, doesn't matter if it technically "follows the spec"; what matters is that it isn't usable.

      I mean I could write a spec for a file system that says "No write is guaranteed to be written to disk until the OS is shut down, everything can be cached in RAM for an indefinite amount of time." However that'd be real flaky and lead to data loss. That makes my FS useless. Doesn't matter if it is well documented, what matters is that the damn thing loses data on a regular basis.

      I'd give these guys more credit if I were aware of any other major OS/FS combo that did shit like this, but I'm not. Linux/Ext3 doesn't, Windows/NTFS doesn't, OS-X/HFS+ doesn't, Solaris/ZFS doesn't, etc. Well, that tells me something. That says that the way they are doing things isn't a good idea. If it is causing problems AND it is something nobody else does, then you probably ought not do it.

      This is just bad design, in my opinion.

      • Re: (Score:3, Insightful)

        by mr_mischief ( 456295 )

        It does reliably store data that has been properly synchronized by the application's author. The data that is lost is what has been sent to a filehandle but not yet synchronized when the system loses power or crashes.

        The FS isn't the problem, but it is exposing problems in applications. If you need your FS to be a safety net for such applications, nobody is taking ext3 away just because ext4 is available. If you want the higher performance of ext4, buy a damn UPS already.

        • Re:No kidding (Score:5, Informative)

          by Tacvek ( 948259 ) on Thursday March 19, 2009 @03:40PM (#27261275) Journal

          I don't think you have it right.

          On Ext3 with "data=ordered" (a default mount option), if one writes the file to disk, and then renames the file, ext3 will not allow the rename to take place until after the file has been written to disk.

          Therefore if an application that wants to change a file uses the common pattern of writing to a temporary file and then renaming (the renaming is atomic on journaling file systems), if the system crashes at any point, when it reboots the file is guaranteed to be either the old version or the new version.

          With Ext4, if you write a file and then rename it, the rename can happen before the write. Thus if the computer crashes between the rename and the write, on reboot the result will be a zero byte file.

          The fact that the new version of the file may be lost is not the issue. The issue is that both versions of the file may be lost.

          The end result is the write and rename method of ensuring atomic updates to files does not work under Ext4.

          A new mount option that forces the rename to come after the data is written to disk is being added. Once that is available, the problem will be gone if you use that mount option. Hopefully it will be made a default mount option.

      • by diegocgteleline.es ( 653730 ) on Thursday March 19, 2009 @02:46PM (#27260507)

        "No write is guaranteed to be written to disk until the OS is shut down, everything can be cached in RAM for an indefinite amount of time." However that'd be real flaky and lead to data loss. That makes my FS useless. Doesn't matter if it is well documented, what matters is that the damn thing loses data on a regular basis.

        It turns out that all modern operating systems work exactly like that. In ALL of them you need to use explicit synchronization (fsync and friends) to get a notification that your data has really been written to disk (and that's all you get, a notification, because the system could oops before fsync finishes). You can also mount your filesystem as "sync", which sucks.

        Journaling and COW/transaction-based filesystems like ZFS only guarantee integrity, not that your data is safe. It turns out that Ext3 has the same problem; it's just that the window is smaller (5 seconds). And I wouldn't bet that HFS and ZFS don't have the same problem (btrfs is COW and transaction-based, like ZFS, and has the same problem).

        Welcome to the real world...

        • by Tacvek ( 948259 ) on Thursday March 19, 2009 @03:52PM (#27261439) Journal

          The Ext3 5 seconds thing is true, but that is not the important difference.

          On Ext3, with the default mount options, if one writes a file to disk and then renames it, the write is guaranteed to come before the rename. This can be used to ensure atomic updates to files, by writing a temporary copy of the file with the desired changes and then renaming it over the original.

          On Ext4, if one writes a file to the disk and then renames it, the rename can happen first. The result is that it is not possible to ensure atomic updates to files unless one uses fsync between the writing and the renaming. However, that hurts performance, since fsync forces the file to be committed to disk right now, when all that really matters is that it is committed before the rename.

          Thankfully the Ext4 module will be gaining a new mount option that will ensure that a file is written to disk before the renaming occurs. This mount option should have no real impact on performance, but will ensure the atomic update idiom that works on Ext3 will also work on Ext4.

      • Re: (Score:3, Informative)

        The issue that FS authors, well any authors of any system programs/tools/etc need to understand is that your tool being usable is the #1 important thing.

        Part of usability is performance. This is a significant performance improvement.

        So, if you do something that really screws that over, well then you probably did it wrong. Doesn't matter if you fully documented it, doesn't matter if it technically "follows the spec" what matters is that it isn't usable.

        The real problem here is that application developers were relying on a hack that happened to work on ext3, but not everywhere else.

        Let me ask you this -- should XFS change the way it behaves because of this? EXT4 is doing exactly what XFS has done for decades.

        I mean I could write a spec for a file system that says "No write is guaranteed to be written to disk until the OS is shut down, everything can be cached in RAM for an indefinite amount of time." However that'd be real flaky and lead to data loss.

        No, that's actually precisely what the spec says, with one exception: You can guarantee it to be written to disk by calling fsync.

        I'd give these guys more credit if I was aware of any other major OS/FS combo that did shit like this, but I'm not.

        Only because you haven't looked.

        In fact, t

      • Re: (Score:3, Insightful)

        I just posted in the wrong thread. Synopsis:

        I made a lot of money back in the '90s repairing NTFS installs. The similarity between NTFS back then and EXT4 now is that they are/were young file systems.

        Give Ted and company a break. Let him get the damn thing fixed up (I have plenty of faith in Ted). Hell, I even remember losing an EXT3 file system back when it was fresh out of the gate. And I'm sure there's plenty who could say the same for all those you listed, including ZFS.

        And your comment about extended data caching.

    • As explained in the article - he hasn't made a mistake. The behaviour of ext4 is perfectly compatible with the POSIX standard.

      man fsync

    • Delayed block allocation allows the filing system to optimise its write processes, but at the price that the metadata of a newly created file will display a size of 0 bytes and occupy no data blocks until the delayed allocation takes place. [...] And now my question: Why did the Ext4 developers make the same mistakes Reiser and XFS both made (and later corrected) years ago? [...] Those who fail to learn the lessons of [change] history are doomed to repeat it.

      They tried to, but history was just a 0-byte file

  • by Spazmania ( 174582 ) on Thursday March 19, 2009 @12:53PM (#27258781) Homepage

    Ext4 developer Ted Ts'o stresses in his answer to the bug report that Ext4 behaves precisely as demanded by the POSIX standard for file operations.

    I couldn't disagree more:

    When applications want to overwrite an existing file with new or changed data [...] they first create a temporary file for the new data and then rename it with the system call - rename(). [...] Delayed block allocation allows the filing system to optimise its write processes, but at the price that the metadata of a newly created file will display a size of 0 bytes and occupy no data blocks until [up to 60 seconds later].

    Application developers reasonably expect that writes to the disk which happen far apart in time will happen in order. If I write to a file and then rename the file, I expect that the rename will not complete significantly before the write. Certainly not 60 seconds before the write. It seems dead obvious, at least to me, that the update of the directory entry should be deferred until after ext4 flushes that part of the file written prior to the change in the directory entry.

    • by Anonymous Coward on Thursday March 19, 2009 @02:47PM (#27260537)

      behaves precisely as demanded by the POSIX standard

      Application developers reasonably expect

      Apples and oranges. POSIX != "what app developers reasonably expect".

      Of course you have a point insofar as that just pointing to POSIX and saying it's a correct implementation of the spec is not enough, but let's be clear here that one of these things is not like the other.

    • Re: (Score:3, Informative)

      by nusuth ( 520833 )
      Application developers reasonably expect that writes to the disk which happen far apart in time will happen in order. If I write to a file and then rename the file, I expect that the rename will not complete significantly before the write. Certainly not 60 seconds before the write.

      That sounds like a reasonable assumption, but it is certainly not reasonable to write code that depends on it. 60 seconds is an eternity for a computer, but so is a second. Therefore the fact that 60 seconds is much longer

  • by girlintraining ( 1395911 ) on Thursday March 19, 2009 @12:54PM (#27258809)

    Short version: "We're sorry we changed something that worked and everyone was used to, but hey -- it's compliant with a standard." If this were Microsoft, we'd give them a healthy helping of humble pie, but because it's Linux and the magic word "POSIX" gets used, I'm sure we'll forgive them for it. The workaround is laughable -- "call fsync(), and then wait(), wait(), wait(), for the Wizard to see you." How about writing a filesystem that actually does journaling in a reliable fashion, instead of finger-pointing after the user loses data due to your snazzy new optimization and saying "The developer did it! It wasn't us, honest." When Microsoft does this we tar and feather them, but when the guys making the "latest and greatest" Linux feature do it, we salute them?

    We let our own off with heinous mistakes while we hang professionals who do the same thing, simply because they dared to ask to be paid for their effort. Lame.

    • by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Thursday March 19, 2009 @01:02PM (#27258929) Homepage Journal

      But... those of us who learned the Ancient And Most Wise ways always triple-sync. We also sacrifice Peeps and use red food colouring in voodoo ceremonies (hey, it really is blood, so it should work) to keep the hardware from failing.

      On next week's Slashdot, there will be a brief tutorial on the right way to burn a Windows CD at the stake, and how to align the standing stones known as RAM Chips to points of astronomical significance.

    • Re: (Score:2, Interesting)

      No, we don't salute them. If you ask me, no matter what Ted Ts'o says about it complying with the POSIX standard, sorry, but it's a bug if it causes known, popular applications to seriously break, IMHO.

      Broken is broken, whether we're talking about Ted Ts'o or Microsoft.

    • Dunno (Score:5, Insightful)

      by Shivetya ( 243324 ) on Thursday March 19, 2009 @01:05PM (#27258983) Homepage Journal

      but if you want a write-later file system, shouldn't it be restricted to hardware that can preserve it?

      I understand that doing writes immediately when requested leads to performance degradation, but that is why business systems which defer writes to disk only do so when the hardware can guarantee it. In other words, we have a battery-backed cache; if the battery is low or nearing end of life, the cache is turned off and all writes are made when the data changes.

      Trying to make performance gains to overcome limitations of the hardware never wins out.

      • Re: (Score:3, Informative)

        by MikeBabcock ( 65886 )

        Without write-back (that is, delaying writes and keeping them in a cache), you lose elevator sorting. No elevator sorting makes heavy drive usage ridiculously slower than with it.

        You can't re-sort and organize your disk activity without the ability to delay the data in a pool.

        The difference between EXT3 and EXT4 is not whether the data gets written immediately -- neither do that. The difference is how long they wait. EXT4 finally gives major power preservation by delaying writes until necessary so

    • Re: (Score:3, Insightful)

      by Dan667 ( 564390 )
      I believe a major difference is that Microsoft would just deny there was a problem at all. If they did acknowledge it, they certainly would not detail what it is.
    • wait(), wait(), wait(), for the Wizard to see you

      There's no place like /home.
      There's no place like /home.
      There's no place like /home.

    • Re: (Score:2, Informative)

      We let our own off with heinous mistakes while we hang professionals who do the same thing, simply because they dared to ask to be paid for their effort. Lame.

      Is Ted Ts'o not professional? Does he not get paid? Ts'o is employed by the Linux Foundation, on leave from IBM. Free Software does not mean volunteer-made software!

    • by TheMMaster ( 527904 ) <hp@tmm.TWAINcx minus author> on Thursday March 19, 2009 @01:48PM (#27259629)

      Actually, no.

      Microsoft runs a proprietary show where they 'set the standard' themselves. Which basically means 'there is no standard except how we do it'.
      Linux, however, tries to adhere to standards. When it turns out that something doesn't adhere to standards, it gets fixed.

      Another problem is that most users of proprietary software on their proprietary OS don't have the sources to the software they use, so if the OS fixes something that was previously broken, but the software version used is 'no longer supported' the 'fix' in the OS breaks the users' software and the user has no option of fixing his software.

      THIS is why a) Microsoft can't ever truly fix something and b) using proprietary software screws over the user.

      Or would you rather have OSS software do the same as proprietary software vendors and work around problems forever while never fixing them? Saw that shiny 'run in IE7 mode' button in IE8? That's what you'll get...

    • by Hatta ( 162192 ) on Thursday March 19, 2009 @01:51PM (#27259667) Journal

      If this were Microsoft, we'd give them a healthy helping of humble pie, but because it's Linux and the magic word "POSIX" gets used, I'm sure we'll forgive them for it.

      You must be reading a different slashdot than I am. The popular opinion I see is that this is very bad design. If the spec allows this behavior, it's time to revisit the spec.

    • Short version: "We're sorry we changed something that worked and everyone was used to, but hey -- it's compliant with a standard." If this were Microsoft, we'd give them a healthy helping of humble pie

      If Microsoft simultaneously sacrificed backwards compatibility and correctly implemented a standard, we'd probably be left completely speechless.

    • voting (Score:4, Funny)

      by Skapare ( 16644 ) on Thursday March 19, 2009 @02:06PM (#27259905) Homepage

      So is this why we can't have voting systems (where correctness is paramount over performance) developed on Linux?

    • by Xtravar ( 725372 )

      What? When Microsoft made IE more standards-compliant, everyone was happy even if it broke legacy applications/sites.

      You, sir, are making no sense.

      If Microsoft broke stuff to make their OS POSIX compliant, we'd all be really happy!

    • Re: (Score:3, Informative)

      by stevied ( 169 ) *

      The "workaround" is understanding how the platform you're targeting actually works rather than making guesses. fsync() and even fdatasync() have been around for ages and are documented. *NIX directories have always just been more or less lists of (name,inode_no) tuples, which is why hard links are part of the platform. There isn't really any magical connection between an inode and the directories it happens to be listed in.

      Ted knows this stuff inside and out and is almost ridiculously reasonable compared to

    • Re: (Score:3, Insightful)

      by ChaosDiscord ( 4913 ) *

      "The workaround is laughable -- 'call fsync(), and then wait(), wait(), wait(), for the Wizard to see you.'"

      The "workaround" has been the standard for decades! Twenty years ago when I was learning programming I was warned: Until you call fsync(), you have no guarantee that your data has landed on disk. If you want to be sure the data is on the disk, call fsync(). While it's a complication for application developers, the benefit is that it allows filesystem developers to make the filesystem faster. Tha

  • by LotsOfPhil ( 982823 ) on Thursday March 19, 2009 @12:56PM (#27258831)

    ...new solutions have been provided by Ted Ts'o to...

    That's General Ts'o to you!

  • I sit just me? (Score:3, Insightful)

    by IMarvinTPA ( 104941 ) <IMarvinTPA@@@IMarvinTPA...com> on Thursday March 19, 2009 @01:04PM (#27258961) Homepage Journal

    I sit just me, or would you expect that the change would only be committed once the data was written to disk under all circumstances?
    To me, it sounds like somebody screwed up a part of the POSIX specification. I should look for the line that says "During a crash, lose the user's recently changed file data and wipe out the old data too."

    IMarv

    • Re: (Score:3, Funny)

      by Em Emalb ( 452530 )

      Nope, not just you, I sit also.

    • Someone above says that the POSIX standard is fine, but that ext4 violates it. Here is his quote:
      "When applications want to overwrite an existing file with new or changed data [...] they first create a temporary file for the new data and then rename it with the system call - rename("

      It seems that ext4 renames the file first, and then writes the file up to 60 seconds later.

      • by renoX ( 11677 )

        No, POSIX doesn't guarantee a write before you do an fsync; an added rename doesn't change this.

        This situation is identical to read/write memory ordering: because of caches, different CPUs may see different values of a variable.
        Different architectures place different limits on how reads and writes may be reordered; with x86 it's not too bad, but with the Alpha, which can truly reorder things a lot, it becomes very difficult to put in all the needed memory barriers.

        IMHO, there is performance / usability tradeoff he

  • by victim ( 30647 ) on Thursday March 19, 2009 @01:05PM (#27258973)

    The workaround (flushing everything to disk before the rename) is a disaster for laptops or anything else which might wish to spin down a disk drive.

    The write-replace idiom is used when a program is updating a file and can tolerate the update being lost in a crash, but wants either the old or the new version to be intact and uncorrupted. The proposed sync solution accomplishes this, but at the cost of spinning up the drive and writing the blocks at each write-replace. How often does your browser update a file while you surf? Every cache entry? Every history entry? What about your music player? Desktop manager? All of these will spin up your disk drive.

    Hiding behind POSIX is not the solution. There needs to be a solution that supports write-replace without spinning up the disk drive.

    The ext4 people have kindly illuminated the problem. Now it is time to define a solution. Maybe it will be some sort of barrier logic, maybe a new kind of sync syscall. But it needs to be done.

    • Re: (Score:3, Insightful)

      by GMFTatsujin ( 239569 )

      If the issue is drive spin-up, how has the new generation of flash drives been taken into account? It seems to me that rotational drives are on their way out.

      That doesn't do anything for the contemporary generation of laptops, but what would the ramifications be for later ones?

    • Re: (Score:3, Informative)

      There needs to be a solution that supports write-replace without spinning up the disk drive.

      How do you intend to write to the disk drive... without spinning it up? Is this not what you're asking? If this is indeed your question, the answer is already "by using a battery-backed cache".

      BBH

  • by canadiangoose ( 606308 ) <djgraham&gmail,com> on Thursday March 19, 2009 @01:32PM (#27259381)
    If you mount your ext4 partitions with nodelalloc you should be fine. You will of course no longer benefit from the performance enhancements that delayed allocation brings, but at least you'll have all of your freaking data. I'm running Debian on Linux 2.6.29-rc8-git4, and so far my limited testing has shown this to be very effective.
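
    For example, an /etc/fstab entry using the option might look like this (the device and mount point are placeholders):

        /dev/sda2  /home  ext4  defaults,nodelalloc  0  2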
  • by DragonWriter ( 970822 ) on Thursday March 19, 2009 @02:00PM (#27259815)

    I'm a hobbyist, and I don't program system level stuff, essentially, at all anymore, but way back when I did do C programming on Linux (~10 years ago), ISTR that this (from Ts'o in TFA) was advice you couldn't go anywhere without getting hit repeatedly over the head with:

    if an application wants to ensure that data have actually been written to disk, it must call the function fsync() before closing the file.

    Is this really something that is often missed in serious applications?

    • Re: (Score:3, Informative)

      by Cassini2 ( 956052 )

      Calling fsync() excessively completely trashes system performance and usability. Essentially, operating systems have write back caches to speed code execution. fsync() disables the write back cache by writing data out immediately, and making your program wait while the flush happens. Modern computers can do activities that involve rapidly touching hundreds of files per second. Forcing each write to use an fsync() slows things down dramatically, and makes for a poor user experience.

      To make matters wors

  • Bad POSIX (Score:5, Interesting)

    by Skapare ( 16644 ) on Thursday March 19, 2009 @02:01PM (#27259823) Homepage

    Ext4, on the other hand, has another mechanism: delayed block allocation. After a file has been closed, up to a minute may elapse before data blocks on the disk are actually allocated. Delayed block allocation allows the filing system to optimise its write processes, but at the price that the metadata of a newly created file will display a size of 0 bytes and occupy no data blocks until the delayed allocation takes place. If the system crashes during this time, the rename() operation may already be committed in the journal, even though the new file still contains no data. The result is that after a crash the file is empty: both the old and the new data have been lost.

    Ext4 developer Ted Ts'o stresses in his answer to the bug report that Ext4 behaves precisely as demanded by the POSIX standard for file operations.

    If that is true, then to the extent that it is true, POSIX is "broken". Related changes to a file system really need to take place in an orderly way. Creating a file, writing its data, and renaming it are related. Letting the latter change persist while the former is lost is just wrong. Does POSIX really require this behavior, or just allow it? If it requires it, then IMHO POSIX is indeed broken. And if POSIX is broken, then companies like Microsoft are vindicated in their non-conformance.

  • Easier Fix (Score:4, Insightful)

    by maz2331 ( 1104901 ) on Thursday March 19, 2009 @02:11PM (#27260001)

    Why not just make the actual "flushing" process work primarily on in-memory cache data, including any "renames", "deletes", etc.?

    If any "writes" are pending, then the other operations should be done in the strict order in which they were requested. There should be no pattern possible where cache and file metadata can get out of sync with one another.

  • POSIX (Score:4, Insightful)

    by 200_success ( 623160 ) on Thursday March 19, 2009 @02:41PM (#27260439)
    If I had wanted POSIX-compliant behavior, I could have gotten Windows NT! (Windows was just POSIX-compliant enough to be certified, but the POSIX implementation was so half-assed that it was unusable in practice.) Just because Ext4 complies with the minimum requirements of the spec doesn't make it right, especially if it trashes your data.
  • Kirk McKusick spent a lot of time working out the right order to write metadata and file data in FFS and the resulting file system, FFS with Soft Updates, gets high performance and high reliability... even after a crash.

  • by tytso ( 63275 ) * on Thursday March 19, 2009 @10:04PM (#27264493) Homepage

    It's really depressing that there are so many clueless comments in Slashdot --- but I guess I shouldn't be surprised.

    Patches to work around buggy applications which don't call fsync() were around long before this issue got slashdotted, and before the Ubuntu Launchpad page got slammed with comments. In fact, I commented very early in the Ubuntu log that patches that detected the buggy applications and implicitly forced the blocks to disk were already available. Since then, both Fedora and Ubuntu have been shipping these workaround patches.

    And yet, people are still saying that ext4 is broken, and will never work, and that I'm saying all of this so that I don't have to change my code, etc ---- when in fact I created the patches to work around the broken applications *first*, and only then started trying to advocate that people fix their d*mn broken applications.

    If you want to make your applications such that they are only safe on Linux and ext3/ext4, be my guest. The workaround patches are all you need for ext4. The fixes have been queued for 2.6.30 as soon as its merge window opens (probably in a week or so), and Fedora and Ubuntu have already merged them into their kernels for their beta releases which will be released in April/May. They will slow down filesystem performance in a few rare cases for properly written applications, so if you have a system that is reliable, and runs on a UPS, you can turn off the workaround patches with a mount option.

    Applications that rely on this behaviour won't necessarily work well on other operating systems, and on other filesystems. But if you only care about Linux and ext3/ext4 file systems, you don't have to change anything. I will still reserve the right to call them broken, though.

  • by mr3038 ( 121693 ) on Friday March 20, 2009 @07:08AM (#27266671)

    POSIX specifies that closing a file does not force it to permanent storage. To get that, you MUST call fsync() [manpagez.com].

    So the required code to write a new file safely is:

    1. fp = fopen(...)
    2. fwrite(..., fp)
    3. fflush(fp)
    4. fsync(fileno(fp))
    5. fclose(fp)

    There is no performance problem, because fsync(fd) syncs only the requested file. However, that's the theory... use EXT3 and you'll quickly learn that fsync() effectively syncs the whole filesystem: no matter which file you ask it to sync, everything pending goes to disk. Obviously that is going to be really slow.

    Because of this, way too many software developers have dropped the fsync() call to make their software usable (that is, not too slow) with EXT3. The correct fix is to change all the broken software, and in the process that will make EXT3 unusable because of its slow performance. After that, EXT3 will be fixed or it will be abandoned. An alternative is to use fdatasync() instead of fsync() if the features of fdatasync() are enough. If I've understood correctly, EXT3 is able to do fdatasync() with acceptable performance.
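
    Using the sequence from above, the fdatasync() variant would be:

    1. fp = fopen(...)
    2. fwrite(..., fp)
    3. fflush(fp)
    4. fdatasync(fileno(fp))
    5. fclose(fp)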

    If any piece of software writes to disk without using either fsync() or fdatasync(), it's basically telling the system: the file I'm writing is not important; try to store it if you don't have better things to do.

"Mach was the greatest intellectual fraud in the last ten years." "What about X?" "I said `intellectual'." ;login, 9/1990

Working...