Ext4 Data Losses Explained, Worked Around
ddfall writes "H-Online has a follow-up on the Ext4 file system: last week's news about data loss with the Linux Ext4 file system is explained, and Ted Ts'o has provided new solutions to allow Ext4 to behave more like Ext3."
LOL: Bug Report (Score:5, Funny)
User: My data, it's gone!
EXT4:"Ext4 developer Ted Ts'o stresses in his answer to the bug report that Ext4 behaves precisely as demanded by the POSIX standard for file operations."
Solution: WORKS AS DESIGNED
Re:LOL: Bug Report (Score:5, Insightful)
This is the problem with new features - users have trouble with them until they fully understand and appreciate the advantages and disadvantages.
And also consider - ext4 is relatively new, so it will improve over time. If you want stability stick to ext3 or ext2. If you want a really stupid filesystem go FAT and prepare for a patent attack.
Re:LOL: Bug Report (Score:5, Insightful)
And also consider - ext4 is relatively new, so it will improve over time. If you want stability stick to ext3 or ext2.
QFT
The filesystem was first released towards the end of December 2008. The Linux distros that incorporated it offered it as an option, but the default for /root and /home was always EXT3.
In addition, this problem is not a week old like the article states. People have been discussing this problem on forums ever since mid-January, when the benchmarks for EXT4 were published and several people decided to try it out to see how it fares. I have been using EXT4 for my /root partition since January. Fortunately I haven't had any data loss, but if I do end up losing some data, I'd understand that since I have been using a brand new file-system which has not been thoroughly tested by users, nor has it been used on any servers that I know of.
Re: (Score:3, Insightful)
You have a separate partition for /root ? How large can the home folder of the root user be?
Re: (Score:3, Informative)
No, both of those are, implicitly, expected to be world readable, and at least usually for software that any user can run (to some degree of success). /root is the only place for root to put a local application (or any other files) that he doesn't want a user to be able to see at all.
Re: (Score:3, Insightful)
For many (most?) Unix admins, /root is just a nicer way to specify "/ filesystem" or "root filesystem". The path /root for root user's home directory is popular in Linux, but I never saw it in the Unixes I've used (but I don't know if that custom is a Linux invention.)
Re:LOL: Bug Report (Score:5, Insightful)
This is the problem with new features - users have trouble with them until they fully understand and appreciate the advantages and disadvantages.
Advantages: Filesystem benchmarks improve. Real performance... I guess that improves, too. Does anybody know?
Disadvantages: You risk data loss with 95% of the apps you use on a daily basis. This will persist until the apps are rewritten to force data commits at appropriate times, but hopefully not frequently enough to eat up all the performance improvements and more.
Ext4 might be great for servers (where crucial data is stored in databases, which are presumably written by storage experts who read the Posix spec), but what is the rationale for using it on the desktop? Ext4 has been coming for years, and everyone assumed it was the natural successor to ext3 for *all* contexts where ext3 is used, including desktops. I hope distros don't start using or recommending ext4 by default until they figure out how to configure it for safe usage on the desktop. (That will happen long before the apps are rewritten.) Filesystem benchmarks be damned.
Re:LOL: Bug Report (Score:4, Interesting)
For those of us who are not so familiar with the data loss issues surrounding EXT4, can someone please explain this? The first question that came to mind when I read that is "why would the average application need to concern itself with filesystem details?" I.e. if I ask OpenOffice to save a file, it should do that the exact same way whether I ask it to save that file to an ext2 partition, an ext3 partition, a reiserfs partition, etc. What would make ext4 an exception? Isn't abstraction of lower-level filesystem details a good thing?
Re:LOL: Bug Report (Score:5, Interesting)
The first question that came to mind when I read that is "why would the average application need to concern itself with filesystem details?"
They don't. Applications just need to concern themselves with the details of the APIs they use, and the guarantees those APIs do or don't provide.
The POSIX file APIs specify quite clearly that there is no guarantee that your data is on the disk until you call fsync(). The problem is with applications that assumed they could ignore what the specification said just because it always seemed to work okay on the file systems they tested with.
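To make that concrete, here is roughly what a spec-compliant save looks like. This is a minimal sketch (error handling omitted, filenames invented for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: safely replace "config" with new contents. */
void save_config(const char *buf, size_t len)
{
    int fd = open("config.new", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    write(fd, buf, len);            /* data may sit in the page cache indefinitely */
    fsync(fd);                      /* only after this returns is the data on disk */
    close(fd);
    rename("config.new", "config"); /* atomically swap in the new version */
}

Skip the fsync() and the standard promises nothing about whether the data or the rename hits the disk first.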
Re:LOL: Bug Report (Score:5, Insightful)
The first question that came to mind when I read that is "why would the average application need to concern itself with filesystem details?"
They don't. Applications just need to concern themselves with the details of the APIs they use, and the guarantees those APIs do or don't provide.
The POSIX file APIs specify quite clearly that there is no guarantee that your data is on the disk until you call fsync(). The problem is with applications that assumed they could ignore what the specification said just because it always seemed to work okay on the file systems they tested with.
Thanks for explaining that. In that case, I salute Mr. Ts'o and others for telling the truth and not caving in to pressure when they are in fact correctly following the specification. Too often people who are correct don't have the fortitude to value that more than immediate convenience, so this is a refreshing thing to see. Perhaps this will become the sort of history with which developers are expected to be familiar.
I imagine it will take a lot of work but at least with Free Software this can be fixed. That's definitely what should happen, anyway. There are times when things just go wrong no matter how correct your effort was; in those cases, it makes sense to just deal with the problem in the most hassle-free manner possible. This, however, is not one of those times. Thinking that you can selectively adhere to a standard and then claim that you are compliant with that standard is just the sort of thing that really should cause problems. Correcting the applications that made faulty assumptions is therefore the right way to deal with this, daunting and inconvenient though that may be.
Removing this delayed-allocation feature from ext4 or placing limits on it that are not required by the POSIX standard is definitely the wrong way to deal with this. To do so would surely invite more of the same. It would only encourage developers to believe that the standards aren't really important, that they'll just be "bailed out" if they fail to implement them. You don't need any sort of programming or system design expertise to understand that, just an understanding of how human beings operate and what they do with precedents that are set.
Re:LOL: Bug Report (Score:4, Informative)
It's a fairly typical way of trying to achieve something loosely approximating transactional behavior with respect to updates to the file in question, without relying on transactional file system semantics.
Re: (Score:3, Insightful)
Why isn't this being considered as the solution? Other major OSes have implemented basic atomic transactions in their filesystems successfully, so why not Linux?
Re: (Score:3, Informative)
Is writing a new file, and then renaming it over an existing file really a 'typical workload'???
YES!!!!!!
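For reference, here is the idiom under discussion as most applications actually write it (a fragment, with data and len assumed to hold the new contents; note there is no fsync anywhere):

int fd = open("prefs.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
write(fd, data, len);          /* new data may still be only in cache... */
close(fd);                     /* ...and close() does not flush it */
rename("prefs.tmp", "prefs");  /* but the rename can reach the journal first */

On ext3 with data=ordered this happens to be safe; on ext4 with delayed allocation it is the recipe for the zero-length files everyone is complaining about.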
Re:LOL: Bug Report (Score:5, Insightful)
1) Modern filesystems are expected to behave better than POSIX demands.
2) POSIX does not cover what should happen in a system crash at all.
3) The issue is not about saving data, but the atomicity of updates so that either the new data or the old data would be saved at all times.
4) fsync is not a solution, because it forces the operation to complete *now*, which hurts write performance and cache coherence, drains laptop batteries, causes excessive SSD wear, and more.
We don't need reliable data-on-disk-now, we need reliable old-or-new data without using a sledgehammer of fsync.
Bollocks (Score:3, Interesting)
A filesystem is not a Database Management System. Its purpose is to store files. If you want transactions, use a DBMS. There are plenty out there which use fsync correctly. Try SQLite.
Re:LOL: Bug Report (Score:5, Insightful)
1) Modern filesystems are expected to behave better than POSIX demands.
2) POSIX does not cover what should happen in a system crash at all.
3) The issue is not about saving data, but the atomicity of updates so that either the new data or the old data would be saved at all times.
4) fsync is not a solution, because it forces the operation to complete *now*, which hurts write performance and cache coherence, drains laptop batteries, causes excessive SSD wear, and more.
We don't need reliable data-on-disk-now, we need reliable old-or-new data without using a sledgehammer of fsync.
1. POSIX is an API. It tries not to force the filesystem into being anything at all. So, for instance, you can write a filesystem that waits to do writes more efficiently to cut down on the wear of SSDs.
2. Ext3 has a max 5 second delay. That means this bug exists in Ext3 as well.
3. If you have important data that if not written to the hard drive will cause catastrophic failure, then you use the part of the API that forces that write.
4. Atomicity does not guarantee the filesystem be synchronized with cache. It means that during the update no other process can alter the affected file and that after the update the change will be seen by all other processes.
We don't need a filesystem that sledgehammers each and every byte of data to the hard drive just in case there is a crash. What we DO need is a filesystem that can flexibly handle important data when told it is important, and less important data very efficiently.
What you are asking is that the filesystem be some kind of sentient, all-knowing being that can tell whether data is important or not, and then write important data immediately and non-important data efficiently. I think it is a little better to have the application be the one that knows when it's dealing with important data.
Re:LOL: Bug Report (Score:4, Informative)
3. If you have important data that if not written to the hard drive will cause catastrophic failure, then you use the part of the API that forces that write.
You completely missed the point. The new data isn't important; it could be lost and nobody would care. The troublesome part is that you lose the old data too. If you lost the last 5 minutes of changes in your KDE config, that would be a non-issue. What actually happens is that you lose not just the last few changes but your complete config: it ends up as 0-byte files, which is a state that the filesystem never had.
Re:LOL: Bug Report (Score:5, Insightful)
You don't understand the problem.
You are wrong when you say EXT3 has this problem. It does not have it. If the EXT3 system crashes during those 5 seconds, you either get the old file or the new one. For EXT4, if it crashes, you can get a zero-length file, with both the old and new data lost.
The long delay is irrelevant and is confusing people about this bug. In fact the long delay is very nice in EXT4 as it means it is much more efficient and will use less power. I don't really mind if a crash during this time means I lose the new version of a file. But PLEASE don't lose the old one as well!!! That is inexcusable, and I don't care if the delay is .1 second.
Re:LOL: Bug Report (Score:4, Interesting)
Yes I would like that as well. It would remove the annoying need to figure out a temp filename and to do the rename.
One suggestion was to add a new flag to open. I think it might also work to change O_CREAT|O_TRUNC|O_WRONLY to work this way, as I believe this behavior is exactly what any program using that is assuming.
f = creat(filename) would result in an open file that is completely hidden from every other process. Anybody else attempting to open filename will either get the old file or no file. This should be easy to implement, as the result should be similar to unlinking an already-opened file.
close(f) would then atomically rename the hidden file to filename. Anything that already has filename open would keep seeing the old file, anything that opens it afterwards will see the new file.
If the program crashes without closing the file then the hidden file goes away with no side effects. It might also be useful to have a call that does this, so a program could abandon a write. Not sure what call to use for that.
Calling fsync(f) would act like close() and force the rename, so after fsync it is exactly like current creat().
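Usage under this proposal might look like the following sketch. O_ATOMIC is a made-up flag, invented here purely to illustrate the suggested semantics; nothing like it exists in any current kernel:

/* Hypothetical: O_ATOMIC does not exist anywhere today. */
int fd = open("config", O_WRONLY | O_CREAT | O_ATOMIC, 0644);
write(fd, data, len);   /* invisible to every other process */
close(fd);              /* atomically replaces the old "config" */
/* crash before close(): the hidden file evaporates, the old config survives */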
Re:LOL: Bug Report (Score:5, Informative)
They don't. Applications just need to concern themselves with the details of the APIs they use, and the guarantees those APIs do or don't provide.
Yup, and the problem has existed with KDE startup for years. I remember the startup files getting trashed when Mandrake first came out and I tried KDE for long enough to get hooked, and it's happened to me a few times a year ever since with every filesystem I've used. I just make my own backups of the .kde directory and fix this manually when it happens. I'm pretty good at this restore by now. Hopefully this bug in KDE will get fixed now that it is causing the KDE project such great embarrassment. I had a silent wish that Ts'o would increase the default commit interval to 10 minutes when the first defenders of the KDE bug started squawking, but he was too gracious for that.
PS I use a lot of experimental graphics drivers for work, hence lockups during startup are common enough that I probably see this KDE bug more than most KDE users. But they really violate every rule of using config files: 1st, open with the minimum permission needed (in this case read-only) unless a write is absolutely necessary. 2nd, only update a file when it needs updating. 3rd, when updating a config file, make a copy, commit it to disk, and then replace the original, making sure file permissions and ownership are unchanged; then commit the rename if necessary.
PS2 Those computer users saying an fsync will kill performance need a cluebat applied to them by the nearest programmer. 1st, there will be no fsyncs of config files at startup once the KDE startup is fixed. 2nd, fsyncs on modern filesystems are pretty fast (ext3 is the rare exception to that norm); this will be unnoticeable when you apply a settings change. 3rd, these types of programming errors are not the norm; I've graded first and second year computer science classes, and each of the three major mistakes made here would have lost you 20-30% of your score for the assignment.
Re:LOL: Bug Report (Score:5, Insightful)
fsyncs have other nasty side effects besides performance. For example, in Firefox 3, places.sqlite is fsynced after every page is loaded. For a laptop user, this behavior is unacceptable, as it prevents the disks from staying spun down (not to mention the infuriating whine it creates to spin the disk up after every or nearly every page load). The use of fsync in Firefox 3 has actually caused some people (myself included) to mount ~/.mozilla as tmpfs and just write a cron job to write changed files back to disk once every 10 minutes.
So, while I'm all for applications using fsync when it's really needed, the last thing I'd like to see is every application on the planet sprinkling its code with fsync "just to be sure".
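For the curious, the tmpfs workaround mentioned above only takes a couple of lines; the paths, size, and interval here are examples to adapt, not a recommendation:

# /etc/fstab: keep the Mozilla profile in RAM
tmpfs  /home/you/.mozilla  tmpfs  size=128m,mode=0700  0  0

# crontab entry: copy it back to real disk every 10 minutes
*/10 * * * *  rsync -a --delete /home/you/.mozilla/ /home/you/.mozilla.disk/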
Re:LOL: Bug Report (Score:5, Interesting)
KDE isn't fixed right now. Additionally, KDE is not the only application that generates lots of write activity. I work with real-time systems, and write performance on data collection systems is important.
I did some benchmarks on the ext3 file system, the ext4 file system without the patch, and the ext4 file system with the patch. Code that followed the open(), write(), close() sequence was 76% faster than code with fsync(). Code that followed the open(), write(), close(), rename() sequence was 28% faster than code that followed the open(), write(), fsync(), close(), rename() sequence. Additionally, the benchmarks were not significantly affected by which file system was used (ext3, ext4, or ext4 patched). You can find the spreadsheet and the discussion at the Launchpad discussion. [launchpad.net]
Major Linux backup utilities like tar, gzip, and rsync don't use fsync as part of normal operations. Of the three, only tar uses fsync, and only when verifying that data is physically written to disk. In that situation, it writes the data, calls fsync, calls ioctl(FDFLUSH), and then reads the data back. Strictly speaking, that is the only way to make sure the file is written to disk and is readable.
Finally, as Theodore Ts'o has pointed out, if you really want to make sure the file is saved to disk, you also have to fsync() the directory too. I have never seen anyone do that, as part of a normal file save. Most C programming textbooks simply have fopen, fwrite, fclose as the recommended way to save files. Calling fsync this often is unusual for most C programmers.
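For completeness, syncing the directory is only a few more lines. A sketch (error handling omitted, path invented for illustration):

int dirfd = open("/home/you/.config", O_RDONLY | O_DIRECTORY);
fsync(dirfd);   /* commits the directory entry itself, i.e. the rename */
close(dirfd);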
I would hate to be in your programming class. You're enforcing programming standards that aren't followed by key Linux utilities, aren't in most textbooks, and aren't portable to non-Linux file systems.
If you require your students to fsync() the file and the directory, as part of a normal assignment, you are requiring them to do things that aren't done by any Linux utility out there. Further, if you are that paranoid, you better follow the example from the tar utility, and after the fsync completes, read all the data back to verify it was successfully written.
Re:LOL: Bug Report (Score:5, Interesting)
From the explanations I received and some reading I've done, I don't think the data is just getting "thrown away" so that isn't really a valid question. The issue seems to be that unless fsync is called, the changes requested by the application may happen in a sequence that is other than what the application programmer expected. The example I saw in this discussion involved first writing data to a file and then renaming it soon afterwards. If I understand this correctly, the application is assuming that the rename cannot possibly happen before the writing of the data is done even though the specification has no such requirement. If the application needs this to happen in the order in which it was requested, it needs to write the data, then call fsync, then rename the file. You could probably fill a library with what I don't know about low-level filesystem details, so please correct me if I have misunderstood this.
The example I found in the Wikipedia entry on ext4 [wikipedia.org] was different. That one involved data loss because the application updates/overwrites an existing file and does not call fsync and then the system crashes. The Wiki article states that this leads to undefined behavior (which, afaik, is correct per the spec). The article also states that a typical result is that the file was set to zero-length in preparation for being overwritten but because of the crash, the new data was never written so it remains zero-length, causing the loss of the old version of the file. Under ext3 you would usually find either the old version of the file or the new version.
What I don't understand and hope that a more knowledgable person could explain is why this can't be done a slightly different way. This is where I can apply reason to come up with something that sounds preferable to me but I simply don't have the background knowledge of filesystems to understand the "why". If the overwrite of the file is delayed, why isn't the truncation of the file to zero-length also delayed? That is, instead of doing it this way:
Step 1: Truncate file length to zero in preparation of overwriting it.
Step 2: Delay the writing of the new data for performance reasons.
Step 3: After the delay has elapsed, actually write the data to the disk.
Why can't it be done this way instead?
Step 1: Delay the truncation of the file length to zero in preparation of overwriting it.
Step 2: Delay the writing of the new data.
Step 3: After the delay has elapsed, set the file length to zero and immediately write the new data, as a single operation if that is possible, or as one operation immediately followed by the other.
That way if there is a crash, you'd still get either the old version or the new one and not a zero-length file where data used to be. The only disadvantage I can see is that this might continue to enable developers to make assumptions that are not found in the standard because the buggy behavior ext4 is now exposing may continue to work. If there's no technical reason why it cannot be done that way, perhaps the bad precedent alone is a good reason to either not handle it this way or to change the spec.
Re: (Score:3, Insightful)
Glossing over some details, what is happening is closer to this:
The goal is to replace config with a new version. The programmer is essentially doing this:
1. Write the new version to config.new.
2. Close config.new.
3. rename(config.new, config).
The goal is that when you replace config, you're replacing it with a guaranteed complete version, config.new. Assuming it happens in this order (and that step 3 is atomic; it happens or doesn't, never partially), if you crash at any point you are left with either the complete old config or the complete new one.
Re: (Score:3, Informative)
The mechanism (fsync) has been around for ages; it's just that applications haven't been using it.
Re: (Score:3, Informative)
You don't risk any data loss, ever, if you shut down your system properly. The system will sync the data to disk as expected and everything will be peachy. You risk data loss if you lose power or otherwise shut down at an inopportune time and the data hasn't been sync'd to disk yet.
That is to say, 99% of people who use their computers properly won't have a problem.
Also note, the software you use should be doing something like:
loop: write some data, write some more data, finish writing data, fsync the data
Re: (Score:3, Informative)
ARRGH! This has nothing to do with the data being written "soon".
The problem with EXT4 is that people expect the data to be written before the rename!
Fsync() is not the solution. We don't want it written now. It is ok if the data and rename are delayed until next week, as long as the rename happens after the data is in the file!
Re:LOL: Bug Report (Score:5, Insightful)
Rubbish. Sorry, but if the syncs were implicit, app developers would just be demanding a way to turn them off most of the time because they were killing performance.
Re:LOL: Bug Report (Score:4, Insightful)
This is actually even stupider for flash drives. There is essentially zero seek time on a flash drive, so, in theory, it shouldn't really matter how much you write at any given time (since the only delay should be how long it takes to actually write the cells).
In addition, presuming reasonable wear-leveling algorithms (which should be implemented in the device controller, not in any sort of software), every bit of math I've seen says that for any realistic amount of data writes the flash drives will last substantially longer than any current physical drives (last I saw, it was about 30 years if you wrote every sector on the disk once a day, scaling down as writes increase). Even writing 6 times the volume of the drive per day, that's 5 years, which is a fairly long time for consumer-grade physical drives; and unlike a physical drive, even when it can no longer be written, it can still be read, so you can just clone it over to a new drive.
File systems will definitely have to change for flash drives, but delaying writes probably isn't going to be the way to do it, especially since there's no need to do so.
Re:LOL: Bug Report (Score:5, Interesting)
The POSIX standard is just fine. The problem is application assumptions that aren't up to snuff.
Read the qmail source code sometime. Every time the author wants to assure himself that data has been written to the disk, it calls fsync.
If you don't, you risk losing data. Plain and simple.
Re:LOL: Bug Report (Score:5, Funny)
Microsoft Patent
Re: (Score:2)
If an application decides to check the name of the file system and if the name is "ext4" it erases everything in your home directory, should that be considered a file system bug too?
No, I'd call that malice.
Re: (Score:3, Insightful)
Ext4 is still alpha-ish, and declared as such.
Any *user* who trusts production data to an experimental filesystem is already too stupid to have the right to gripe about losing said data.
Re: (Score:2, Informative)
Anyhow, ZFS is RAID, LVM, and a filesystem rolled up into one, so keeping the patch up to date with Linux changes could be a bit of work.
Re:LOL: Bug Report (Score:5, Insightful)
The application programmers aren't at fault here, the POSIX spec is. A filesystem is essentially a hierarchical database, yet POSIX doesn't include a way to make atomic updates to it. The only tool provided is fsync, which kills performance if used. And even with fsync some things - such as rewriting a configuration file - are either outright impossible or complex and fragile.
The real solution is to come up with a transactional API for the filesystem. Until that's done, problems like this will persist. Calling fsync - which forces a disk write - or playing around with temporary files isn't reasonable when all you want to do is make sure that the file will be updated properly or left alone.
The alternative is to have every program call fsync constantly, which not only kills performance, but ironically enough also negates some of Ext4's advantages, such as delayed block allocation, since it essentially disables write caching. And it doesn't work if you are doing more complex things, such as, say, mass renaming files in a directory; you have no way of ensuring that either they are all renamed, or none are.
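To make the wish concrete, a transactional API might look roughly like this sketch; every identifier here is invented for illustration, since no such API exists in POSIX or Linux:

/* Hypothetical: fs_txn_begin/renameat_txn/fs_txn_commit do not exist. */
fs_txn_t *txn = fs_txn_begin(AT_FDCWD);
renameat_txn(txn, "photo-001.jpg", "vacation-001.jpg");
renameat_txn(txn, "photo-002.jpg", "vacation-002.jpg");
fs_txn_commit(txn);   /* after a crash: either all renames applied, or none */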
Those who fail to learn the lessons of history.... (Score:5, Insightful)
FTFA, this is the problem:
Ext4, on the other hand, has another mechanism: delayed block allocation. After a file has been closed, up to a minute may elapse before data blocks on the disk are actually allocated. Delayed block allocation allows the filing system to optimise its write processes, but at the price that the metadata of a newly created file will display a size of 0 bytes and occupy no data blocks until the delayed allocation takes place. If the system crashes during this time, the rename() operation may already be committed in the journal, even though the new file still contains no data. The result is that after a crash the file is empty: both the old and the new data have been lost.
And now my question: Why did the Ext4 developers make the same mistakes Reiser and XFS both made (and later corrected) years ago? Before you get to write any filesystem code, you should have to study how other people have done it, including all the change history. Seriously.
Those who fail to learn the lessons of [change] history are doomed to repeat it.
Re: (Score:2, Insightful)
This problem is just something that slipped through the cracks, and I'm sure the originator of this bug is kicking himself over it.
Re: (Score:3, Insightful)
Before you get to write any filesystem code, you should have to study how other people have done it...
No. Being innovative means being original, and that means taking new and different paths. Once you have seen somebody else's path, it is difficult to go out on your own original path. That is why there are alpha and beta stages to a project: so that watchful eyes can find the mistakes that you will undoubtedly make, even those that have been made before you.
Re: (Score:2)
Making the same mistakes someone else made is NOT being innovative, it's being stupid or ignorant... or a number of other predicate adjectives.
Innovation is using something in a new way, not making the same mistake in a new way. That's still considered a mistake, and if it can be shown that you should have known about the mistake from someone else making it, you're still "making the same mistake" and not "innovating." Not to say you're not going to make mistakes or that you'll know everything, but it's still a mistake.
Re: (Score:3, Insightful)
Standing on the shoulders of giants is usually the best way to make progress.
Sure, if the only direction you want to go is the direction that the giant is already moving. Doesn't help you get anywhere else, though.
Re: (Score:2)
only so long as they apply to the direction you are going. Not all mistakes by others apply to every direction - in fact, most probably don't.
That doesn't mean that "Lessons Learned" aren't useful - just that they're not always applicable.
No kidding (Score:5, Insightful)
All the stuff with Ext4 strikes me as amazingly arrogant, and ignorant of the past. The issue that FS authors (well, authors of any system programs/tools/etc.) need to understand is that your tool being usable is the #1 important thing. In the case of a file system, that means that it reliably stores data on the drive. So, if you do something that really screws that over, well then you probably did it wrong. Doesn't matter if you fully documented it, doesn't matter if it technically "follows the spec"; what matters is that it isn't usable.
I mean I could write a spec for a file system that says "No write is guaranteed to be written to disk until the OS is shut down, everything can be cached in RAM for an indefinite amount of time." However that'd be real flaky and lead to data loss. That makes my FS useless. Doesn't matter if it is well documented, what matters is that the damn thing loses data on a regular basis.
I'd give these guys more credit if I was aware of any other major OS/FS combo that did shit like this, but I'm not. Linux/Ext3 doesn't, Windows/NTFS doesn't, OS-X/HFS+ doesn't, Solaris/ZFS doesn't, etc. Well that tells me something. That says that the way they are doing things isn't a good idea. If it is causing problems AND it is something else nobody else does, then probably you ought not do it.
This is just bad design, in my opinion.
Re: (Score:3, Insightful)
It does reliably store data on the drive when that data has been properly synchronized by the application's author. The data that is lost is what has been sent to a filehandle but not yet synchronized when the system loses power or crashes.
The FS isn't the problem, but it is exposing problems in applications. If you need your FS to be a safety net for such applications, nobody is taking ext3 away just because ext4 is available. If you want the higher performance of ext4, buy a damn UPS already.
Re:No kidding (Score:5, Informative)
I don't think you have it right.
On Ext3 with "data=ordered" (a default mount option), if one writes the file to disk, and then renames the file, ext3 will not allow the rename to take place until after the file has been written to disk.
Therefore if an application that wants to change a file uses the common pattern of writing to a temporary file and then renaming (the renaming is atomic on journaling file systems), if the system crashes at any point, when it reboots the file is guaranteed to be either the old version or the new version.
With Ext4, if you write a file and then rename it, the rename can happen before the write. Thus if the computer crashes between the rename and the write, on reboot the result will be a zero byte file.
The fact that the new version of the file may be lost is not the issue. The issue is that both versions of the file may be lost.
The end result is that the write-and-rename method of ensuring atomic updates to files does not work under Ext4.
A new mount option that forces the rename to come after the data is written to disk is being added. Once that is available, the problem will be gone if you use that mount option. Hopefully it will be made a default mount option.
Re: (Score:3, Informative)
But if the application syncs the file, the new data is written to disk.
This wastes time and performance, and for most files is unneeded.
There are not only "important" and "unimportant" files, there are also "typical" files.
We don't want to lose them, but who cares if recent changes are lost.
Take for example a KDE config file. I am willing to risk all changes made to it since boot (I generally leave my computer off at night, so this is 12 or so hours). I do not want to lose all of my changes since install.
A bad design that it is used everywhere (Score:5, Informative)
"No write is guaranteed to be written to disk until the OS is shut down, everything can be cached in RAM for an indefinite amount of time." However that'd be real flaky and lead to data loss. That makes my FS useless. Doesn't matter if it is well documented, what matters is that the damn thing loses data on a regular basis.
It turns out that all modern operating systems work exactly like that. In ALL of them you need to use explicit synchronization (fsync and friends) to get a notification that your data has really been written to disk (and that's all you get, a notification, because the system could oops before fsync finishes). You can also mount your filesystem as "sync", which sucks.
Journaling and COW/transaction-based filesystems like ZFS only guarantee the filesystem's integrity, not that your data is safe. It turns out that Ext3 has the same problem; it's just that the window is smaller (5 seconds). And I wouldn't bet that HFS and ZFS don't have the same problem (btrfs is COW and transaction-based, like ZFS, and has the same problem).
Welcome to the real world...
Re:A bad design that it is used everywhere (Score:5, Informative)
The Ext3 5 seconds thing is true, but that is not the important difference.
On Ext3, with the default mount options, if one writes a file to disk and then renames the file, the write is guaranteed to come before the rename. This can be used to ensure atomic updates to files, by writing a temporary copy of the file with the desired changes and then renaming it.
On Ext4, if one writes a file to the disk, and then renames the file, the rename can happen first. The result of this is that it is not possible to ensure atomic updates to files unless one uses fsync between the writing and the renaming. However, that would hurt performance, since fsync will force the file to be committed to disk right now, when all that is really important is that it is committed to disk before the rename is.
Thankfully the Ext4 module will be gaining a new mount option that will ensure that a file is written to disk before the renaming occurs. This mount option should have no real impact on performance, but will ensure the atomic update idiom that works on Ext3 will also work on Ext4.
Re: (Score:3, Insightful)
There's a ton of software out there that uses the "write to new file with temporary name, then rename it to the final name" pattern, much of it written before Ext4 (or Ext3, or Ext) was designed, and rather a lot of it written before most of the folks on the Linux Kernel mailing list were even out of elementary school. This is a well-established method for reliably updating files, and it works, or fails gracefully, on almost every filesystem implementation from 1976 to the present day - except for Ext4.
Re: (Score:3, Informative)
The issue that FS authors, well any authors of any system programs/tools/etc need to understand is that your tool being usable is the #1 important thing.
Part of usability is performance. This is a significant performance improvement.
So, if you do something that really screws that over, well then you probably did it wrong. Doesn't matter if you fully documented it, doesn't matter if it technically "follows the spec" what matters is that it isn't usable.
The real problem here is that application developers were relying on a hack that happened to work on ext3, but not everywhere else.
Let me ask you this -- should XFS change the way it behaves because of this? EXT4 is doing exactly what XFS has done for decades.
I mean I could write a spec for a file system that says "No write is guaranteed to be written to disk until the OS is shut down, everything can be cached in RAM for an indefinite amount of time." However that'd be real flaky and lead to data loss.
No, that's actually precisely what the spec says, with one exception: You can guarantee it to be written to disk by calling fsync.
I'd give these guys more credit if I was aware of any other major OS/FS combo that did shit like this, but I'm not.
Only because you haven't looked.
Re: (Score:3, Insightful)
I just posted in the wrong thread. Synopsis:
I made a lot of money back in the '90s repairing NTFS installs. The similarity between NTFS back then and EXT4 now is that they are/were young file systems.
Give Ted and company a break. Let him get the damn thing fixed up (I have plenty of faith in Ted). Hell, I even remember losing an EXT3 file system back when it was fresh out of the gate. And I'm sure there's plenty who could say the same for all those you listed, including ZFS.
And your comment about extended data caching describes how every modern OS already behaves.
Re:Those who fail to learn the lessons of history. (Score:2, Informative)
As explained in the article - he hasn't made a mistake. The behaviour of ext4 is perfectly compatible with the POSIX standard.
man fsync
Re: (Score:3, Interesting)
Ext4 *is* better [phoronix.com], and probably because it benefits from the wiggle room provided by the specifications. The question is if you accept the tradeoff between performance and security. I choose performance, because my system doesn't crash that often.
Re: (Score:3, Insightful)
A few percent performance difference will be easily wiped away when the filesystem erases an important file that one time a year when a snowstorm knocks your power out.
Re: (Score:3, Interesting)
Re:Those who fail to learn the lessons of history. (Score:3, Funny)
They tried to, but history was just a 0-byte file
Re: (Score:2)
rename completes before the write (Score:5, Insightful)
Ext4 developer Ted Ts'o stresses in his answer to the bug report that Ext4 behaves precisely as demanded by the POSIX standard for file operations.
I couldn't disagree more:
When applications want to overwrite an existing file with new or changed data [...] they first create a temporary file for the new data and then rename it with the system call - rename(). [...] Delayed block allocation allows the filing system to optimise its write processes, but at the price that the metadata of a newly created file will display a size of 0 bytes and occupy no data blocks until [up to 60 seconds later].
Application developers reasonably expect that writes to the disk which happen far apart in time will happen in order. If I write to a file and then rename the file, I expect that the rename will not complete significantly before the write. Certainly not 60 seconds before the write. It seems dead obvious, at least to me, that the update of the directory entry should be deferred until after ext4 flushes that part of the file written prior to the change in the directory entry.
Re:rename completes before the write (Score:4, Interesting)
behaves precisely as demanded by the POSIX standard
Application developers reasonably expect
Apples and oranges. POSIX != "what app developers reasonably expect".
Of course you have a point insofar as that just pointing to POSIX and saying it's a correct implementation of the spec is not enough, but let's be clear here that one of these things is not like the other.
Re: (Score:3, Informative)
That sounds like a reasonable assumption, but it is certainly not reasonable to write code that depends on it. 60 seconds is an eternity for a computer, but so is a second. Therefore the fact that 60 seconds is much longer than 5 seconds changes nothing: code that depends on the timing is broken either way.
Re:rename completes before the write (Score:4, Insightful)
The right time being the hundredths of a second between the commit of the file data and the commit of the directory data, not 60 seconds.
And if you fsync'd, the right time would be zero, on either ext3 or ext4. Or XFS, for that matter.
If I fsync after every write, I can get reliability in ext2.
No you can't. Reliability in ext2 would force you to sync not just your file, but whole directory structures -- and even then, you'd only be safe until something else starts writing.
I put up with the performance hit from ext3 and ext4 because I want the reliability in the filesystem instead of having to build it into every part of every application.
Too late.
All the journaling guarantees is that if you lose power, you won't have to fsck -- you'll get a filesystem which is internally consistent. Oh, and it also guarantees that you won't see circular directory entries, or an entire directory falling off the face of the planet, and other nastiness.
Whether it's consistent with respect to your application is completely outside the scope of the FS journaling, and is the responsibility of your application. Put it in a library, use a database, whatever -- but it's not the filesystem's fault that you failed to read the spec, nor is it very smart of you to code to ext3 instead of POSIX.
Re: (Score:3, Informative)
It is not about losing the data of the pending write; it is about losing data already written, because the operations complete in a different order than they were issued.
the workaround is bad design (Score:3, Insightful)
Short version: "We're sorry we changed something that worked and everyone was used to, but hey -- it's compliant with a standard." If this were Microsoft, we'd give them a healthy helping of humble pie, but because it's Linux and the magic word "POSIX" gets used, I'm sure we'll forgive them for it. The workaround is laughable -- "call fsync(), and then wait(), wait(), wait(), for the Wizard to see you." How about writing a filesystem that actually does journaling in a reliable fashion, instead of finger-pointing after the user loses data due to your snazzy new optimization and saying "The developer did it! It wasn't us, honest." Microsoft does it and we tar and feather them, but the guys making the "latest and greatest" Linux feature we salute?
We let our own off with heinous mistakes, while professionals who do the same thing we hang, simply because they dared to ask to be paid for their effort. Lame.
Re:the workaround is bad design (Score:5, Funny)
But... those of us who learned the Ancient And Most Wise ways always triple-sync. We also sacrifice Peeps and use red food colouring in voodoo ceremonies (hey, it really is blood, so it should work) to keep the hardware from failing.
On next week's Slashdot, there will be a brief tutorial on the right way to burn a Windows CD at the stake, and how to align the standing stones known as RAM Chips to points of astronomical significance.
Re: (Score:2, Interesting)
No, we don't salute them. If you ask me, no matter what Ted Ts'o says about it complying with the POSIX standard, sorry, but it's a bug if it causes known, popular applications to seriously break, IMHO.
Broken is broken, whether we're talking about Ted Ts'o or Microsoft.
Dunno (Score:5, Insightful)
but if you want a write-later file system, shouldn't it be restricted to hardware that can preserve it?
I understand that doing writes immediately when requested leads to performance degradation, but that is why business systems which defer writes to disk only do so when the hardware can guarantee it. In other words, we have a battery-backed cache; if the battery is low or nearing end of life, the cache is turned off and all writes are made when the data changes.
Trying to make performance gains to overcome limitations of the hardware never wins out.
Re: (Score:3, Informative)
Without write-back (that is, delaying writes and keeping them in a cache), you lose elevator sorting. No elevator sorting makes heavy drive usage ridiculously slower than with it.
You can't re-sort and organize your disk activity without the ability to delay the data in a pool.
The difference between EXT3 and EXT4 is not whether the data gets written immediately -- neither does that. The difference is how long they wait. EXT4 finally delivers major power savings by delaying writes until necessary, so the drive can stay spun down longer.
Re: (Score:3, Insightful)
Excuse my language, but why the fuck are those "common desktop programs" writing and renaming files all the time in the first place?
Re: (Score:2)
wait(), wait(), wait(), for the Wizard to see you
There's no place like /home.
There's no place like /home.
There's no place like /home.
Re: (Score:2, Informative)
We let our own off with heineous mistakes while professionals who do the same thing we hang simply because they dared to ask to be paid for their effort. Lame.
Is Ted Ts'o not professional? Does he not get paid? Ts'o is employed by the Linux Foundation, on leave from IBM. Free Software does not mean volunteer-made software!
Re:the workaround is bad design (Score:4, Insightful)
Actually, no.
Microsoft runs a proprietary show where they 'set the standard' themselves. Which basically means 'there is no standard except how we do it'.
Linux, however, tries to adhere to standards. When it turns out that something doesn't adhere to standards, it gets fixed.
Another problem is that most users of proprietary software on a proprietary OS don't have the sources to the software they use. So if the OS fixes something that was previously broken, but the software version in use is 'no longer supported', the 'fix' in the OS breaks the user's software and the user has no way to fix it.
THIS is why a) Microsoft can't ever truly fix something and b) using proprietary software screws over the user.
Or would you rather have OSS software do the same as proprietary software vendors and work around problems forever without ever fixing them? Saw that shiny 'run in IE7 mode' button in IE8? That's what you'll get...
Re:the workaround is bad design (Score:5, Insightful)
If this were Microsoft, we'd give them a healthy helping of humble pie, but because it's Linux and the magic word "POSIX" gets used, I'm sure we'll forgive them for it.
You must be reading a different slashdot than I am. The popular opinion I see is that this is very bad design. If the spec allows this behavior, it's time to revisit the spec.
Re: (Score:2)
If Microsoft simultaneously sacrificed backwards compatibility and correctly implemented a standard, we'd probably be left completely speechless.
voting (Score:4, Funny)
So is this why we can't have voting (where correctness is paramount over performance) systems developed on Linux?
Re: (Score:2)
What? When Microsoft made IE more standards-compliant, everyone was happy even if it broke legacy applications/sites.
You, sir, are making no sense.
If Microsoft broke stuff to make their OS POSIX compliant, we'd all be really happy!
Re: (Score:3, Informative)
The "workaround" is understanding how the platform you're targeting actually works rather than making guesses. fsync() and even fdatasync() have been around for ages and are documented. *NIX directories have always just been more or less lists of (name,inode_no) tuples, which is why hard links are part of the platform. There isn't really any magical connection between an inode and the directories it happens to be listed in.
Ted knows this stuff inside and out and is almost ridiculously reasonable compared to most of the people piling onto him.
Re: (Score:3, Insightful)
"The workaround is laughable -- 'call fsync(), and then wait(), wait(), wait(), for the Wizard to see you.'"
The "workaround" has been the standard for decades! Twenty years ago when I was learning programming I was warned: Until you call fsync(), you have no guarantee that your data has landed on disk. If you want to be sure the data is on the disk, call fsync(). While it's a complication for application developers, the benefit is that it allows filesystem developers to make the filesystem faster. Tha
Show some respect! (Score:5, Funny)
That's General Ts'o to you!
Re: (Score:2, Funny)
"what's a matter, colonel? CHICKEN?"
sorry.
I sit just me? (Score:3, Insightful)
I sit just me, or would you expect that the change would only be committed once the data was written to disk under all circumstances?
To me, it sounds like somebody screwed up a part of the POSIX specification. I should look for the line that says "During a crash, lose the user's recently changed file data and wipe out the old data too."
IMarv
Re: (Score:3, Funny)
Nope, not just you, I sit also.
Re: (Score:2)
I sit is it? Hmm.
IMarv
POSIX spec is fine, ext4 is flawed (Score:3, Informative)
Someone above says that the POSIX standard is fine, but that ext4 violates it. Here is his quote:
"When applications want to overwrite an existing file with new or changed data [...] they first create a temporary file for the new data and then rename it with the system call - rename("
It seems that ext4 renames the file first, and then writes the file up to 60 seconds later.
Re: (Score:2)
No, POSIX doesn't guarantee a write happens before you do an fsync; an added rename doesn't change this.
This situation is identical to read/write memory ordering: due to caches, different CPUs may see different values of a variable.
Different architectures have different limitations on how reads and writes can be reordered; with x86 it's not too bad, but with the Alpha, which can truly reorder things a lot, it becomes very difficult to put in all the needed memory barriers.
IMHO, there is a performance/usability tradeoff here.
Workaround is disaster for laptops (Score:5, Insightful)
The workaround (flushing everything to disk before the rename) is a disaster for laptops or anything else which might wish to spin down a disk drive.
The write-replace idiom is used when a program is updating a file and can tolerate the update being lost in a crash, but wants either the old or the new version to be intact and uncorrupted. The proposed sync solution accomplishes this, but at the cost of spinning up the drive and writing the blocks at each write-replace. How often does your browser update a file while you surf? Every cache entry? Every history entry? What about your music player? Desktop manager? All of these will spin up your disk drive.
Hiding behind POSIX is not the solution. There needs to be a solution that supports write-replace without spinning up the disk drive.
The ext4 people have kindly illuminated the problem. Now it is time to define a solution. Maybe it will be some sort of barrier logic, maybe a new kind of sync syscall. But it needs to be done.
Re: (Score:3, Insightful)
If the issue is drive spin-up, how have the new generation of flash drives been taken into account? It seems to me that rotational drives are on their way out.
That doesn't do anything for the contemporary generations of laptop, but what would the ramifications be for later ones?
Re: (Score:3, Informative)
There needs to be a solution that supports write-replace without spinning up the disk drive.
How do you intend on writing to the disk drive... without spinning it up? Is this not what you're asking? If this is indeed your question, the answer is already "by using a battery backed cache".
BBH
Re:Workaround is disaster for laptops (Score:5, Informative)
Fixed code:
fwrite()
fsync() - sync this file before close
fclose()
rename()
Either you're a troll or an idiot; since you're AC'ing, I guess I got trolled. This will sync immediately and kill performance and battery life, since every block must be confirmed written before the process can continue. What you need to fix this is a delayed rename that happens after the delayed write.
Problem:
fwrite()
fclose()
rename()
*ACTUAL RENAME*
*TIME PASSES* <-- crash happens here = lose old file
*ACTUAL WRITE*
Real solution:
fwrite()
fclose()
rename()
*TIME PASSES* <-- crash happens here = keep old file
*ACTUAL WRITE*
*ACTUAL RENAME*
Re: (Score:3, Informative)
And you don't get it... The truth is that Ext4 was writing the journal out before any changes took place. This means that when the crash happens between the metadata write and the actual data write, a replay of the journal will cause data loss.
Other filesystems with delayed allocation solve this by not writing the journal before the actual data commits happen. The fix that TFA is talking about introduces this to Ext4.
Re:Workaround is disaster for laptops (Score:4, Informative)
In which case the standard sucks, big time, and finding a loophole that trashes normal expected behavior should not be cause for rejoicing.
There needs to be a way to write a file such that either the old or the new is preserved. Agreed on this?
Now, in a file system that's going to run real well, there needs to be a way to delay writes in order to batch them. Agreed on this?
We have two reasonable demands here. Pick one, because that's all you're going to get.
Currently, in order to keep either the old or new file, it's necessary to write the new file right now. This is the standard behavior, and it trashes performance. Alternatively, the writes can be batched up for later, for good performance, and we run the risk of losing both old and new versions of a file.
In other words, in order to optimize the heck out of the file system, it's necessary to trash the reliability.
What we need is a way to do the rewrite-rename thing so that it can be safely delayed, letting the file system batch up a lot of writes in a really fancy optimized way, but still writing the new file fully before renaming it. There's no obvious reason to me why the file system can't keep track of this and guarantee the order. It may not be required by the standard, but that's no excuse for not implementing it.
The odd thing is... (Score:3, Insightful)
I'm a hobbyist, and I don't program system-level stuff, essentially, at all anymore, but way back when I did do C programming on Linux (~10 years ago), ISTR that this advice (from Ts'o in TFA) was something you couldn't go anywhere without getting hit repeatedly over the head with: if you want the data on the disk, call fsync().
Is this really something that is often missed in serious applications?
Re: (Score:3, Informative)
Calling fsync() excessively completely trashes system performance and usability. Essentially, operating systems have write-back caches to speed code execution. fsync() defeats the write-back cache by writing data out immediately and making your program wait while the flush happens. Modern computers can do activities that involve rapidly touching hundreds of files per second. Forcing each write to use an fsync() slows things down dramatically and makes for a poor user experience.
To make matters worse, on ext3 an fsync() doesn't just flush the one file; it flushes the entire filesystem.
Bad POSIX (Score:5, Interesting)
Ext4, on the other hand, has another mechanism: delayed block allocation. After a file has been closed, up to a minute may elapse before data blocks on the disk are actually allocated. Delayed block allocation allows the filing system to optimise its write processes, but at the price that the metadata of a newly created file will display a size of 0 bytes and occupy no data blocks until the delayed allocation takes place. If the system crashes during this time, the rename() operation may already be committed in the journal, even though the new file still contains no data. The result is that after a crash the file is empty: both the old and the new data have been lost.
Ext4 developer Ted Ts'o stresses in his answer to the bug report that Ext4 behaves precisely as demanded by the POSIX standard for file operations.
If that is true, then to the extent that is true, POSIX is "broken". Related changes to a file system really need to take place in an orderly way. Creating a file, writing its data, and renaming it, are related. Letting the latter change persist while the former change is lost, is just wrong. Does POSIX really require this behavior, or just allow it? If it requires it, then IMHO, POSIX is indeed broken. And if POSIX is broken, then companies like Microsoft are vindicated in their non-conformance.
Easier Fix (Score:4, Insightful)
Why not just make the actual "flushing" process work primarily on memory cache data - including any "renames", "deletes", etc.?
If any "writes" are pending, then the other operations should be done in the strict order in which they were requested. There should be no pattern possible where cache and file metadata can be out of sync with one another.
Sounds like they need to talk to Kirk McKusick (Score:5, Informative)
Kirk McKusick spent a lot of time working out the right order to write metadata and file data in FFS and the resulting file system, FFS with Soft Updates, gets high performance and high reliability... even after a crash.
Workaround patches already in Fedora and Ubuntu (Score:5, Informative)
It's really depressing that there are so many clueless comments in Slashdot --- but I guess I shouldn't be surprised.
Patches to work around buggy applications which don't call fsync() have been around since long before this issue got slashdotted, and before the Ubuntu Launchpad page got slammed with comments. In fact, I commented very early in the Ubuntu log that patches that detected the buggy applications and implicitly forced the disk blocks to disk were already available. Since then, both Fedora and Ubuntu are shipping with these workaround patches.
And yet, people are still saying that ext4 is broken, and will never work, and that I'm saying all of this so that I don't have to change my code, etc ---- when in fact I created the patches to work around the broken applications *first*, and only then started trying to advocate that people fix their d*mn broken applications.
If you want to make your applications such that they are only safe on Linux and ext3/ext4, be my guest. The workaround patches are all you need for ext4. The fixes have been queued for 2.6.30 as soon as its merge window opens (probably in a week or so), and Fedora and Ubuntu have already merged them into their kernels for their beta releases which will be released in April/May. They will slow down filesystem performance in a few rare cases for properly written applications, so if you have a system that is reliable, and runs on a UPS, you can turn off the workaround patches with a mount option.
Applications that rely on this behaviour won't necessarily work well on other operating systems, and on other filesystems. But if you only care about Linux and ext3/ext4 file systems, you don't have to change anything. I will still reserve the right to call them broken, though.
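For reference, the workaround ended up being controlled by the auto_da_alloc mount option, which is enabled by default; a system that wants the raw-speed behavior back would mount with something like the following, where /dev/sdXN and /mnt are placeholders for your own ext4 partition and mount point:

mount -o noauto_da_alloc /dev/sdXN /mnt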
The problem is/was in EXT3 in the first place! (Score:3, Informative)
POSIX specifies that closing a file does not force it to permanent storage. To get that, you MUST call fsync() [manpagez.com].
So the required code to write a new file safely is: open(), write(), fsync(), close(), rename().
There is no performance problem, because fsync(fd) syncs only the requested file. However, that's in theory... use EXT3 and you'll quickly learn that fsync() is only able to sync the whole filesystem: it doesn't matter which file you ask it to sync, it will always sync the whole filesystem! Obviously that is going to be really slow.
Because of this, way too many software developers have dropped the fsync() call to make their software usable (that is, not too slow) on EXT3. The correct fix is to change all the broken software; in the process, that will make EXT3 unusable because of its slow performance, and then EXT3 will either be fixed or abandoned. An alternative is to use fdatasync() instead of fsync(), if the features of fdatasync() are enough. If I've understood correctly, EXT3 is able to do fdatasync() with acceptable performance.
If any piece of software is writing to disk without using either fsync() or fdatasync(), it's basically telling the system: the file I'm writing is not important; try to store it if you don't have better things to do.
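Putting the whole recipe together, with fdatasync() standing in where the file's timestamps don't matter, here is a sketch (most error checks omitted, names invented for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: atomically replace "conf", syncing file data but not timestamps. */
int save_file(const char *data, size_t len)
{
    int fd = open("conf.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    write(fd, data, len);
    fdatasync(fd);              /* data is on disk; skips the mtime-only flush */
    close(fd);
    rename("conf.tmp", "conf"); /* readers now see old or new, never an empty file */
    return 0;
}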