Apps That Rely On Ext3's Commit Interval May Lose Data In Ext4 830
cooper writes "Heise Open posted news about a bug report for the upcoming Ubuntu 9.04 (Jaunty Jackalope) which describes a massive data loss problem when using Ext4 (German version): A crash occurring shortly after the KDE 4 desktop files had been loaded results in the loss of all of the data that had been created, including many KDE configuration files." The article mentions that similar losses can come from some other modern filesystems, too. Update: 03/11 21:30 GMT by T : Headline clarified to dispel the impression that this was a fault in Ext4.
Not a bug (Score:5, Informative)
https://bugs.edge.launchpad.net/ubuntu/+source/linux/+bug/317781/comments/45 [launchpad.net]
https://bugs.edge.launchpad.net/ubuntu/+source/linux/+bug/317781/comments/54 [launchpad.net]
Bull (Score:4, Insightful)
Re:Bull (Score:5, Funny)
In fact, there is no such thing as an OS bug! All good programmers should re-implement essential and basic operating system features in their user applications whenever they run into so-called "OS bugs." If you question this, you must be a bad programmer, obviously.
Exactly. (Score:5, Insightful)
People keep making arguments about the spec, but this seems like a case of throwing the baby out with the bathwater. The spec is intended to serve the interest of robustness, not the other way around; demolishing robustness and then citing the spec is forgetting why there is a spec in the first place.
Yes, you can design something that's intentionally brain-dead, but still true to spec as a kind of intellectual exercise about extremes, but in the real world, the idea should be the opposite:
Stay true to the spec and try to robustly handle as many contingencies as is possible. Both developers should do this, filesystem and application, not "just" one or the other.
It's not enough just to be true to spec; the idea is to get something that works as well, not jump through hoops to cleverly demonstrate that the spec does not protect against all possible bad outcomes.
It's the bad outcomes that we're trying to mitigate by having a spec in the first place!
So my point: what exactly is wrong with meeting the spec and also trying to prevent other coders' serious problems from affecting your own code? I thought this was a basic part of coding: even if someone else is an idiot programmer, that doesn't make it okay to let the whole system fall down. Or did we all miss the part where we went for protected memory and pre-emptive multitasking? Hell, if everybody had just been a great programmer, none of that would have been needed.
The point is to have a working system by following the spec and to try to clean up behind other programmers when they don't as much as possible within your own spec-compliant code. The point is not simply to "meet spec" and the actual utility of the system or vulnerability to the mistakes of others be damned.
Re:Exactly. (Score:4, Insightful)
``It's not enough just to be true to spec;''
Yes, it is. That way, you get what the spec says you get.
It can even be argued that doing better than the spec is dangerous. After all, that is what got us this riot: things doing more than the spec said, people relying on that, and then getting angry when another implementation of the spec didn't have the same additional features.
You can only assume that you get what the spec says you get. If you assume more, it's your problem if your assumptions are wrong. If you want more than the spec gives you, you either need to implement it yourself or get a new spec implemented.
``the idea is to get something that works as well, not jump through hoops to cleverly demonstrate that the spec does not protect against all possible bad outcomes.''
I don't think anyone jumped through hoops to cleverly demonstrate that the spec does not protect against all possible bad outcomes. I think they jumped through hoops to get the best possible performance, while still being conformant to the spec. If this breaks applications that rely on behavior that isn't in the spec, it's because those applications are buggy.
``It's the bad outcomes that we're trying to mitigate by having a spec in the first place!''
I agree completely. But we seem to differ in how this is supposed to work.
I say that specifications can be used to avoid bad results by specifying exactly what can be relied on. Everything that is not in the specification is unspecified and thus cannot be relied on. Knowing this helps you write better software, because you know what you can assume, and what you have to write code for.
You seem to be saying that having a specification means we want to avoid bad results, so whomever implements the specification must do their best to avoid bad results, no matter what it says in the specification. I find that completely unreasonable.
Re:Bull (Score:5, Insightful)
The journal isn't being written before the data. Nothing is written for periods of 45 to 120 seconds, so as to batch the writes up into efficient lumps. The journal is there to make sure that the data on disk makes sense if a crash occurs.
If your system crashes after a write hasn't hit the disk, you lose either way. Ext3 was set to write at most 5 seconds later. Ext4 is looser than that, but with associated performance benefits.
Re: (Score:3, Insightful)
Oh great... basing ext4 performance gains on caching writes in the OS for 2 minutes just means they will focus their optimizations in ways that will suck even worse than ext3 does for applications that can't afford the risk of enabling write caching...
man 2 fsync (Score:5, Informative)
The filesystem doesn't guarantee anything is written until you've called fsync and it has returned.
Re:Bull (Score:5, Insightful)
The filesystem should be hitting the metal about 0.001 microseconds after I call write() or whatever the function is.
If that's the behavior you expect, then you need to be running your apps under an OS like DOS, not POSIX or Windows (which both clearly specify that this is *not* how they function).
Re:Bull (Score:5, Insightful)
Ahh yes, I love developers like you. You assume your app is the only one running, and it must have full access to the entire IO bandwidth an HDD can provide.
And then an antivirus program updates while Firefox is starting and a video is transcoding, and your program either slows to a crawl or crashes after 30 seconds of not receiving or being able to write any data.
Recently I was playing Left4Dead when one of my HDDs in my RAID array died in a very audible way. All the drives spun down, then 3 of them came back online. IOPS went to zero for over 60 seconds. No data in or out to those devices!
Interestingly, Ventrilo kept running fine. Left4Dead completely froze, but a minute or so after the 3 drives came back online, it unfroze. (CPU catching up?) All the while I was freaking out on Ventrilo, much to my friends' amusement.
Pretty much everything else crashed, except for Portable Firefox... uTorrent crashed, but first it left corrupted files all over - appearing as undeletable folders, which require a format to remove.
Time for a disk wipe. Thank you, shitty developers! Next time, use the API properly, and if you must have it written to disk, sync it immediately after you write!
Re:Bull (Score:5, Insightful)
It's not going to happen immediately in any case. Some optimizations can only be done if you introduce a delay, and once it's introduced you have to deal with the fact that there's a delay. Just because it's one second instead of a minute doesn't mean your computer can't crash at precisely the wrong moment.
While I'm not an expert in filesystems, I'd expect writing a single file to be at least 4 writes: inode, data, update the directory the file is in, and a bitmap to show space allocation. If there's a journal add a write for the journal. Each of those will require a seek due to all of these things being in different places on the disk in most filesystems.
So your 40 small files just turned into 200-250 seeks, which at 8ms each will take 1.6 to 2 seconds to complete.
Now let's suppose we can batch things up. We need to write the inode and data for each file, and can do just one seek for the directory (the same for all), and the bitmap and journal can be updated in one operation. Now we're down to 2 writes per file, giving 80 seeks, plus 3 for metadata, giving 83 seeks, which can be done in 0.6 seconds.
But what if we do delayed allocation and create all the inodes and write all the data as one large contiguous area? We're now down to 5 writes total, with a seek time of 40ms. The time needed to write the data can probably be disregarded, since modern disks easily write at 50MB/s, and those 40 files with metadata probably amount to less than 32K.
And with some optimization, we just reduced the time it takes to write your 40 files to just 2% of the unoptimized time.
You're not going to get this sort of improvement without some sort of delay. If you insist on a per-file write you'll get really, really awful performance on the sort of workload you're using as an example. And you can even see it in practice, just boot a DOS box, and do benchmarks with and without smartdrv. Running something like a virus scanner should show a huge difference in the presence of a cache.
Re:Bull (Score:5, Interesting)
Mount your filesystem with the "sync" option, that should do what you want I guess. Performance will be bad though.
There are only two ways to do this: either you do it completely synchronously, and get a guarantee of the write being done when the application is done writing, or you have a delay of arbitrary length. If you have a delay, even if it's 1ms, and you care about the possibility of something going wrong at that moment, the application has to deal with the possibility. Reducing the delay only makes it less likely, but given enough time it'll happen.
Even doing it fully synchronously you can run into problems. A file can be half written (it's written by the block, after all), and of those 40 files, perhaps one references data in another.
Point being, if such things are really a problem for the application, the application must do things correctly, by writing to temporary files, renaming, and writing in the right sequence so that even if something is interrupted in the middle the data on disk still makes sense.
Even if the FS does like you want and starts writing immediately, that won't save you from the fact that it has no clue how your file is internally structured, and will perform writes in filesystem-block-sized chunks. So your 10K file can be interrupted in the middle and get cut off at 4K in size after a crash. If your application then goes and chokes on that, there's no way the FS can fix it for you.
NCQ doesn't take care of half that's needed for safe writing to disk. Two problems for a start:
1. Your hard disk doesn't know about your filesystem's structure. Unless told otherwise, the HDD will happily reorder writes and update on-disk data first, journal second, leading to disk corruption. The hard disk can't magically figure out the right way to write the data so that it remains consistent; only the OS and the application can ensure that.
2. NCQ is limited to 32 commands anyway, the OS has to do handling on its own anyhow.
Because it's a simpler abstraction. If you're not willing to learn or deal with the POSIX semantics, such as fsync and rename, and checking the return code of every system call, you can use something like sqlite that does it internally, saves you the effort, and returns one value that tells you whether the whole update worked or not.
Re: (Score:3, Insightful)
Why should synchronous writes be the default? Programmers are already too lazy and/or stupid to add a simple fsync() where needed, so why should we all drop what we're doing, make the slowest option the default, and then have to jump through hoops to make things workable again?
If asynchronous writes are the biggest of your problems, you need to find yourself a new career. One that hopefully doesn't require meticulous attention to detail.
Re:Bull (Score:5, Insightful)
Does anyone else think that 150 second is a bit over the top in terms of writing to disk?
I could understand one or two seconds as you speculate more data might come that needs to be written.
5 seconds is a bit iffy, as with ext3.
150 seconds? That's surely a bug.
Comment removed (Score:5, Insightful)
Comment removed (Score:5, Insightful)
Re:Bull (Score:4, Insightful)
It's not a KDE issue. It's not a Gnome issue.
It's a file system risk issue, and it affects everything running on the box.
The EXT4 developers have decided it's OK to increase the risk window by 3000% and risk a crash for a minute and 20 seconds in an attempt to gain a little performance. (Damn little performance.)
With EXT3 the risk window was 5 seconds. Now it's 150 seconds.
It's ridiculous to move what should be a low-level data integrity function out of the file system and inflict it on user-land code.
Re:Bull (Score:4, Insightful)
It is a KDE issue. Only userland knows which data is critical. Only userland knows whether data can be backed up or not. The OS cannot ensure full data integrity without a massive negative performance impact, however much you may wish for it. So what the OS does is give you a way to tell it which data needs to be on disk now and which data should be on disk in a while if nothing goes wrong.
There really is no other way of doing it. Unless you think fundamentally defective code is acceptable if the risk of getting hit is a bit smaller?
Re:Bull (Score:4, Insightful)
Data that userland applications WRITES TO DISK is critical. If the filesystem takes its sweet time about actually doing the write, it's not the application's fault. And no, calling fsync() or fdatasync() constantly is no good, because that really does make your performance poor.
Re:Bull (Score:5, Insightful)
dude, ALL data is critical.
If you really think that, then you should leave the era of modern disk access and mount all your partitions with the "sync" option. Then none of your software will have to think about syncing. Of course all file access will be so slow that nobody will want to work with that system either.
Hmm. I wonder why "sync" is not a default mount option?
Re:Bull (Score:5, Insightful)
No. That is why we have fsync().
No file system will promise you data integrity with a power failure. That is why you should run with a UPS.
You can not depend on the write delay time. What happens if you get a really fast processor and say a really slow drive? Unless you are building software that only runs on ONE set of hardware you just can not do that.
This is a bug that was always in KDE and they got lucky up till now.
Re:Bull (Score:5, Informative)
This is NOT a bug. Read the POSIX documents.
Filesystem metadata and file contents are NOT required to be written synchronously, and a sync is needed to ensure they are synchronised.
It's just down to retarded programmers who assume they can truncate/rename files and any data pending writes will magically meet up a-la ext3 (which has a mount option which does not sync automatically btw).
RTFPS (Read The Fine POSIX Spec).
Re: (Score:3, Insightful)
Re:Bull (Score:5, Insightful)
Rewriting the same file over and over is known to be risky. The proper sequence is to create a new file, sync, rename the new file on top of the old one, and optionally sync again. In other words, app developers must be more careful about what they're doing, not put all the blame on the filesystems. There's only so much an fs can do to avoid such brouhahas. Many other filesystems behave similarly to ext4, btw.
Re:Bull (Score:5, Insightful)
Bullshit. It is not a filesystem limitation. POSIX tells you what you can expect from file system calls. Data committed to disk as soon as an fwrite or fclose returns is not something you can or should expect. (And this is true of every OS I've used in the last 20 years.)
A great many crap programmers think APIs ought to do what they'd like them to. But APIs don't. At best they do what they are specified to do.
Re:Bull (Score:5, Informative)
It isn't a flaw. It is documented and the programmers didn't follow the docs. There is a specific command called fsync to flush the buffers to prevent the problem.
In fact here is a link to that call http://www.opengroup.org/onlinepubs/007908799/xsh/fsync.html [opengroup.org]
Yes, if we had a perfect world we would have instant IO, but we do not. The flaw is in the application, plain and simple.
They didn't use the api properly and it really is just that simple.
Re:To Anonymous Coward: (Score:5, Informative)
mount -o sync. Enjoy your slow returns and strictly ordered writes.
Re:Bull (Score:5, Informative)
Ext3 doesn't write out immediately either. If the system crashes within the commit interval, you'll lose whatever data was written during that interval. That's only 5 seconds of data if you're lucky, much more data if you're unlucky. Ext4 simply made that commit interval and backend behavior different than what applications were expecting.
All modern fs drivers, including ext3 and NTFS, do not write immediately to disk. If they did then system performance would really slow down to almost unbearable speeds (only about 100 syncs/sec on standard consumer magnetic drives). And sometimes the sync call will not occur since some hardware fakes syncs (RAID controllers often do this).
POSIX doesn't define flushing behavior when writing and closing files. If your application needs data to be in NV memory, use fsync. If it doesn't care, good. If it does care and it doesn't sync, it's a bad application and is flawed, plain and simple.
Re:Not a bug (Score:5, Insightful)
I disagree. "Writing software properly" apparently means taking on a huge burden for simple operations.
Quoting T'so:
"The final solution, is we need properly written applications and desktop libraries. The proper way of doing this sort of thing is not to have hundreds of tiny files in private ~/.gnome2* and ~/.kde2* directories. Instead, the answer is to use a proper small database like sqllite for application registries, but fixed up so that it allocates and releases space for its database in chunks, and that it uses fdatawrite() instead of fsync() to guarantee that data is written on disk. If sqllite had been properly written so that it grabbed new space for its database storage in chunks of 16k or 64k, and released space when it was no longer needed in similar large chunks via truncate(), and if it used fdatasync() instead of fsync(), the performance problems with FireFox 3 wouldn't have taken place."
In other words, if the programmer took on the burden of tons of work and complexity in order to replicate lots of the functionality of the file system and make it not the file system's problem, then it wouldn't be my problem.
I personally think it should be perfectly OK to read and write hundreds of tiny files. Even thousands.
File systems are nice. That's what Unix is about.
I don't think programmers ought to be required to treat them like a pouty flake: "in some cases, depending on the whims of the kernel and entirely invisible moods, or the way the disk is mounted that you have no control over, stuff might or might not work."
Re:Not a bug (Score:5, Interesting)
I personally think it should be perfectly OK to read and write hundreds of tiny files. Even thousands.
It is perfectly OK to read and write thousands of tiny files. Unless the system is going to crash while you're doing it and you somehow want the magic computer fairy to make sure that the files are still there when you reboot it. In that case, you're going to have to always write every single block out to the disk, and slow everything else down to make sure no process gets an "unreasonable" expectation that their data is safe until the drive catches up.
Fortunately his patches will include an option to turn the magic computer fairy off.
Re:Not a bug (Score:4, Informative)
Your expectation is quite reasonable. When the application writes something to disk, it should be there on disk, right? The way the article is presented makes it sound like a horrible bug in ext4 that it doesn't do this. But believe it or not, almost no filesystem provides this guarantee by default. ext3 doesn't (in the default mode), nor does ext2, nor a typical implementation of FAT or NTFS or the Minix filesystem or whatever.
For decades now it has been an accepted trade-off that the filesystem can hold back disk writes and do them later, giving better disk performance at the expense of losing data if there is a crash. Losing file data is bad but losing metadata is even worse, since corrupt filesystem metadata can trash the contents of many files and requires a lengthy fsck on startup. So journalling filesystems, as typically configured, keep a journal for metadata so it's not corrupted even if the power gets cut at the most inconvenient moment. But they don't extend the same care to file contents, because it would be too slow. You can enable it by setting the data=journal parameter in ext3 (and I guess ext4 too) but this isn't the default.
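As a sketch of that knob (the device and mount point below are placeholders; check your own system before copying), full data journalling on ext3 is enabled with the data=journal mount option:

```shell
# /etc/fstab entry (example device and mount point) with full data journalling
/dev/sda2  /home  ext3  defaults,data=journal  0  2

# or one-off at mount time:
mount -o data=journal /dev/sda2 /home
```

Expect noticeably lower write throughput in this mode, since every data block is written twice: once to the journal and once to its final location.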
It is certainly a bit unfair that the filesystem takes such pains with its own bookkeeping information but doesn't bother to be so careful about user data. But as I said, it's a known tradeoff to get better performance. If you want to be sure your file has reached disk you need to fsync(). This sucks, but it's the Unix way, and has been so for like, forever. So it's not a bug in ext4 - just bad luck and perhaps a misunderstanding between kernel and userspace about what guarantees the filesystem provides.
As SSDs replace rotating storage, there is less need to buffer writes (certainly the need to minimize seek time goes away, and that's the biggest reason), so we might see this whole situation resolved within a few years. Perhaps in 2015, when the system call returns, you can be sure that the data is written. Until that longed-for day, bear in mind that your filesystem is permitted to temporarily lie to you about what has been written, and call fsync() if you are paranoid.
Re:Not a bug (Score:5, Informative)
A file system should take my data buffer, and after saying "Ok, I got it"
There's your problem, you didn't even bother to ask if it got it, you just threw a ton of data into the file descriptor and closed it, now didn't you. And you want me on thedailywtf?
But let's back up here, because there's more to it than just people too lazy to call fsync() in order to ask the file system to write the data to the disk and say "Ok, I got it".
All that stuff about creating a backup copy and doing this and that, has to happen inside the file system.
The filesystem does exactly what you tell it to do. If you don't want it to make a zero byte file, then DON'T USE O_TRUNC OR *truncate() TO EMPTY YOUR FILE. Make a new file, fill it up, rename it over the other file. Don't assume that in just a few instructions, you're going to be filling it back up with new data, because those instructions may never arrive.
You don't like it? Try and convince people that (open file, erase all the data in it, do some stuff, write some data, do some more stuff, write some more data, write data to disk, close file) should be an uninterruptible atomic operation. You want a versioning filesystem? Take your pick [wikipedia.org].
Re: (Score:3, Insightful)
Translation: "Our filesystem is so fucked up, even SQL is better."
WTF is this guy thinking? UNIX has used hundreds of tiny dotfiles for configuration for years and it's always worked well. If this filesystem can't handle it, it's not ready for production. Why not just keep ALL your files
Re:Not a bug (Score:5, Insightful)
UNIX filesystems have used tiny files for years and they've had data loss under certain conditions. My favorite example is the XFS that would journal just enough to give you a consistent filesystem full of binary nulls on power failure. This behavior was even documented in their FAQ with the reply "If it hurts, don't do it."
Filesystems are a balancing act. If you want high performance, you want write caching to allow the system to flush writes in parallel while you go on computing, or make another overlapping write that could be merged. If you want high data security, you call fsync and the OS does its best possible job to write to disk before returning (modulo hard drives that lie to you). Or you open the damn file with O_SYNC.
What he's suggesting is that the POSIX API allows either option to programmers, who often don't know there's even a choice to be had. So he recommends having the few people who do know the API inside and out focus on system libraries like libsqllite, and having dumbass programmers use that instead. You and he may not be so far apart, except his solution still allows hard-nosed engineers access to low-level syscalls, at the price of shooting their foot off.
Re:Not a bug (Score:5, Informative)
Quoting T'so:
"The final solution, is we need properly written applications and desktop libraries. The proper way of doing this sort of thing is not to have hundreds of tiny files in private ~/.gnome2* and ~/.kde2* directories. Instead, the answer is to use a proper small database like sqllite for application registries, but fixed up so that it allocates and releases space for its database in chunks, ...
Linux reinvents windows registry?
Who knows what they will come up with next.
Re:Not a bug (Score:5, Insightful)
It's called "gconf", and it's worse than that. It's no longer abandonware lurking at the heart of gnome but it's still a nightmare.
Re: (Score:3, Insightful)
Instead, the answer is to use a proper small database like sqllite for application registries
Yeah, linux should totally put in a Windows style registry. What the fuck is this guy on.
Re:Not a bug (Score:5, Funny)
That would be smart, but only if the SQL database is encrypted too. It's theoretically possible to read a registry with an editor, and we can't have that. Also, we need a checksum on the registry. If the checksum is bad, we have to overwrite the registry with zeroes. Registries are monolithic, and we have to make sure that either it's good data, or NONE of it is good data. Otherwise the user would get confused.
I am so excited about this that I'm going to start working on it just as soon as I get done rewriting all my userspace tools in TCL.
Re:Not a bug (Score:5, Insightful)
I personally think it should be perfectly OK to read and write hundreds of tiny files. Even thousands.
To paraphrase https://bugs.edge.launchpad.net/ubuntu/+source/linux/+bug/317781/comments/54 [launchpad.net] : You certainly can use tons of tiny files, but if you want to guarantee your data will still be there after a crash, you need to use fsync. And if that causes performance problems, then perhaps you should rethink how your application is doing things.
Re: (Score:3, Insightful)
Indeed. And that is what the suggestion about using a database was all about. You still can use all the tiny files. And there are better options than syncing for reliability. For example, rename the file to backup and then write a new file. The backup will still be there and can be used for automated recovery. Come to think of it, any decent text editor does it that way.
Truncating critical files without backup is just incredibly bad design.
Re:Not a bug (Score:5, Insightful)
It seems exceedingly odd that issuing a write for a non-zero-sized file and having it delayed causes the file to become zero-size before the new data is written.
Generally when one is trying to maintain correctness, one allocates space, places the data into it and only then links the space into place (paraphrased from Barry Dwyer's "One more time - how to update a master file", Communications of the ACM, January 1981).
I'd be inclined to delay the metadata update until after the data was written, as Mr. Tso notes was done in ext3. That's certainly what I did back in the days of CP/M, writing DSA-formatted floppies (;-))
--dave
Re: (Score:3, Informative)
It seems exceedingly odd that issuing a write for a non-zero-sized file and having it delayed causes the file to become zero-size before the new data is written.
But you never create and write to a file as a single operation, there's always one function call to create the file and return a handle to it, and then another function call to write the data using the handle. The first operation writes data to the directory, which is itself a file that already exists, the second allocates some space for the file, writes to it, and updates the directory. Having the file system spot what your application is trying to do and reversing the order of the operations would be... t
Re:Not a bug (Score:5, Informative)
Let's not forget that the only consequence of delayed allocation is the write-out delay changing. Instead of data being "guaranteed" on disk in 5 seconds, that becomes 60 seconds.
Oh dear God, someone inform the president! Data that is NEVER guaranteed to be on disk according to spec is only guaranteed on disk after 60 seconds.
You should not write your application to depend on filesystem-specific behavior. You should write them to the standard, and that means fsync(). No call to fsync, look it up in the documentation (man 2 write).
The rest of what Ted T'so is saying is optimization, speeding up the boot time for gnome/kde; it is not necessary for correctness.
Please don't FUD.
You know, I'll look up the docs for you:
(quote from man 2 write)
NOTES
A successful return from write() does not make any guarantee that data has been committed to disk. In fact, on some buggy implementations, it does not even guarantee that space has successfully been reserved for the data. The only way to be sure is to call fsync(2) after you are done writing all your data.
If a write() is interrupted by a signal handler before any bytes are written, then the call fails with the error EINTR; if it is interrupted after at least one byte has been written, the call succeeds, and returns the number of bytes written.
That brings up another point, almost nobody is ready for the second remark either (write might return after a partial write, necessitating a second call)
So the normal case for a "reliable write" would be this code:

    size_t written = 0;
    ssize_t r = write(fd, &data, sizeof(data));
    while (r >= 0 && written + r < sizeof(data)) {
        written += r;
        r = write(fd, (const char *)&data + written, sizeof(data) - written);
    }
    if (r < 0) {
        /* error handling code, at the very least looking at EIO, ENOSPC
         * and EPIPE for network sockets */
    }
and *NOT*
write(fd, data, sizeof(data)); // will probably work
Just because programmers continuously use the second method (just check a few sf.net projects) doesn't make it the right method (and as there is *NO* way to fix write to make that call reliable in all cases you're going to have to shut up about it eventually)
Hell, even firefox doesn't check for either EIO or ENOSPC and certainly doesn't handle either of them gracefully, at least not for downloads.
Re:Not a bug (Score:5, Informative)
Re: (Score:3, Insightful)
I disagree. "Writing software properly" apparently means taking on a huge burden for simple operations.
No. Writing software properly means calling fsync() if you need a data guarantee.
Pretty sure no one in their right mind would call using fsync() barriers a "huge burden". There are an enormous number of programs out there that do this correctly already.
And then there are some that don't. Those have problems. They're bugs. They need to be fixed. Fixing bugs is not a "huge burden", it's a necessary task.
Re:Not a bug (Score:4, Insightful)
In other words, if the programmer took on the burden of tons of work and complexity in order to replicate lots of the functionality of the file system and make it not the file system's problem, then it wouldn't be my problem.
I couldn't agree more. A filesystem *is* a database, people. It's a sort of hierarchical one, but a database nonetheless.
It shouldn't care if there's some mini-SQL thing app sitting on top providing another speed hit and layer of complexity or just a bunch of apps making hundreds of f{read|write|open|close|sync}() calls against hundreds of files. Hundreds of files, while cluttered, is very simple and easily debugged/fixed when something gets trashed. Some sort of obfuscated database cannot be fixed with mere vi. (Emacs, maybe, but only because it probably has 17 database repair modules built in, right next to the 87 kitchen sinks that are also included.)
I do rather agree that it's not a bug. An unclean shutdown is an unclean shutdown, and Ts'o is right - there's not a defined behaviour. Ext4 is better at speed, but less safe in an unstable environment. Ext3 is safer, but less speedy. It's all just trade-offs, folks. Pick one appropriate to your use. (Which is why, when I install Jaunty, I'll be using Ext3.)
Re:Not a bug (Score:5, Informative)
You're right. The correct thing to do is to *always* call fsync() when you need a data guarantee, *regardless* of which FS you're on. The fact that not doing it in the past hasn't caused problems isn't the problem- those calls are the correct way of handling things.
Re: (Score:3)
Unix philosophy is to make configuration files user- and script-editable. NOT to create hundreds of files per app making it utterly unmanageable.
Re: (Score:3, Funny)
Re: (Score:3, Insightful)
lol.
It's a consequence of a filesystem that makes bad assumptions about file size.
I suppose in your world, you open a single file the size of the entire filesystem and just do seek()s within it?
It's a bug. A filesystem which does not responsibly handle any file of any size between 0 bytes and MAXFILESIZE is bugged.
Deal with it and join the rest of us in reality.
Re:Not a bug (Score:5, Insightful)
Re:Not a bug (Score:5, Insightful)
The benefit of journaling file systems is that after the crash you still have a file system that works. How many folks remember when Windows would crash, resulting in a HDD that was so corrupted the OS wouldn't start. Same with ext2.
If these folks don't like asynchronous writes, they can edit their fstab (or whatever) to have the sync option so all their writes will be synchronous and the world will be a happy place.
Note that they will also have to suffer a slower system, and possibly a shortened lifetime of their HDD, but at least their configuration files will be safe.
Re:Not a bug (Score:5, Informative)
Er, actually it removes the previous data, then waits to replace it for long enough that the probability of noticing the disappearance approaches unity on flaky hardware (;-))
--dave
Re:Not a bug (Score:5, Informative)
It just loses recent data if your system crashes before it has flushed what it's got in RAM to disk.
No, that's the bug. It loses ALL data. You get 0 byte files on reboot.
Re:Not a bug (Score:4, Insightful)
Close, but no cigar. The data we need safe is the data already on the disk: if you don't flush, you get to keep the old version that is already there.
That's an interesting interpretation of fsync(), but, unfortunately, one that's not supported by the POSIX spec. Nowhere does it say that the system cannot flush the data that you've written so far without an explicit fsync() call. If you're unlucky enough that this happened after you've truncated the file, but before you wrote anything into it - well, too bad. As I understand it, ext3 could also exhibit this behavior; it was simply harder to reproduce because the implicit flushes were much more frequent.
Anyway, this post [slashdot.org] seems to explain what's actually going on there in the (very specific) case of KDE.
Re:Not a bug (Score:5, Insightful)
No. It's not.
If what you say is true there would be no need for the fsync() function (and related ones).
Read the standards if you want. The filesystem is only bugged if it loses recent data under conditions where the application has asked it to guarantee that the data is safe. If the app hasn't asked for any such guarantee by calling fsync() or the like, the filesystem is free to do as it likes.
Re: (Score:3, Interesting)
That's your filesystem definition. Even there, I can guarantee you it can't be built; thus, by your own definition, no file system will ever be free of bugs.
How come ?
I open a file
I write one byte
I close the file
Data is not on disk BECAUSE THE DISK WAS FULL and you failed to plan for intercepting errors/warnings.
Filesystems need to be used along with their specifications, not the way you'd want them to work.
Re: (Score:3, Interesting)
Re:Not a bug (Score:5, Insightful)
As an application developer, the last thing I want to worry about is whether or not the fraking filesystem is going to persist my data to disk.
As an application developer, you are expected to know what the API does, in order to use it correctly. What Ext4 is doing is 100% respectful of the spec.
Actually, no. (Score:3)
Re:Actually, no. (Score:4, Insightful)
If those high level wrappers do not exist, then do not blame the API developers for you not knowing how they work.
Re: (Score:3, Interesting)
As a user of a framework that doesn't suck, I don't have to worry about this problem. When I need to write a file in such a way that the entire operation either succeeds, or the entire operation fails (a common requirement), the framework I use provides a flag that I can set on the write operation to do all of the write/rename juggling that needs to happen, according to POSIX, to make it work. As such, my code will work happily on any filesystem that doesn't break the spec.
If you are using a high-level l
Re:Not a bug (Score:4, Insightful)
You're welcome to write lots of little files. It will just be slow if you sync them all, or unsafe if you don't.
Same way a database will tell you to wrap lots of actions in a single transaction if you don't want the cost of a full commit after each action.
Except the filesystem API doesn't have any way to says "commit these 500 little files in a single transaction", unfortunately.
Annoyingly, it also doesn't have "unlink this directory and the files inside it in a single transaction", because unlink performance blows goats.
Re:Not a bug (Score:4, Insightful)
The idiocy is in expecting the FS to do something it was never asked to do. There is one way to commit data to disk in Posix systems. That function has existed for well over 20 years. It's probably going on 35 years now, but I don't know my Unix history well enough to be sure.
I think the problem is more and more people believing themselves to be good programmers, when they really do not understand what they are doing. Truncating and then writing critical files is a very bad idea to begin with. The way you do it is to rename the old file to a backup and write to a new file. Also have a procedure in place to recover from the backup if the main file is broken. Maybe even do checksums on the main file. In addition, only write if you have to. That is robust design, not the amateur-level truncation the KDE folks seem to be doing routinely.
Re: (Score:3, Informative)
The point of a journal is to allow the file system to return to a defined state in case the unexpected happens. This keeps the whole file system from being fucked by a crash or sudden data loss. It's better to know you lost some data than to have the filesystem in a state where some data is corrupt but you have no way to tell where or what it is. The situation here is that ext4 has increased the timeframe between commits. This increases performance at the cost of losing more data if a crash happens. Tota
Re: (Score:3, Informative)
The point of having a rock-solid filesystem is to have a rock-solid filesystem. Any filesystem that crashes and loses data is bad. What is the point of a journal again? To enforce someone's idea of how an API should be coded to, or to reduce data loss?
ext4 did not crash. Ext4 also did not lose any data it claimed to have gotten to disk. However, unless you want the filesystem slower by a factor of 10x....100x, you have to delay writes. And that means your data is only reliably on disk after an fsync. Any go
Don't worry (Score:5, Funny)
Don't worry guys, I read the summary this time, and it only affects the German version of ext4.
Re:Don't worry (Score:4, Funny)
Makes perfect sense: Germans are ridiculously punctual; if the allocation is delayed you just KNOW something is terribly wrong.
Re: (Score:3, Funny)
OMG, you expect me to RTFA??!! In a BUGzilla?
pr0n (Score:5, Funny)
Works as expected... (Score:5, Insightful)
The problem here is that delaying writes speeds up things greatly but has this possible side-effect. For a shorter commit time, simply stay with ext3. You can also mount your filesystems "sync" for a dramatic performance hit, but no write delay at all.
Anyways, with modern filesystems data does not go to disk immediately unless you take additional measures, like a call to fsync. This should be well known to anybody who develops software and is really not a surprise. It has been done like that on server OSes for a very long time. Also note that there is no loss of data older than the write delay period, and this only happens on a system crash or power failure.
Bottom line: Nothing to see here, except a few people that do not understand technology and are now complaining that their expectations are not met.
Re:Works as expected... (Score:5, Insightful)
Nothing to see here, except a few people that do not understand technology and are now complaining that their expectations are not met.
You're right, there really is nothing to see here. Or rather, there's nothing left. As the article says, a large number of configuration files are opened and written to as KDE starts up. If KDE crashes and takes the OS with it (as it apparently does), those configuration files may be truncated or deleted entirely -- the commands to re-create and write them having never been sync'd to disk. As the startup of KDE takes longer than the write delay, it's entirely possible for this to seriously screw with the user.
The two problems are:
1. Bad application development. Don't delete and then re-create the same file. Use atomic operations that ensure that files you are reading/writing to/from will always be consistent. This can't be done by the Operating System, whatever the four color glossy told you.
2. Bad Operating System development. If an application kills the kernel, it's usually the kernel's fault (drivers and other code operating in privileged space are obviously not the kernel's fault) -- and this appears to be a crash initiated from code running in user space. Bad kernel, no cookie for you.
Re:Works as expected... (Score:4, Insightful)
I agree on both counts. Some comments
1) The right sequence of events is this: rename the old file to a backup name (atomic), write the new file, sync the new file, and then delete the backup file. It is however better, for anything critical, to keep the backup. In any case an application should offer to recover from the backup if the main file is missing or broken. To this end, add a clear end-mark that allows checking whether the file was written completely. Nothing new or exciting, just stuff any good software developer knows.
2) Yes, a kernel should not crash. Occasionally it happens nonetheless. It is important to notice that ext4 is blameless in the whole mess (unless it causes the crash).
Re:Works as expected... (Score:5, Insightful)
That's an improvement, but it can be made even safer by skipping the delete step. Once the new file is created just rename it on top of the original. The rename system call guarantees that at any point in time the name will refer to either the old or the new file. I'm not sure you really need the sync step. I haven't read the spec in that kind of detail, but if that sync step is really necessary I'd call that a design flaw. The file system may delay the write of the file as well as the rename, but it shouldn't perform the rename until the file has actually been written.
Translation (Score:4, Insightful)
We use techniques that show great performance so people can see we beat ext3 and other filesystems.
Oh shit, as a tradeoff we lose more data in case of a crash. But it's not our fault.
Honestly, you cannot eat your cake and have it too.
Re:Exactly (Score:5, Insightful)
Re:Exactly (Score:5, Insightful)
The problem is not the many small files, but the missing disk sync. The many small files just make the issue more obvious.
True, with ext4 this is more likely to cause problems, but any delayed write can cause this type of issue when no explicit flush-to-disk is done. And let's face it: fsync/fdatasync are not really a secret to any competent developer.
What however is a mistake, and a bad one, is making ext4 the default filesystem at this time. I say give it another half year, for exactly this type of problem.
Re:Exactly (Score:5, Insightful)
"And lets face it: fsync/fdatasync are not really a secret to any competent developer."
I disagree. Users of high-level languages (especially those that are cross-platform) are not necessarily aware of this situation, and arguably should not need to be.
And I disagree with your disagreement. This is something any competent developer has to know. There are fundamental limits in practical computing. This is one. It cannot be hidden without dramatic negative effects on performance. It is not a platform-specific problem. It is not a language-specific problem. It is not a hidden issue. A simple "man close" will already tell you about it. Any decent OS course will cover the issue.
I reiterate: Any good developer knows about write-buffering and knows at least that extra measures have to be taken to ensure data is on disk. Those that do not are simply not good developers.
Classic tradeoff (Score:5, Insightful)
It's amazing how fast a filesystem can be if it makes no guarantees that your data will actually be on disk when the application writes it.
Anyone who assumes modern filesystems are synchronous by default is deluded. If you need to guarantee your data is actually on disk, open the file with O_SYNC semantics. Otherwise, you take your chances.
Moreover, there's no assertion that the filesystem was corrupt as a result of the crash. That would be a far more serious concern.
Re:Classic tradeoff (Score:4, Informative)
It's even WORSE than just being asynchronous:
Ext4 reproducibly delays write ops, but commits journal updates concerning those writes.
Re: (Score:3, Interesting)
Even if you use O_SYNC, or fsync() there is no guarantee that the data are safely stored on disk.
You also have to disable HDD caching, e.g., using:
hdparm -W0 /dev/hda1
Re: (Score:3, Insightful)
Even if you use O_SYNC, or fsync() there is no guarantee that the data are safely stored on disk.
You also have to disable HDD caching, e.g., using:
hdparm -W0 /dev/hda1
Well, yes, but unless you have an extreme write pattern, the disk will not take long to flush to platter. And this will only result in data loss on power failure. If that is really a concern, get a UPS.
Theory doesn't matter; practice does (Score:3, Interesting)
So, POSIX never guarantees your data is safe unless you do fsync(). So, ext3 was not 100% safer either. So, it's the applications' fault that they truncate files before writing.
But it doesn't matter what POSIX says. It doesn't matter where the fault lies. To the users, a system either works or it doesn't, as a whole.
EXT4 aims to replace EXT3 and becomes the next gen de-facto filesystem on Linux desktop. So it has to compete with EXT3 in all regards; not just performance, but data integrity and reliability as well. If in the common scenarios people lose data on EXT4 but not EXT3, the blame is on EXT4. Period.
It's the same thing that a kernel does. You have to COPE with crappy hardware and user applications, because that's your job.
Re:Theory doesn't matter; practice does (Score:5, Insightful)
This is the attitude that has the web stuck with IE.
There's a standard out there called POSIX. It's just like an HTML or CSS standard. If everyone pays attention to it, everything works better. If you fail to pay attention to it for your bit (writing files or writing web pages), it's not *my* fault if my conforming implementation (implementing the writing or the rendering) doesn't magically fix your bugs.
Re:Theory doesn't matter; practice does (Score:4, Insightful)
Apparently, you don't know real life.
Does POSIX tell you what happens if your OS crashes? That's right, it says "undefined". Oops, sorry, it's too hard a problem and we'll just leave it to you OS implementers.
Asking everyone to use fsync() to ensure their data is not lost is insane. Nobody wants to pay that kind of performance penalty unless the data is very critical.
Normal applications have a reasonable expectation that the OS doesn't crash, or doesn't crash too often for this to be a big problem. However, shit happens, and people scream loud if their data is lost BEYOND reasonable expectations.
Forget POSIX. It's irrelevant in the real world. It's exactly this pragmatic attitude that brought Linux to its current state.
Re:Theory doesn't matter; practice does (Score:4, Insightful)
Apparently, you don't know how to *deal* with real life.
POSIX *does* tell you what happens if your OS crashes. It says "as an application developer, you cannot rely on things in this instance." It also provides mechanisms for successfully dealing with this scenario.
As for fsync() being a performance issue, you can't have your cake and edit it too. If you don't want to pay a performance penalty, you can lose data. Ext4 simply only imparts that penalty to those applications that say they need it, and thereby gives a performance boost to others who are, due to their code, effectively saying "I don't particularly care about this data" - or more specifically, "I can accept a loss risk with this data."
Normal applications have a reasonable expectation that the OS doesn't crash, yes. And usually it doesn't. Out of all the installs out there... how often is this happening? Not very. They've made a performance-reliability tradeoff, and as with any risk... sometimes it's the bad outcome that occurs. If they don't want that to happen, they need to take steps to reduce that risk- and the correct way to do that has always been available in the API.
As for forgetting POSIX... it's the basis of all unix cross-platform code. It's what allows code to run on linux, BSD, Solaris, MacOS X, embedded platforms, etc, without (mostly) caring which one they're on. It's *highly* relevant to the real world because it's the API that most programs not written for windows are written to. Pull up a man page for system calls and you'll see the POSIX standard referenced- that's where they all came from.
Saying "Forget POSIX. It's irrelevant in the real world." is like people saying a few years ago "Forget CSS standards. It's irrelevant in the real world." And you know what? That's the attitude that's dying out in the web as everything moves toward standards compliance. So it is in this case with the filesystem.
Re: (Score:3, Insightful)
"The machine crashed" isn't a common situation. In fact, it's a very, very rare situation.
Excuses are false. This is a severe flaw. (Score:3, Interesting)
This is all a bunch of BS! Delayed writes should lose at most the data written between the last commit and the crash. Ext4 loses complete files (even their content from before the write).
ZFS can do it: it writes the whole transaction to disk or rolls back in case of a crash, so why not ext4? These lame excuses that this is totally "expected" behavior is a shame!
Re:Excuses are false. This is a severe flaw. (Score:4, Informative)
Delayed writes should lose at most the data written between the last commit and the crash. Ext4 loses complete files (even their content from before the write).
You seem to misunderstand that's *exactly* what is happening.
KDE is *DELETING* all of its config files, then writing them back out again in two operations.
Three states now exist, the 'old old' state, where the original file existed, the 'old' state, where it is empty, and the 'new' state where it is full again.
The problem is getting caught between step #2 and step #3, which on ext3 was mostly mitigated by the write delay being only 5 seconds.
KDE is *broken* to delete a file and expect it to still be there if it crashes before the write.
Re: (Score:3, Insightful)
Re:Excuses are false. This is a severe flaw. (Score:5, Informative)
Nope, it writes a new file and then renames it over the old file, as rename() says it is an atomic operation - you either have the old file or the new file. What happens with ext4 is that you get the new file except for its data. While that may be correct from a POSIX-lawyer point of view, it is still heavily undesirable.
rename and fsync (Score:4, Insightful)
"Nope, it writes a new file and then renames it over the old file, as rename() says it is an atomic operation - you either have the old file or the new file. What happens with ext4 is that you get the new file except for its data. "
Two things are happening:
(1) KDE is writing a new inode.
(2) KDE is renaming the directory entry for the inode, replacing an existing inode in the process.
KDE never calls fsync(2), so the data from step one is not committed to be on disk. Thus, KDE is atomically replacing the old file with an uncommitted file. If the system crashes before it gets around to writing the data, too bad.
EXT4 isn't "broken" for doing this, as endless people have pointed out. The spec says if you don't call fsync(2) you're taking your chances. In this case, you gambled and lost.
KDE isn't "broken" for doing this unless KDE promised never to leave the disk in an inconsistent state during a crash. That's a hard promise to keep, so I doubt KDE ever made it.
A system crash means loss of data not committed to disk. A system crash frequently means loss of lots of other things, too. Unsaved application data in memory which never even made it to write(2). Process state. Service availability. Jobs. Money. System crashes are bad; this should not be news.
The database suggestion some are making comes from the fact that if you want on-disk consistency *and* good performance, you have to do a lot of implementation work, and do things like batching your updates into calls to write(2) and fsync(2). Otherwise, performance will stink. This is a big part of what databases do.
As someone else suggested, it's perfectly easy to make writes atomic in most filesystems. Mount with the "sync" option. Write performance will absolutely suck, but you get a never-loses-uncommitted-data filesystem.
Re: (Score:3, Informative)
ZFS can do it: it writes the whole transaction to disk or rolls back in case of a crash, so why not ext4? These lame excuses that this is totally "expected" behavior is a shame!
I read the FA, and it actually really does look like the applications are simply using stupidly risky practices:
These applications are truncating the file before writing (i.e., opening with O_TRUNC), and then assuming that the truncation and any following write are atomic. That's obviously not true -- what happens if your system is very busy (not surprising in the startup flurry which is apparently where this stuff happens), the process doesn't get scheduled for a while after the truncate (but before the
not mounted sync,dirsync? (Score:5, Interesting)
When I write data to a file (either through a descriptor or FILE *), I expect it to be stored on media at the earliest practical moment. That doesn't mean "immediately", but 150 seconds is brain-damaged. Regardless of how many reads are pending, writes must be scheduled, at least in proportion of the overall file system activity, or you might as well run on a ramdisk.
While reading/writing a flurry of small files at application startup is sloppy from a system performance point of view, data loss is not the application developers' fault, it's the file system designers'.
BTW, I write drivers for a living, have tuned SMD drive format for performance, and written microkernels, so this is not a developer's rant.
Alarmist and ignorant article - not a "problem" (Score:5, Insightful)
*No* modern, desktop-usable file systems today guarantee new files to be there if the power goes out, except if the application specifically requests it with O_SYNC, fsync() and similar techniques (and then only "within reason" - actually at most they guarantee that the file system will recover itself, not the data). It is universally true - for UFS (the Unix file system), ext2/3, JFS, XFS, ZFS, reiserfs, NTFS, everything. This can only be a topic for inexperienced developers who don't know the assumptions behind the systems they use.
The same is true for data ordering - only by separating the writes with fsync() can one piece of data be forced to be written before another.
This is an issue of great sensitivity for databases. See for example:
That there exist reasonably reliable databases is a testament that it *can* be done, with enough understanding and effort, but is not something that's automagically available.
Re:If in other "modern" filesystems.... (Score:4, Insightful)
Re:If in other "modern" filesystems.... (Score:4, Insightful)
I'll take "I didn't lose my data" over "ext4 runs 1.5x faster than ext3," thank you. What use is performance to me if I have to be absolutely certain that it won't crash, or I lose my (in my very high performance filesystem) data?
Also, ext4 is touted as having additional reliability checks to keep up with scalability, etc., not as less reliable at the expense of performance.
Reliability
As file systems scale to the massive sizes possible with ext4, greater reliability concerns will certainly follow. Ext4 includes numerous self-protection and self-healing mechanisms to address this.
(from Anatomy of ext4 [ibm.com])
I can only imagine the response if tests were done on Windows 7 beta that showed a crash after this or that resulted in loss of data. :)
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
Nothing- except that it's not in the spec.
POSIX is like a contract. KDE is breaking the contract and then whining about it to ext4- which isn't breaking the contract. Just as in a court, KDE here doesn't have much of a leg to stand on.
Re:Why SHOULD applications have to assume bad FSs? (Score:4, Informative)
What's wrong with "after a file is closed, it's synced to disk"?!?
What, you want people to have to delay/stagger/coordinate their file closes in order to avoid overloading the filesystem? That is the wrong approach. close() just means that the application is done with the file. The sync calls are not a joke; they are there precisely because close() already has an entirely sensible but different semantics. Anybody who wants close to also sync can code it that way without problem. Anybody else probably does not want this behaviour in the first place.
This is not hidden in any way. A simple "man close" not only warns of this, it also refers the reader to the fsync call. Anybody getting bitten by this did not do their homework.