Btrfs Is Getting There, But Not Quite Ready For Production

An anonymous reader writes "Btrfs is the next-gen filesystem for Linux, likely to replace ext3 and ext4 in coming years. Btrfs offers many compelling new features and development proceeds apace, but many users still aren't sure whether it's 'ready enough' to entrust their data to. Anchor, a webhosting company, reports on trying it out, with mixed feelings. Their opinion: worth a look-in for most systems, but too risky for frontline production servers. The writeup includes a few nasty caveats that will bite you on serious deployments."
  • Read their website (Score:5, Informative)

    by Anonymous Coward on Friday April 26, 2013 @09:11AM (#43555657)

    It says "experimental." They appreciate you helping them test their file system out. I appreciate it too, so please do. But remember that you are testing an experimental filesystem. When it eats your data, make sure you report it and have backups.

    • by Anonymous Coward on Friday April 26, 2013 @09:31AM (#43555895)

      Ugh, I'm really sorry about this post, Slashdot. I really didn't think it was going to a "First post." What I really meant to post was

      OMFG fr1st psot!!!! APK!! crazy host file conspiracy! /etc/mod_me_down

    • by pipatron ( 966506 ) <pipatron@gmail.com> on Friday April 26, 2013 @09:41AM (#43556029) Homepage

      Every file system is (or should be) labeled "experimental" in a way. The long answer from the btrfs FAQ is pretty good, and makes some sense:

      Long answer: Nobody is going to magically stick a label on the btrfs code and say "yes, this is now stable and bug-free". Different people have different concepts of stability: a home user who wants to keep their ripped CDs on it will have a different requirement for stability than a large financial institution running their trading system on it. If you are concerned about stability in commercial production use, you should test btrfs on a testbed system under production workloads to see if it will do what you want of it. In any case, you should join the mailing list (and hang out in IRC) and read through problem reports and follow them to their conclusion to give yourself a good idea of the types of issues that come up, and the degree to which they can be dealt with. Whatever you do, we recommend keeping good, tested, off-system (and off-site) backups.

      • by Bengie ( 1121981 ) on Friday April 26, 2013 @10:38AM (#43556921)
        My cousin said that when he had to go "FS shopping" for his research data center, they had some requirements, most notably that the filesystem be in use by several enterprises, each storing at least 1PB of data on it, with no critical issues in 5 years.

        He said the only FS that fit the bill was ZFS. His team could not find an enterprise company that stored at least 1PB of data on ZFS and had had a non-user-caused critical problem within the past 5 years. That was many years ago, and he has not had a single issue with his multi-PB storage, which is used by hundreds of departments.

        ZFS is not perfect, but it sets a very high bar.
        • by Zero__Kelvin ( 151819 ) on Friday April 26, 2013 @11:08AM (#43557341) Homepage
          Did your cousin also find out what exact hardware and exact code were used? If my friend has had no problems with filesystem $FS and then I use it with different hardware and code implementing it, then there is still a significant chance that I will have trouble that he did not. Filesystems all work perfectly, because they are conceptual. It is the implementation that may or may not be stable.
    • by Tarlus ( 1000874 )

      And make sure those backups aren't also on a btrfs volume.

    • by isopropanol ( 1936936 ) on Friday April 26, 2013 @10:20AM (#43556631) Journal

      Also, read the article. The authors were experimenting and came across some bugs in some pretty hairy edge cases (hundreds of simultaneous snapshots, a large disk array suddenly becoming full, etc.) that did not cause data loss. They eventually decided not to use BTRFS on one type of system but are using it on others.

      To me, the article was a good thing... But I would have preferred if it had been worded as "here are some edge-case bugs that need fixing before BTRFS is used in our scenario," rather than as show stoppers, because these are not likely show stoppers for anyone who isn't implementing the exact same scenario.

      Also, it sounds like they should jitter the start time of the backups...

      • Re: (Score:2, Insightful)

        by Tough Love ( 215404 )

        Bugs are like roaches. If you see one, you can be sure there are many others hiding in the cracks. There is no room for any bugs at all in a filesystem to which you will trust your essential data.

        • > There is no room for any bugs at all in a filesystem to which you will trust your essential data.

          Your idealism is admirable, but it is not practical :-(

          * So you are able to guarantee you can write 100% bug-free code?

          * AND it can deal with hardware failures such as bad memory?

          I have a bridge to sell you :-)

          • I won't buy your bridge, nor will I move my systems away from Ext4 for the time being. BTW, e2fsck does a great job of repairing filesystems that have been corrupted (sometimes massively) by hardware failures of various kinds. This is an essential trick that ZFS and Btrfs have yet to learn.

      • by AvitarX ( 172628 )

        Being unfixable when full is a pretty big show stopper IMO.

        • by Harik ( 4023 ) <Harik@chaos.ao.net> on Friday April 26, 2013 @11:46AM (#43558049)

          It's an issue with any CoW filesystem being full - in order to delete a file, you need to make a new copy of the metadata that has the file removed, then a copy of the entire tree leading up to that node, and finally a copy of the root - and once the root is committed, you can free up the no-longer-in-use blocks. At least, as long as they're not still referenced by another snapshot.

          The alternative is to rewrite the metadata in place and just cross your fingers and hope you don't suffer a power loss at the wrong time, in which case you end up with massive data corruption.

          I've filled up large (for home use) BTRFS filesystems before - 6-10 TB. The code does a fairly good job of refusing to create new files that would fill the last remaining bit, so it leaves room for metadata CoW to delete. The problem may come from having a particularly large tree that requires more nodes to be allocated on a change than were reserved - in which case the reservation can be tuned.
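
          To make that concrete, here is a minimal, conceptual sketch in C (an editorial illustration, not actual btrfs code) of why a delete on a nearly full CoW filesystem still needs free blocks: the changed leaf and every node above it are written to new blocks, the new root is committed, and only then can the old blocks be released - unless a snapshot still pins them.

          #include <stdio.h>
          #include <stdlib.h>

          struct block { int refs; struct block *down; };

          static int free_blocks = 4;              /* space left on an almost-full device */

          static struct block *alloc_block(void)
          {
              if (free_blocks == 0)
                  return NULL;                     /* ENOSPC: no room to record anything  */
              free_blocks--;
              struct block *b = calloc(1, sizeof *b);
              b->refs = 1;
              return b;
          }

          static void put_block(struct block *b)
          {
              if (b && --b->refs == 0) {
                  free(b);
                  free_blocks++;
              }
          }

          int main(void)
          {
              /* Existing tree: root -> leaf. A snapshot also references the leaf.      */
              struct block *old_leaf = alloc_block();
              struct block *old_root = alloc_block();
              old_root->down = old_leaf;
              old_leaf->refs++;                    /* pinned by a snapshot                */

              /* Deleting a file means rewriting the leaf AND its whole path to the
               * root into new blocks before anything old can be released.              */
              struct block *new_leaf = alloc_block();
              struct block *new_root = alloc_block();
              if (!new_leaf || !new_root) {
                  puts("ENOSPC: not even enough space to delete");
                  return 1;
              }
              new_root->down = new_leaf;

              /* ... commit new_root atomically (superblock update) ...                 */

              /* Only now can the old path be dropped. The old leaf stays because the
               * snapshot still needs it, so the delete frees less than you'd expect.   */
              put_block(old_root);
              put_block(old_leaf);

              printf("free blocks after the delete: %d\n", free_blocks);   /* prints 1  */
              return 0;
          }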

          BTRFS isn't considered 'done' by any means. It was only in the 3.9 kernel that the new raid5/6 code landed, and other major features (such as dedup) are still pending. It's actually very encouraging that a work-in-progress filesystem is as solid as it is already.

      • When it comes to data safety, btrfs has been production ready for a few years already. There are issues with latency -- largely fixed -- and dealing with asinine abuse of fsync(). That's also mostly dealt with, although there's no real full fix other than fixing problematic software in the first place. There's no real way to have efficient cow/etc and fast fsync together, but you don't need the latter if the filesystem can do transactions for you.

        So we have a filesystem with a number of safety features b

    • by g1zmo ( 315166 )
      Netgear's consumer-level NAS products are now using btrfs [readynas.com]. This being the Internet and all, folks are complaining in forums and on Facebook about... well, if not about this, then I guess it would be something else.
  • Happy with XFS (Score:3, Informative)

    by zidium ( 2550286 ) on Friday April 26, 2013 @09:13AM (#43555689) Homepage

    I've been happily using the XFS file system since the early-to-mid-2000s and have never had a problem. It is rock solid and much faster than ext3/ext4 in my experience, tested a lot longer than Btrfs, and handles the millions and millions of small files on redditmirror.cc very effectively.

    • Re:Happy with XFS (Score:4, Insightful)

      by h4rr4r ( 612664 ) on Friday April 26, 2013 @09:18AM (#43555727)

      It also has none of the features that make Btrfs exciting and modern.

      XFS is fine, and so is Ext3/Ext4, but Linux needs a modern file system.

    • Re:Happy with XFS (Score:4, Informative)

      by bored ( 40072 ) on Friday April 26, 2013 @09:22AM (#43555763)

      You're happy with XFS because your machine has never lost power or crashed. If either of those things happened with the older versions of XFS, it was nearly a 100% guarantee you would lose data. Now I'm told it's more reliable.

      So, if you told me you had been running it for the last year and it was reliable, I would give you more credit than for claiming you have been running it for a decade and it's been reliable, because it's had some pretty serious issues, and if you didn't hit them, you're not a good test case.

      I'm still skeptical, because AFAIK, XFS still doesn't have an ordered data mode.

      • Re:Happy with XFS (Score:5, Informative)

        by MBGMorden ( 803437 ) on Friday April 26, 2013 @09:36AM (#43555951)

        You're happy with XFS because your machine has never lost power or crashed. If either of those things happened with the older versions of XFS, it was nearly a 100% guarantee you would lose data. Now I'm told it's more reliable.

        I don't know about it being more reliable. I use XFS on my RAID array (mdadm) at home. I'm running the latest version of Linux Mint (Nadia), and if I ever lose power and don't unmount that file system cleanly, it loses all recent changes to the drive (and "recent" sometimes stretches to hours ago). The drive mounts fine and nothing appears corrupted (so I guess it's not complete data loss), but any file changes (edits, additions, or deletions) are simply gone.

        It's gotten to the point where if I've just put a lot of stuff on the drive, I unmount it and then remount it just to make sure everything gets flushed to disk. If I ever get a chance to rebuild that array, it most certainly will be using something different.

      • Re:Happy with XFS (Score:5, Informative)

        by Booker ( 6173 ) on Friday April 26, 2013 @09:40AM (#43556015) Homepage

        No, that's FUD and/or misunderstanding on your part.

        "data=ordered" is ext3/4's name for "don't expose stale data on a crash," something which XFS has never done, with or without a mount option. ext3/4 also have "data=writeback" which means "DO expose stale data on a crash." XFS does not need feature parity for ill-advised options.

        Any filesystem will lose buffered and unsynced file data on a crash (http://lwn.net/Articles/457667/). XFS has made filesystem integrity and data persistence job one since before ext3 existed. Like any filesystem, it has had bugs, but implying that it was unsafe for use until recently is incorrect.
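
        For reference, that distinction is just an ext3/4 mount option; here is a minimal /etc/fstab line as a sketch (the device and mount point are placeholders, and data=ordered is already the ext4 default):

        # data=writeback would trade stale-data exposure after a crash for a bit of speed
        /dev/sdb1  /srv  ext4  defaults,data=ordered  0  2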

        I say this as someone who's been working on ext3, ext4 and xfs code for over a decade, combined.

        • Re:Happy with XFS (Score:5, Insightful)

          by bored ( 40072 ) on Friday April 26, 2013 @10:10AM (#43556483)

          No, that's FUD and/or misunderstanding on your part.

          "data=ordered" is ext3/4's name for "don't expose stale data on a crash," something which XFS has never done,

          Actually, I think you're the one who doesn't understand how a journaling file system works. The problem with XFS has been that it only journals metadata, and the data portions associated with the metadata are not synchronized with the metadata updates (delayed allocation and all that). This means the metadata portions (filename, sizes, etc.) will be correct based on the last journal update flushed to media, but the data referenced by that metadata may not be.

          A filesystem that is either ordering its metadata/data updates against a disk with proper barriers, or journaling the data alongside the metadata, doesn't have this problem. The filesystem _AND_ its data remain in a consistent state.

          So, until you understand this basic idea, don't go claiming you know _ANYTHING_ about filesystems.

          • Re:Happy with XFS (Score:5, Informative)

            by Booker ( 6173 ) on Friday April 26, 2013 @12:42PM (#43559007) Homepage

            So, until you understand this basic idea, don't go claiming you know _ANYTHING_ about filesystems.

            Without sounding like too much of a jerk, I have hundreds of commits in the linux-2.6 fs/* tree. This is what I do for a living.
            I actually do have a pretty decent grasp of how Linux journaling filesystems behave. :)

            Test your assumptions on ext4 with default mount options. Create a new file and write some buffered data to it, wait 5-10 seconds, punch the power button, and see what you get. (You'll get a 0 length file) Or write a pattern to a file, sync it, overwrite with a new pattern, and punch power. (You'll get the old pattern). Or write data to a file, sync it, extend it, and punch power. (You'll get the pre-extension size). Wait until the kernel pushes data out of the page cache to disk, *then* punch power, and you'll get everything you wrote, obviously.

            XFS and ext4 behave identically in all these scenarios. Maybe you can show me a testcase where XFS misbehaves in your opinion? (bonus points for demonstrating where XFS actually fails any posix guarantee).

            Yes, ext3/4 have data=journal - but it's not the default, and with ext4 that option disables delalloc and O_DIRECT capabilities. 99% of the world doesn't run that way; it's slower for almost all workloads and, TBH, is only lightly tested.

            Yes, ext3's data=ordered pushes out tons of file data on every journal commit. That has serious performance implications, but it does shorten the window for buffered data loss to the journal commit time.

            You want data persistence with a posix filesystem? Use the proper data integrity syscalls, that's all there is to it.
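
            As a rough sketch of what "the proper data integrity syscalls" means in practice (the directory and file name here are made up for the example): write the data, fsync() the file, and for a newly created file fsync() its containing directory as well.

            #define _GNU_SOURCE
            #include <fcntl.h>
            #include <stdio.h>
            #include <sys/stat.h>
            #include <unistd.h>

            int main(void)
            {
                const char buf[] = "important record\n";

                mkdir("/tmp/demo", 0755);   /* ignore EEXIST for brevity */

                int fd = open("/tmp/demo/record.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
                if (fd < 0) { perror("open"); return 1; }

                if (write(fd, buf, sizeof buf - 1) != (ssize_t)(sizeof buf - 1)) {
                    perror("write"); return 1;
                }

                /* Flush the file's data and metadata to stable storage;
                 * fdatasync(fd) is the cheaper variant when only the data matters. */
                if (fsync(fd) < 0) { perror("fsync"); return 1; }
                close(fd);

                /* A newly created file's directory entry must be durable too,
                 * so fsync the containing directory as well. */
                int dirfd = open("/tmp/demo", O_RDONLY | O_DIRECTORY);
                if (dirfd < 0) { perror("open dir"); return 1; }
                if (fsync(dirfd) < 0) { perror("fsync dir"); return 1; }
                close(dirfd);
                return 0;
            }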

            • Re:Happy with XFS (Score:4, Interesting)

              by bored ( 40072 ) on Friday April 26, 2013 @02:20PM (#43560289)

              Without sounding like too much of a jerk, I have hundreds of commits in the linux-2.6 fs/* tree. This is what I do for a living.

              Well, then you're part of the problem. Your idea that you have to be either correct or fast is, sadly, sort of wrong. It's possible to be correct without completely destroying performance. I have a few commits in the kernel as well, mostly to fix completely broken behavior (my day job in the past was working on an enterprise Unix). So, I do understand filesystems too. Lately, my job has been to replace all that garbage, from the SCSI midlayer up, so that a small industry-specific "application" can make guarantees about the data being written to disk while still maintaining many GB/sec of IO. The result actually makes the whole stack look really bad.

              So, I'm sure you're aware that on Linux, if you use proper POSIX semantics (fsync() and friends), the performance is abysmal compared to the alternatives. This is mostly because of the "broken" fencing behavior in the block layer (which has recently gotten better but is still far from perfect). Our changes depend on 8-10 year old features available in SCSI to make guarantees that aren't available everywhere, but they penalize devices which don't support modern tagging, ordering and fencing semantics, rather than ones that do.

              Generally on Linux, application developers are stuck either accepting an orders-of-magnitude performance loss or playing games in an attempt to second-guess the filesystem. Neither is a good compromise, and it's sort of shameful.

              Maybe it's time to admit Linux needs a filesystem that doesn't force people to choose between abysmal performance and no guarantees about integrity.

      • by Anonymous Coward

        The problem with "XFS" eating data wasn't with XFS - it was with the Linux devmapper ignoring filesystem barrier requests. [nabble.com]

        Gotta love this code:

        Martin Steigerwald wrote:
        > Hello!
        >
        > Are write barriers over device mapper supported or not?

        Nope.

        see dm_request():

        /*
         * There is no use in forwarding any barrier request since we can't
         * guarantee it is (or can be) handled by the targets correctly.
         */
        if (unlikely(bio_barrier(bio))) {
                bio_endio(bio, -EOPNOTSUPP);
                return 0;
        }

        Who's the clown who thought THAT was acceptable? WHAT. THE. FUCK?!?!?!

        And it wasn't just devmapper that had such a childish attitude towards file system barriers [lwn.net]:

        Andrew Morton's response tells a lot about why this default is set the way it is:

        Last time this came up lots of workloads slowed down by 30% so I dropped the patches in horror. I just don't think we can quietly go and slow everyone's machines down by this much...

        There are no happy solutions here, and I'm inclined to let this dog remain asleep and continue to leave it up to distributors to decide what their default should be.

        So barriers are disabled by default because they have a serious impact on performance. And, beyond that, the fact is that people get away with running their filesystems without using barriers. Reports of ext3 filesystem corruption are few and far between.

        It turns out that the "getting away with it" factor is not just luck. Ted Ts'o explains what's going on: the journal on ext3/ext4 filesystems is normally contiguous on the physical media. The filesystem code tries to create it that way, and, since the journal is normally created at the same time as the filesystem itself, contiguous space is easy to come by. Keeping the journal together will be good for performance, but it also helps to prevent reordering. In normal usage, the commit record will land on the block just after the rest of the journal data, so there is no reason for the drive to reorder things. The commit record will naturally be written just after all of the other journal log data has made it to the media.

        I love that italicized part. "OMG! Data integrity causes a performance hit! Screw data integrity! We won't be able to brag that we're faster than Solaris!"

        See also http://www.redhat.com/archives/rhl-dev [redhat.com]

        • by h4rr4r ( 612664 )

          Data integrity is fine, if you are not running XFS.

          Why should everyone suffer a 30% performance hit to make the couple of oddballs running XFS happy?

      • Re:Happy with XFS (Score:5, Interesting)

        by Kz ( 4332 ) on Friday April 26, 2013 @10:37AM (#43556909) Homepage

        You're happy with XFS because your machine has never lost power or crashed. If either of those things happened with the older versions of XFS, it was nearly a 100% guarantee you would lose data. Now I'm told it's more reliable.

        It _is_ quite reliable, even in the face of hardware failure.

        Several years ago, I hit the 8TB limit of ext3 and had to migrate to a bigger filesystem. ext4 wasn't ready back then (and even today it's not easy to use on big volumes). I had already had bad experiences with reiserfs (which was standard on SuSE), and the "you'll lose data" warnings in the XFS docs made me nervous. It was obviously designed to work on very high-end hardware, which I couldn't afford.

        So, I did extensive torture testing: hundreds of pull-the-plug situations on the host, storage box, and SAN switch, with tens of processes writing thousands of files into million-file directories. It was a bloodbath.

        When the dust settled, ext3 was the best by far, never losing more than 10 small files in the worst case; over 70% of the cases recovered cleanly. XFS was slightly worse: never more than 16 lost files and roughly 50% clean recoveries. ReiserFS was really bad, always losing 50-70 files or more and sometimes killing the volume. JFS didn't lose the volume, but the lost-file count never went below 130, and was sometimes several hundred.

        Needless to say, I switched to XFS and haven't lost a single byte yet. And yes, there have been a few hardware failures that triggered scary rebuilding tasks, but they completed cleanly.

        • by gmack ( 197796 )

          XFS is mostly reliable, but as I found out with several PCs, if it gets shut off at the wrong time it will need a disk repair. Then you're in for some fun, because the repair utility doesn't work at all on a mounted FS (even one mounted read-only), meaning that to repair a damaged XFS volume you will need to use a boot disk.

    • by Hatta ( 162192 )

      XFS doesn't checksum, support copy-on-write, etc.

      • by jabuzz ( 182671 )

        On the other hand, the code was first released as production-ready nearly 20 years ago. Of all the current Linux file systems, XFS has the best performance, the best scalability, and the best stability.

        Want to put 100TB of data on btrfs? Be my guest.

    • I've been using it for a long time, too; it's a perfectly respectable choice, and if I had to use it for ten more years, that would be OK.

      However, particularly for back-up systems, I am ready for snapshots and block-level deduplication. I tried to deploy something like this with XFS over LVM a few years ago, but discovered that the write performance of LVM snapshots degrades rapidly when there are a lot of them, and it helps a lot if you can guess the size in advance, which is hard. There's also a hard limi

  • I think we need to talk about the oracle in the woodpile - i.e., Oracle. BTRFS is an Oracle project. What happens when it goes the way of MySQL? Will Monty Wideanus appear on a white steed to save us?

  • ZFS (Score:5, Informative)

    by 0100010001010011 ( 652467 ) on Friday April 26, 2013 @09:21AM (#43555751)

    Meanwhile ZFS announced that it was ready for production [theregister.co.uk] last month.

    http://zfsonlinux.org/ [zfsonlinux.org]

    • Re:ZFS (Score:5, Insightful)

      by h4rr4r ( 612664 ) on Friday April 26, 2013 @09:26AM (#43555809)

      It will be ready for production when it can be distributed with the kernel.

      Do you really want to depend on an out of tree FS?

      • Re:ZFS (Score:4, Interesting)

        by Bill_the_Engineer ( 772575 ) on Friday April 26, 2013 @09:30AM (#43555883)
        An incompatible license prevents ZFS from being included with the kernel. This is why Btrfs exists, and it explains Oracle's involvement with both.
        • Re:ZFS (Score:5, Insightful)

          by h4rr4r ( 612664 ) on Friday April 26, 2013 @09:33AM (#43555915)

          Correct, sir.
          My point still stands, though, even if the limitation keeping it from being seriously considered for production is a legal issue rather than a technical one.

      • It will be ready for production when it can be distributed with the kernel.

        ZFS is not included in the Linux kernel because it is not GPL compatible.
        Licensing has nothing to do with how production-ready a product is. ZFS is significantly [rudd-o.com] more mature than btrfs.

        • by h4rr4r ( 612664 )

          Yes, but the statement is still true.

           It means you will not get updates via normal channels, or normal channel updates might break it. That simply is not something most datacenters want to deal with. ZFS is more mature on Solaris and BSD; on Linux today it might be ahead of btrfs, but neither is production ready in the sense that datacenters mean it.

          • by Guspaz ( 556486 )

            No, the statement is false. There are no licensing issues to including the zfsonlinux kernel module with distros. The precedent on kernel module licensing has been long set by things like nVidia drivers, and zfs uses a free software license that enables distribution. Some distros like Gentoo already do include zfsonlinux, and I imagine more will in the future. On these distros, you WILL get updates via normal channels.

            If you define "distributed with the kernel" to say "this distribution includes both the ke

            • by h4rr4r ( 612664 )

              Are any of these Enterprise distros?
              I don't know of any of those that distribute any of the kernel modules we are speaking of.

              Gentoo is Linux for ricers.

              • by Guspaz ( 556486 )

                Gentoo today, who knows what else tomorrow. I'm not a fan of Gentoo, but the fact that ZFS is being included in any distros shows that claims that licensing prevents distro inclusion are FUD. You can probably make a bunch of legitimate arguments about being out of tree, or kernel taint when you load it, or who knows what else, but ability to distribute isn't one of the problems.

                • by h4rr4r ( 612664 )

                  Or that Gentoo is making a big mistake.

                  I would be happy to see it in Debian; that would get rid of any doubt I had.

          • It means you will not get updates via normal channels, or normal channel updates might break it. That simply is not something most datacenters want to deal with.

            But it's something datacenters will have to deal with anyway. After all, there's no guarantee that any update won't break something, so they'll need an internal update server that only gets vetted and tested updates - and at that point it's not much of a bother to include out-of-tree patches, assuming of course that they give a significant advantag

            • by h4rr4r ( 612664 )

              Sure, but the odds of breakage are lower and you don't lose support in that case anyway.

              Tell RH support you are using a non-supported FS and watch them hang up on you.

        • The reason *why* ZFS doesn't ship with the kernel is mostly irrelevant. The fact remains--in order to use ZFS in Linux, you have to roll your own custom system. This is not a good thing for production.

          • by Guspaz ( 556486 )

            People keep saying stuff like this, but it's just FUD. zfsonlinux exists as a kernel module, this isn't zfs-fuse anymore. Installing it on a common distro like Debian that doesn't include it via the package management system requires two commands (add repo, install package). Some distributions like Gentoo already include zfsonlinux as part of the distro, and this will undoubtedly increase as time goes on.

            There are no more legal or technical problems with zfsonlinux than something like the nVidia drivers. Le

            • by h4rr4r ( 612664 )

              That first command is your problem.
              If it's not in the normal repos, it's not getting installed.

              No one installs the closed nVidia drivers on production machines.

              • by Guspaz ( 556486 )

                It's in the normal Gentoo repos. I recall it being in another distro too, but I don't remember the name (started with an S?). As it continues to mature, I find it likely that we'll see it included in more distros.

              • Re:ZFS (Score:4, Insightful)

                by wagnerrp ( 1305589 ) on Friday April 26, 2013 @01:25PM (#43559589)
                Anyone using nVidia GPUs for compute cards in a data center is using the closed nVidia drivers. Anyone not using them for that purpose likely doesn't even have any nVidia hardware in the first place.
            • You missed two steps, the full process is:

              1: Add non-standard repo
              2: Kiss distro maintainer support goodbye.
              3: Install package
              4: Kiss kernel developer support goodbye (kernel tainted: disabling lock debugging)

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        It will be ready for production when it can be distributed with the kernel.

        Do you really want to depend on an out of tree FS?

        That's why the fileserver runs FreeBSD. Has other benefits, too.

    • I may have to settle for this; I really need a modern filesystem that supports deduplication, and my experiments with btrfs in early 2012 didn't go so well:

      http://slashdot.org/journal/285321/my-btrfs-dedupe-script [slashdot.org]

  • Those distros, such as SuSE Linux Enterprise Server, that claim it is production-ready and offer it in the installer should be shunned. Don't entrust your data to them.

  • Actually I'm being serious. This is why I come to /.
  • I still prefer XFS ;-).

  • by sshir ( 623215 ) on Friday April 26, 2013 @10:48AM (#43557067)
    Installed Xubuntu 12.10 last October(ish) on a USB2 stick (JetFlash 32G) with Btrfs (only /boot had an EXT2 partition, no swap).

    Reason: it's a 24/7 machine, and it's a notebook - an always-spinning hard drive is a drag, since it spins up the cooling fan - so I went solid state for the primary OS drive. I needed a filesystem that spreads wear and does checksums - hence Btrfs.

    Usage - downloading stuff (to the stick itself, not the hard drive) plus some NASing. Data volume: wrapped around those 32 gigs a few times already.

    Observations so far: no problems at all.

    Other details: Had to play with the I/O scheduler (I think I settled on CFQ; interestingly, NOOP sucked). Had to install hdidle (I think), otherwise I couldn't force sda to go to sleep (a bug?).
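
    For reference, the scheduler can be pinned from a boot script by writing to the block device's sysfs queue attribute; here is a small C sketch (the device name "sda" and the choice of cfq are just examples, and it needs root):

    #include <stdio.h>

    int main(void)
    {
        /* On kernels of that era the usual choices were noop, deadline and cfq. */
        FILE *f = fopen("/sys/block/sda/queue/scheduler", "w");
        if (!f) {
            perror("open scheduler attribute");
            return 1;
        }
        if (fputs("cfq\n", f) == EOF)
            perror("write scheduler");
        fclose(f);   /* reading the file back shows the active scheduler in [] */
        return 0;
    }
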
  • by Luke_22 ( 1296823 ) on Friday April 26, 2013 @11:47AM (#43558065)

    I tried btrfs as my main laptop filesystem:

    Nice features, speed OK, but I happened to unplug the power supply by mistake, without a battery. Bad crash... I tried using btrfsck and other debug tools, even from the "dangerdon'teveruse" git branch; they just segfaulted. In the end my filesystem was unrecoverable. I used btrfs-restore, only to find out that 90% of my files had been truncated to 0... even files I hadn't touched for months...

    Now, maybe it was the compress=lzo option, or maybe I played a little too much with the repair tools (possible), but until btrfs can survive power drops without problems, and the repair tools at least do not segfault, I won't use it for my main filesystem...

    btrfs is supposed to save a consistent state every 30 seconds, so I don't understand how I messed things up that badly... Maybe the superblock was gone and btrfsck --repair borked everything, I don't know... Luckily for me: backups :)
