Data Storage IT Linux

Btrfs Is Getting There, But Not Quite Ready For Production 268

Posted by timothy
from the delicious-on-popcrnfs dept.
An anonymous reader writes "Btrfs is the next-gen filesystem for Linux, likely to replace ext3 and ext4 in coming years. Btrfs offers many compelling new features and development proceeds apace, but many users still aren't sure whether it's 'ready enough' to entrust their data to. Anchor, a webhosting company, reports on trying it out, with mixed feelings. Their opinion: worth a look-in for most systems, but too risky for frontline production servers. The writeup includes a few nasty caveats that will bite you on serious deployments."
This discussion has been archived. No new comments can be posted.
  • Re:Happy with XFS (Score:4, Insightful)

    by h4rr4r (612664) on Friday April 26, 2013 @10:18AM (#43555727)

    It also has none of the features that make Btrfs exciting and modern.

    XFS is fine, and so is Ext3/Ext4, but Linux needs a modern filesystem.

  • Re:ZFS (Score:5, Insightful)

    by h4rr4r (612664) on Friday April 26, 2013 @10:26AM (#43555809)

    It will be ready for production when it can be distributed with the kernel.

    Do you really want to depend on an out of tree FS?

  • Re:ZFS (Score:5, Insightful)

    by h4rr4r (612664) on Friday April 26, 2013 @10:33AM (#43555915)

    Correct sir.
    My point still stands though. Even though the limitation keeping it from being seriously considered for production is caused by a legal issue not a technical one.

  • Re:Why? (Score:5, Insightful)

    by h4rr4r (612664) on Friday April 26, 2013 @10:37AM (#43555975)

    ZFS is outside the kernel tree. That is not an ideological issue, but a practical one. It means updates will not come through the normal channels, it means kernel updates from normal channels could break it, and it is not getting the attention from the kernel devs that a filesystem should get.

    ZFS on Linux probably has less testing than Btrfs at this point, and nearly no real-world testing. Just because the Solaris ZFS is great and the BSD port is coming along means nothing for the stability and correctness of the Linux port.

    If you want to use a different OS, then this entire discussion is worthless. You might as well suggest switching everything to OS X and using HFS+.

  • Re:ZFS (Score:2, Insightful)

    by Anonymous Coward on Friday April 26, 2013 @10:39AM (#43555999)

    It will be ready for production when it can be distributed with the kernel.

    Do you really want to depend on an out of tree FS?

    That's why the fileserver runs FreeBSD. Has other benefits, too.

  • by h4rr4r (612664) on Friday April 26, 2013 @10:50AM (#43556169)

    Ext3 is still chugging along and doing what you want. A filesystem that sacrifices everything for stability.

    Not everyone has the same wants and needs. Lots of competing filesystems is a good thing; it leads to a marketplace of ideas. Your "let's pick one and force everyone to suffer with our choice" approach just leads to stagnation and even worse results.

  • Re:Happy with XFS (Score:5, Insightful)

    by bored (40072) on Friday April 26, 2013 @11:10AM (#43556483)

    No, that's FUD and/or misunderstanding on your part.

    "data=ordered" is ext3/4's name for "don't expose stale data on a crash," something which XFS has never done,

    Actually, I think you're the one who doesn't understand how a journaling filesystem works. The problem with XFS has been that it only journals metadata, and the data blocks associated with that metadata are not synchronized with the metadata updates (delayed allocation and all that). This means the metadata (filenames, sizes, etc.) will be correct as of the last journal update flushed to media, but the data referenced by that metadata may not be.

    A filesystem that either orders its metadata/data updates against the disk with proper barriers, or journals the data alongside the metadata, doesn't have this problem. The filesystem _AND_ its data remain in a consistent state.

    So, until you understand this basic idea, don't go claiming you know _ANYTHING_ about filesystems.
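The ordering discipline described above has a familiar user-space analogue: write the data, force it to media, and only then publish the name that points at it. This is a minimal illustrative sketch of that write-then-rename pattern (the function name `atomic_write` and its layout are my own, not anything from the filesystems discussed), not Btrfs's or ext4's actual on-disk logic:

```python
import os
import tempfile


def atomic_write(path, data):
    """Publish `data` at `path` so a crash never exposes stale contents.

    The ordering mirrors data=ordered journaling: data blocks are
    written and fsync'd *before* the rename makes them reachable, so
    the name (metadata) can never point at unwritten data.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)            # 1. write the data blocks
            f.flush()
            os.fsync(f.fileno())     # 2. force them to media
        os.replace(tmp, path)        # 3. only now update the metadata
    except BaseException:
        os.unlink(tmp)
        raise
```

A metadata-only journal is like doing step 3 durably while letting steps 1-2 trickle out later: after a crash the name exists and has the right size, but the bytes behind it may be garbage.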

  • by isopropanol (1936936) on Friday April 26, 2013 @11:20AM (#43556631) Journal

    Also, read the article. The authors were experimenting and came across some bugs in some pretty hairy edge cases (hundreds of simultaneous snapshots, large disk array suddenly becoming full, etc) that did not cause data loss. They eventually decided not to use BTRFS on one type of system but are using it on others.

    To me, the article was a good thing... but I would have preferred if it had been worded as "here are some edge-case bugs that need fixing before BTRFS is usable in our scenario" rather than presenting them as show stoppers, because they are not likely to be show stoppers for anyone who isn't implementing the exact same scenario.

    Also, it sounds like they should jitter the start time of the backups...
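Jittering backup start times is a one-liner in practice: each host adds a random offset before kicking off its job, so hundreds of snapshots don't land on the filesystem at the same instant. A minimal sketch (the function name and the 300-second window are illustrative assumptions, not anything from the article):

```python
import random
import time


def jittered_delay(window_seconds, rng=None):
    """Return a uniform random delay in [0, window_seconds].

    Call this before starting a scheduled job so that N hosts fired
    by the same cron line spread their load across the window instead
    of hammering shared storage simultaneously.
    """
    rng = rng or random.Random()
    return rng.uniform(0.0, window_seconds)


def run_backup_with_jitter(backup_fn, window_seconds=300.0):
    """Sleep a random amount inside the window, then run the backup."""
    time.sleep(jittered_delay(window_seconds))
    backup_fn()
```

Passing a seeded `random.Random` makes the offset reproducible per host if you derive the seed from the hostname.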

  • by Tough Love (215404) on Friday April 26, 2013 @11:41AM (#43556975)

    Bugs are like roaches. If you see one, you can be sure there are many others hiding in the cracks. There is no room for any bugs at all in a filesystem to which you will trust your essential data.

  • by Harik (4023) <Harik@chaos.ao.net> on Friday April 26, 2013 @12:46PM (#43558049)

    It's an issue with any CoW filesystem being full - in order to delete a file, you need to make a new copy of the metadata with the file removed, then a copy of the entire tree leading up to that node, and finally a copy of the root. Only once the new root is committed can you free up the no-longer-in-use blocks - at least, as long as they're not still referenced by another snapshot.

    The alternative is to rewrite the metadata in place and just cross your fingers and hope you don't suffer a power loss at the wrong time, in which case you end up with massive data corruption.

    I've filled up large (for home use) BTRFS filesystems before - 6-10 TB. The code does a fairly good job of refusing to create new files that would consume the last remaining space, so it leaves room for the metadata CoW needed to delete. The problem may come from a particularly large tree that requires more nodes to be allocated on a change than were reserved - in which case the reservation can be tuned.

    BTRFS isn't considered 'done' by any means. It was only in the 3.9 kernel that the new raid5/6 code landed, and other major features (such as dedup) are still pending. It's actually very encouraging that a work-in-progress filesystem is as solid as it is already.
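The copy-up-to-the-root deletion described above is classic path copying: only the nodes between the changed leaf and the root are duplicated, untouched subtrees are shared by reference, and the old root remains a valid snapshot of the pre-delete tree. A toy in-memory sketch of the idea (the `Node`/`cow_delete` names are mine; real Btrfs B-trees and extent accounting are far more involved):

```python
class Node:
    """A trivially simple tree node: a name plus named children."""

    def __init__(self, name, children=None):
        self.name = name
        self.children = dict(children or {})


def cow_delete(root, path):
    """Delete the entry at `path` without mutating the old tree.

    Every node from the root down to the deleted entry's parent is
    copied (the "path copy"); siblings off the path are shared with
    the old tree, and the old root still describes the pre-delete
    snapshot. This is why deleting on a full CoW filesystem still
    needs free space: the copied path must be written somewhere
    before the old blocks can be released.
    """
    new = Node(root.name, root.children)   # copy this node on the path
    if len(path) == 1:
        del new.children[path[0]]          # drop the entry itself
    else:
        new.children[path[0]] = cow_delete(root.children[path[0]], path[1:])
    return new
```

Committing the new root atomically is what makes the whole operation crash-safe: until that single write lands, the old root (and every block it references) is still the authoritative tree.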

  • Re:ZFS (Score:4, Insightful)

    by wagnerrp (1305589) on Friday April 26, 2013 @02:25PM (#43559589)
    Anyone using nVidia GPUs for compute cards in a data center is using the closed nVidia drivers. Anyone not using them for that purpose likely doesn't even have any nVidia hardware in the first place.
