Slashdot Asks: How Do You Feel About Btrfs? (linuxjournal.com) 236

emil (Slashdot reader #695) shares an article from Linux Journal revisiting the saga of the btrfs file system (initially designed at Oracle in 2007): The btrfs filesystem has taunted the Linux community for years, offering a stunning array of features and capability, but never earning universal acclaim. Btrfs is perhaps more deserving of patience, as its promised capabilities dwarf all peers, earning it vocal proponents with great influence. Still, [while] none can argue that btrfs is unfinished, many features are very new, and stability concerns remain for common functions.

Most of the intended goals of btrfs have been met. However, Red Hat famously cut continued btrfs support from their 7.4 release, and has allowed the code to stagnate in their backported kernel since that time. The Fedora project announced their intention to adopt btrfs as the default filesystem for variants of their distribution, in a seeming juxtaposition. SUSE has maintained btrfs support for their own distribution and the greater community for many years.

For users, the most desirable features of btrfs are transparent compression and snapshots; these features are stable, and relatively easy to add as a veneer to stock CentOS (and its peers). Administrators are further compelled by adjustable checksums, scrubs, and the ability to enlarge as well as (surprisingly) shrink filesystem images, while some advanced btrfs topics (i.e. deduplication, RAID, ext4 conversion) aren't really germane for minimal loopback usage. The systemd init package also has dependencies upon btrfs, among them machinectl and systemd-nspawn . Despite these features, there are many usage patterns that are not directly appropriate for use with btrfs. It is hostile to most databases and many other programs with incompatible I/O, and should be approached with some care.
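As a rough sketch of that veneer (the device name, mount point, and subvolume layout below are only placeholders), transparent compression and read-only snapshots come down to a handful of commands on any btrfs-enabled kernel:

    # make a btrfs filesystem and mount it with transparent zstd compression
    mkfs.btrfs /dev/sdb
    mount -o compress=zstd /dev/sdb /srv/data

    # keep data in a subvolume so it can be snapshotted independently
    btrfs subvolume create /srv/data/home
    btrfs subvolume snapshot -r /srv/data/home /srv/data/home-snap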

The original submission drew reactions from three disgruntled btrfs users. But the article goes on to explore providers of CentOS-compatible btrfs-enabled kernels, ultimately opining that "There are many 'rough edges' that are uncovered above with btrfs capabilities and implementations, especially with the measures taken to enable it for CentOS. Still, this is far better than ext2/3/4 and XFS, discarding all the desirable btrfs features, in that errors can be known because all filesystem content is checksummed." It would be helpful if the developers of btrfs and ZFS could work together to create a single kernel module, with maximal sharing of "cleanroom" code, that implemented both filesystems... Oracle is itself unwilling to settle these questions with either a GPL or BSD license release of ZFS. Oracle also delivers a btrfs implementation that is lacking in features, with inapplicable documentation, and out-of-date support tools (for CentOS 8 conversion). Oracle is the impediment, and a community effort to purge ZFS source of Oracle's contributions and unify it with btrfs seems the most straightforward option... It would also be helpful if other parties refrained from new filesystem efforts that lack the extensive btrfs functionality and feature set (i.e. Microsoft ReFS).

Until such a day that an advanced filesystem becomes a ubiquitous commodity as Linux is as an OS, the user community will continue to be torn between questionable support, lack of features, and workarounds in a fragmented btrfs community. This is an uncomfortable place to be, and we would do well to remember the parties responsible for keeping us here.

So how do Slashdot's readers feel about btrfs?
  • Still not mature (Score:5, Interesting)

    by esperto ( 3521901 ) on Saturday October 24, 2020 @07:49PM (#60644506)
    About a year ago I decided to try it, as it seemed to be mature enough, but a few weeks later I had a power outage, had to fsck all my partitions, and the BTRFS one was unrecoverable. If I hadn't had a backup I would have lost almost 3TB of data. From my experience it still needs testing.
    • by gweihir ( 88907 )

      Does not surprise me. This thing needs a few more years of actual development and stabilizing before you can depend on it. I use ext3 (well, ext4 running as ext3) and that works just fine, as I do not have any special workloads.

      • I use ZFS. I've used it on massive storage arrays. It's fantastic. I'm not sure why one would choose BTRFS over it.
        • Re: Still not mature (Score:5, Interesting)

          by ctilsie242 ( 4841247 ) on Sunday October 25, 2020 @03:50AM (#60645484)

          CDDL license mainly. RedHat views that license as not free, so doesn't include ZFS as an option. Ironically, Ubuntu is working ZFS into the mainstream, and ships an experimental install-from-scratch option that allows ZFS on /boot (the bpool) and everything else on a newer ZFS version (the rpool).

          ZFS just takes the cake for a Linux filesystem for general purposes.

          As for btrfs, if used on top of md-raid, which Synology does for a number of their drives, it doesn't have any write hole issues, and can detect (but not fix) bit rot.

          As for RHEL not using it, part of it is their Stratis product which is supposed to allow feature parity with ZFS/BTRFS while still using LVM and XFS. So far, I've not read much cheering Stratis on, and I'm seeing people using ZFS on Linux on RHEL and CentOS, even in production.

          • by rl117 ( 110595 )

            > RedHat views that license as not free, so doesn't include ZFS as an option

            This isn't quite correct. The CDDL is a free software licence (it's an MPL derivative). There's absolutely nothing wrong with the CDDL--it's a genuinely free software licence. The FSF also regard it as a free software licence.

            The issue is that the GPL is incompatible with the CDDL. The CDDL itself is compatible with anything, including proprietary licences.

        • Very simple reason: (Score:5, Interesting)

          by BAReFO0t ( 6240524 ) on Sunday October 25, 2020 @05:26AM (#60645628)

          In my experience, ZFS takes 1GB of RAM per TB of disk space.

          Which may be fine for your file server, but is *insane* for the home user.

          And I've tried every solution to lower that.
          They are all horrible hacks that make ZFS less reliable or much slower or both, ruining its whole point.

          I want a ZFS that I can use for my single-board computer's SD and SSD drive, and not have a noticeable impact on the available RAM or speed.

          Yeah, I'm not going to ever touch the btrfs trainwreck again, but I'm not going to run ZFS on anything other than a NAS where performance is so important that 32GB of RAM are justified either.

          • by DeHackEd ( 159723 ) on Sunday October 25, 2020 @06:52AM (#60645748) Homepage

            This is not literally true any more than it is for ext4, xfs etc. ZFS has its own cache that doesn't appear as normal cache memory on Linux, but like any other cache it will grow as needed, shrink under pressure and will largely consume most available RAM if nothing else wants it. Now yes there have been some interaction issues with the Linux memory subsystem resulting in less-than-ideal behaviours, the fixes for which are scheduled for the 2.0 release, but it's trying its best.

            "1 GB of RAM per 1 TB of storage" is more of a reminder that big data (especially in the enterprise scenario) usually means busy data which means more cache can be hugely beneficial. Remember ZFS was originally designed as an enterprise filesystem. Don't blow all your money on hard drives and forget to make the PC/server adequate for the job as well... but if it's a low throughput media server for a small family go ahead and give it 4 GB of RAM and fill'er up with disks.

          • Another performance issue with ZFS is that its write performance goes to total crap if you fill the file system more than about 80% full. That's an issue for media servers where the intent is to be an archive of mostly static content. You have to waste a bunch of space unless you're prepared for it to take FOREVER to fill it from the 80% point to the 95% point.
            • by Bengie ( 1121981 )
            Called the 80/20 rule. Applies to virtually all things. Over 80% network saturation and packet loss starts to happen; over 80% memory usage and the OS has to start swapping due to memory fragmentation, etc etc. ZFS just gets hit harder with its variable block sizes and wanting larger blocks for high throughput. Small data blocks are expensive with ZFS because of meta-block overhead. And being a CoW FS, you need free space just to delete data.
          • I have been running two 8-drive raidz2 arrays for 15 years without data loss, across MacZFS, Linux, Windows through a VM with physical device passthrough and several iterations of ARM. It works completely fine with a 1gb RAM total limit with 30TB of data. It works completely fine when the devices are attached via eSATA with PMP and even with USB3. You just need to tune the limits.
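            For reference, the usual way to set that limit on Linux is the zfs_arc_max module parameter; a rough sketch, with the 1 GiB value being only an example:

              # cap the ARC at 1 GiB (value in bytes), persistently, before the zfs module loads
              echo "options zfs zfs_arc_max=1073741824" > /etc/modprobe.d/zfs.conf

              # or adjust it on a running system
              echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max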

      • by rl117 ( 110595 )

        It's been needing "a few years more" ever since the start. It wasn't production-ready a decade back, and it's still not production-ready today.

        At what point, and by what criteria, do we conclude it is never actually going to be ready, because its design and implementation are fundamentally compromised? I doubt it will ever reach the point where it is both safe to use and performant. Its entire history is a sad saga of dataloss bugs and absurd performance problems. For a filesystem whose main point is da

        • by gweihir ( 88907 )

          Well, I think it will never make it. The people behind it just do not have what it takes. For regular use, ext2/3/4 is fine. For advanced requirements, nobody sane is going to look at Btrfs.

    • Re: (Score:3, Insightful)

      by Rockoon ( 1252108 )
      These "overly" complicated file systems...

      Even when the linux driver is finally up to snuff, that still just leaves you with only linux support, and even there it's spotty. They've lost the forest while sizing up all those trees.

      One could also very easily argue that these file systems violate unix philosophy, for these filesystems are not trying to do just one thing well.

      Still further, the granularity of the features of these complicated file systems is typically all or nothing, so you get the complica
      • They've lost the forest while sizing up all those trees.

        No they haven't. They've just realised that we don't farm wood the same way we did in the early days of computing. There are some very big real-world benefits to these filesystems, especially in the enterprise.

        Now, as to whether you, the home user, require the complexity, that's an entirely different question.

        That temporary file I just created doesn't need the extra bits for error recovery, nor does there need to be any journaling; I just don't care about previous versions of the file, and no, I won't want to undelete it later.

        If you're making such decisions about each of your files then there's a large chance that it is you who has actually lost sight of the forest through the trees. These complex file systems can be set up with very sane defaults.

        • The features pushed for BTRFS are:
          Snapshots
          Compression
          Deduplication
          Checksums
          Resize (just like every other FS?)

          It seems to me that the enterprise storage architect who needs snapshots should be able to do:
          lvcreate --snapshot --name mainshares_snap

          That is, they can create snapshots with the standard Linux storage stack.

          If they want compression and deduplication at whichever level, they can decide which level and run:
          yum install vdo kmod-kvdo

          They'll have the default check in their cron, with the standard stack.
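          Spelled out a little more (volume group, origin volume, devices, and sizes below are placeholders, not anything from the parent post), that flow with the standard stack might look roughly like:

            # classic LVM snapshot: reserve copy-on-write space and name the origin volume
            lvcreate --snapshot --size 10G --name mainshares_snap myvg/mainshares

            # compression/deduplication at the block level via VDO, on a spare device
            yum install -y vdo kmod-kvdo
            vdo create --name=shares_vdo --device=/dev/sdc --vdoLogicalSize=2T
            mkfs.xfs -K /dev/mapper/shares_vdo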

          • It seems to me that the enterprise storage architect who needs snapshots should be able to do:
            lvcreate --snapshot --name mainshares_snap

            I don't know how it is right now (because I use zfs for this), but it used to be that having snapshots active with logical volumes would kill write performance.

            My experience is with zfs, not btrfs, but zfs has a cool feature where I can copy the difference between two snapshots (for backup etc). Doing the same with lvm meant a script had to read the entire volume to find the differences - copying 1GB of difference between two snapshots of a 1TB volume is much, much faster with zfs.

            • by raymorris ( 2726007 ) on Sunday October 25, 2020 @07:57AM (#60645864) Journal

              There are two ways to find the differences - either track the changes as they are made (which impacts write performance to some degree) or figure them out later, when you ask for the differences (which takes a few seconds or more at the time you ask for the differences). *All* systems have to do one or the other, or a combination of the two. No system can know what the differences are unless it either tracks the changes as they are made, or finds those differences later. LVM can operate in either mode, at your option. It can also "find the differences later" very quickly by using some metadata. For every xMB you write of changes, it writes a few bytes of metadata indicating which extent you changed, so write performance is unaffected (less than 1%) but finding the differences is very fast.

              With the standard stack (dm/lvm), the conventional (older style) snapshot *is* the differences. That's what a snapshot is, a volume that holds the differences between what was and what is, plus some metadata mapping a name to the pair.

              If you list the volumes with ls /dev/mapper/myvg-myvol* you'll see one with a name that ends in "-cow". That cow volume is the differences. The "snapshot", the "what it used to be", is a logical construction of origin volume + cow.

              That's why writes can be slower in some cases if you use that style - because a write to the origin consists of first copying the old extent to the snapshot.

              If you primarily want to work with "the differences" and you want to have the differences available instantly rather than waiting a few seconds you can use that type of snapshot by setting a maximum size for the snapshot.

              With the newer style thin snapshots, write performance is unaffected because the relationship between the origin and the snapshot is a metadata construction. On write, the old extent is assigned to the snapshot rather than copied. See, a volume is really just a list of extents (data blocks). So with thin snapshots it just updates the list to say that data block now belongs to the snapshot. Finding the differences is just a matter of comparing the metadata to see which extents are in one volume and not the other. The metadata is typically kilobytes, so that comparison takes maybe 1 second. That gives you the list of changed extents, which is a volume of "the differences".
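              A rough sketch of that thin-provisioned variant (names and sizes are placeholders):

                # a thin pool, a thin volume inside it, and a thin snapshot (no size needed)
                lvcreate --type thin-pool -L 100G -n pool0 myvg
                lvcreate --thin -V 50G -n origin myvg/pool0
                lvcreate --snapshot --name origin_snap myvg/origin

                # thin snapshots are created with the activation-skip flag; activate with -K
                lvchange -ay -K myvg/origin_snap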

              • Hmm... I guess I used the old style snapshots then. The new style appears to be more similar to how zfs works.

                The reason for wanting to have differences between two snapshots is for backups. I can create snap1 and transfer it to the backup server, but also leave it at the origin server. Then, I can create snap2 and copy the difference between snap1 and snap2 to the backup server, then delete snap1 from the origin.

                With how zfs works, this allows me to effectively do a "full" backup once and then do only incremental backups afterwards.
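                In zfs terms that flow is roughly (pool, dataset, and host names are made up):

                  zfs snapshot tank/data@snap1
                  zfs send tank/data@snap1 | ssh backuphost zfs receive backup/data

                  # later: send only the blocks that changed between snap1 and snap2
                  zfs snapshot tank/data@snap2
                  zfs send -i tank/data@snap1 tank/data@snap2 | ssh backuphost zfs receive backup/data
                  zfs destroy tank/data@snap1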

        • If the home consumer runs a NAS, (and should this FS ever become actually mature enough to.. you know.. USE, and use reliably, which it most certainly is not at this point in time) it would have immense and immediate utility. (Block deduplication, configurable compression with a definable compression method, snapshots, etc...)

          Been wanting a good compressed filesystem for long-term archival of things like disk images for some time in a consumer grade product. The Paragon NTFS driver DOES support NTFS compression

          • Some home NAS models do use it. Synology has some models which use md-raid for the heavy lifting (to ensure write holes do not happen), and use btrfs for the top filesystem. This allows for quick snapshots, compression, and bit rot detection.

      • It's not just that Btrfs is complex, it is also a fundamentally poor design. If you could just wave your hands and make the refcount overhead go away then maybe everything would work out fine, but you can't and it doesn't. So instead they lather on piles of complexity in attempting to get the thing up to Ext4 speed, and now it's way too rambling and opaque for anybody to really understand its behavior in detail. Hence more than a decade of patching up the most obvious bugs without ever fixing the deep structural problems.

    • by rnturn ( 11092 )
      Your experience is interesting. I've had a 2TB btrfs RAID configuration running for close to three years and it's never had a problem when there are power outages. This is for data only, though---no operating system. Perhaps that's the difference.
    • Re:Still not mature (Score:4, Interesting)

      by Bengie ( 1121981 ) on Sunday October 25, 2020 @11:30AM (#60646428)
      The fact that BTRFS still needs fsck tells me everything I need to know about it. ZFS doesn't ever need anything like fsck. Either it transparently handles an error because of its CoW design and duplicate blocks, or the error is so bad that there is no sane way to fix it. Essentially, in the absolute worst case ZFS can always roll back to the last known good version of the file system. How BTRFS manages to be a greenfield CoW file system made after ZFS and didn't manage to do versioned CoW is beyond me.

      I've read some pretty good blogs detailing the issues that some sysadmins have with BTRFS. It pretty much comes down to a massive dose of second-system effect combined with a superficial understanding of the problem domain. BTRFS was made by devs for devs, with no input from sysadmins who have to manage petabyte-sized datasets and the kinds of issues, both management and recovery, they have to deal with.

      Really. ZFS has features like independently configurable duplicate metadata and data blocks. Not only does ZFS have redundant data via RAIDZ, but it by default has 2 copies of every metadata block, which is any block not involved with holding file data. These blocks are stored at intelligent, large offsets from each other so they can be easily found, and they are highly unlikely to be affected by failure modes like a 16MiB SSD page being modified during a read-modify-write cycle and power loss. If a block is detected as corrupt, ZFS can check one of the duplicates and copy it over the corrupt one to fix it. Imagine your single non-RAID SSD corrupting all of the data within a 16MiB page, and transparently recovering with no data loss. Configurable redundancy for single drives.

      Of course this uses more space, but that's up to the user to decide. For my pfSense use case, I have 2 large cheap first-gen TLC 100GiB+ SSDs in RAIDZ with little chance of drive failure, but a high risk of transient data loss or corruption from power loss with these cheap drives. I have all of the data configured in triplicate on each drive, for a total of 6 copies of the meta+file data. Many people say you don't need to do this with pfSense since it is trivial to create a new boot image and import the last config backup. But I don't want to waste my time. pfSense comes up instantly after every power outage as if nothing happened. And being only 1% full SSDs, entire pool scrubs take 1-3 seconds.
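      For anyone wanting to poke at the same knobs, the relevant bits look roughly like this (dataset and pool names are only examples):

        # keep three copies of data and metadata on a dataset, even on a single disk
        zfs set copies=3 tank/config

        # walk every checksum in the pool; redundant copies repair whatever fails verification
        zpool scrub tank
        zpool status -v tank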

      ZFS does have its bugs, issues, limitations, and gotchas, but it was designed by software engineers with decades of experience dealing with data recovery in enterprise settings.
  • OpenZFS (Score:5, Informative)

    by darkain ( 749283 ) on Saturday October 24, 2020 @07:51PM (#60644512) Homepage

    Oracle ZFS is NOT OpenZFS. Just like Oracle MySQL is NOT MariaDB. And Oracle JDK is not OpenJDK. And Oracle Solaris is not illumos.

    These projects are all forked. Stop using Oracle as some bullshit excuse in arguments. Each of these projects has been doing just fine based on pre-acquisition forks.

    That said, BTRFS, while more feature-rich than EXT4 or XFS, is still ~10 years behind on enterprise features compared to OpenZFS. Do the overwhelming vast majority of users need these enterprise features? Probably not. But OpenZFS has a proven reliability track record that BTRFS is lacking as well.

    Will OpenZFS ever land in Linux kernel upstream? Probably not, due to Linus being stubborn, especially around not knowing the difference between Oracle ZFS (which he has bitched about in the past) vs OpenZFS. But does it work great there? For sure. Ubuntu now ships with it, and there have been zero legal disputes. It is used in some of the world's largest supercomputers. It is used on Raspberry Pis. It works at all scales. Just tell Linus to pull his head out of his ass, and then we'll have it in upstream Linux.

    • Re:OpenZFS (Score:5, Informative)

      by Tough Love ( 215404 ) on Saturday October 24, 2020 @08:11PM (#60644558)

      You seem a bit out of touch. The problem with ZFS is the license, which was made intentionally incompatible with GPL by Oracle and that can't be changed by OpenZFS short of a ground up rewrite, which isn't going to happen any time soon. So Oracle is still the bad actor here. Linus stubbornly upholding the GPL... sure, that's his right and I would go so far as to say his duty. If he didn't uphold the GPL then a thousand other kernel copyright holders would.

      If you want ZFS in the linux kernel then get started on your rewrite. Be prepared to commit a dozen engineers to it for five years at least, and that is if all the patents are expired. You will need attorneys full-time on the job and even then it won't be clear-cut; that's how it works. You know how Oracle is, they will sue even when their case is founded on creative nonsense. Asshole Ellison is that type of guy.

      • by darkain ( 749283 )

        As stated, OpenZFS is a fork from pre-acquisition. Oracle holds no rights to the OpenZFS source code, just the same as they don't hold the rights to MariaDB, OpenJDK, LibreOffice, or illumos. Sun open-sourced each, and each was forked under its respective F/OSS license before Oracle acquired Sun.

        • Note: they still hold the rights to the OpenZFS code (they bought it from Sun). Unfortunately (for them) one of the rights they've relinquished is the right to sue people who copy it. They still retain the right to release it under a different license (although the people who have already received the code will keep the original license).

          tl;dr they've open sourced it but own it, and they can't take it back.
          • one of the rights they've relinquished is the right to sue people who copy it.

            Link, please.

            • That's basically what the CDDL means (it's a little more nuanced than that, of course, just like the GPL). Here's some discussion on it [openzfs.org].
              • In other words you are unable to support your claim about relinquished rights. You ought to retract it. Somebody might make the mistake of taking it seriously.

                • I think you're a little confused. What exactly do you think it means when you release code as open source?

                  You ought to retract it.

                  No, because I'm right.

                  • I read your link. Your link says that there are many copyright holders of OpenZFS and so the CDDL license cannot be changed. That means that any of many copyright holders could sue you if you attempted to distribute OpenZFS under the GPL.

                    One of those copyright holders is Sun Microsystems. Sun Microsystems, together with all of its source code were purchased by Oracle [oracle.com] which means that Oracle has the same rights over that code as Sun Microsystems did. In other words, if you attempt to distribute OpenZFS u

        • Oracle holds no rights to the OpenZFS source code

          This is disinformation, please stop spreading it. OpenZFS is a derivative work of ZFS. Oracle holds the ZFS copyrights. Please do some research to understand the implications.

          • Re:OpenZFS (Score:5, Informative)

            by TrekkieGod ( 627867 ) on Saturday October 24, 2020 @11:10PM (#60645018) Homepage Journal

            OpenZFS is a derivative work of ZFS. Oracle holds the ZFS copyrights. Please do some research to understand the implications.

            You are absolutely correct, and he phrased it incorrectly. However, the implication of the license the ZFS code carried at the time of the fork is that Oracle can't do anything about OpenZFS. If they wanted to kill it, tough luck; the CDDL grants non-exclusive free rights to reproduce and modify the code. It also says the developer grants the use of any of their patent claims. Here's the license [opensource.org].

            The licensing problem with ZFS has nothing to do with Oracle's ability to cause problems. It's that the CDDL isn't compatible with the GPL. Which isn't at all a problem when it comes to using it. It's a problem in the sense that code from OpenZFS can't be copied into the kernel (unless relicensed for that use by the copyright holder), and code from GPL programs can't be copied into OpenZFS (unless relicensed for that use by the copyright holder). You're free to distribute both. You're free to load an OpenZFS module into the kernel. Same way you're free to load your nvidia proprietary kernel driver.

            So the CDDL incompatibility is a pain for open source developers who need to be careful about cross-pollination, but end-users trying to pick a file-system? OpenZFS is more mature, it performs better (unless you disable COW with btrfs, but then you don't have COW), it has more features. It's a no-brainer.

            In other words, yes, look at the implications. Then we can finally end the weird stigma against OpenZFS on Linux.

            • You're free to distribute both.

              You are not free to distribute them as an aggregate work.

              You're free to load an OpenZFS module into the kernel.

              True, a user may do this, but that module may not use GPL-only kernel symbols.

      • Rewrite it in rust IMO

        /ducks

      • Sun, not Oracle, wrote the original CDDL under which ZFS is licensed.

    • I'm not sure which flavor of ZFS ships with Ubuntu, but it's been rock solid in RAID-Z2 mode on my 8 disk array for the last 7 years. I've had 4 disks fail so far and replacing them was seamless.

    • Ubuntu now ships with it, and there have been zero legal disputes.

      There's always zero legal disputes until there's a legal dispute. In the legal system past performance has zero bearing on future performance. The only thing you can rely on is contractual agreements, and they very much are not in OpenZFS's favour.

      Oracle don't move quickly. Canonical hasn't really been in legally questionable territory for more than 2 years yet. It took longer than that for Oracle to sue Google over Android, and that was a far juicier target. The reality is that Canonical has one legal

      • by Guspaz ( 556486 )

        While Canonical may or may not have issues for distributing both the Linux kernel and ZFS together, the OpenZFS project itself does not distribute the Linux kernel, and as such should have no issues. And ultimately, even if Canonical were to be sued over ZFS, that would not directly impact the OpenZFS project, and users would simply use a separate repository, as they did before Canonical included it in their main distribution. As such, it's largely irrelevant to the actual use of ZFS on Linux.

  • I believe the primary goal of btrfs was to steal thunder from ZFS because Larry Ellison's ego was insulted. We should, for many reasons, dump all software from Larry's companies, especially Java if Larry wins the Java API case.
    • You got that exactly backwards.

      • The primary goal of Btrfs was to provide fs-level snapshots for Linux, thus making ZFS irrelevant. Unfortunately, Btrfs turned out to be a poor design that has become horribly complex due to the various attempts to patch it up to work halfway efficiently. Now so complex that it is nigh on impossible to prove the code correct, as evidenced by ongoing issues with corruption on power fail or disk full. Among many other problems.

        Fortunately for Linux, it turns out that snapshots are low on the list of things that most users care about in a file system.

        • QLC drives. I've had multiple ext4 partitions with bad super-blocks on an Intel QLC drive less than a month old, and less than 1% of the rated write capacity. Btrfs is easier on SSDs, especially bad ones. It's also going to have direct ZNS drive support long before OpenZFS does.

        • snapshots are low on the list of things that most users care about in a file system

          I'd argue that's because snapshotting is low on the list of things users understand about a file system. On topics like this it's not a case of build it and they will come, but more a need for education and marketing. Snapshotting is quite powerful and can offer some serious benefits to end users. The problem is that right now it lacks a pretty interface.

          • Whatever the reason, most users don't care about snapshots. For example, do you need snapshots in your phone? It would be groovy, but you probably never thought about that even once, until right now. I agree, users should care about snapshots. They should care about replication. They should have continuous backup that protects against ransomware attacks. But they don't, that's just a fact.

          • "If I had asked people what they wanted, they would have said faster horses." - Apocryphal Henry Ford

            Yeah, the lack of interest in snapshots is largely a user education and UX issue not necessarily a real user issue. Then again it feels like every time this is brought up half of the comments are sharing horror stories of file system corruption. So there are also probably users like me who really want snapshots, but value data integrity above all else.

  • I just use zfs, it works. I have read that btrfs has problems with its raid5/6 equivalent. Zfs, on the other hand, works great.

    • At this point, even RAID 5 is obsolete. If the disk fails, restore from backup. No point trying to mess around with fixing a RAID system, restore and move on.
      • Re:zfs (Score:4, Insightful)

        by thegarbz ( 1787294 ) on Saturday October 24, 2020 @09:46PM (#60644770)

        What a completely ignorant statement. Why would you accept system downtime when instead you could simply swap out a drive and have complete uptime maintained during a rebuild?

        Striped RAID with parity very much is still a necessity for uptime in a high performance system, the exception being if you have virtual servers or mirrored hardware (or the budget for RAID10). There are different solutions for different use cases and striped RAID is still very much in the mix of the ideal solution for many use cases.

        • Striped RAID with parity very much is still a necessity for uptime in a high performance system

          That's only true if you need high uptime and you don't have high traffic. If you only have 10,000 users, sure go for it.

      • RAID 5 is not obsolete. However, it is less common in home environments than it once was. Home users typically use a simple mirror, if they have any storage strategy at all. The vast majority do not, and the vast majority do not keep backups. RAID 5 is still ubiquitous in corporate settings, where professionals are able to run the numbers to find a good balance between risk and cost.

      • No, versioning offers so many more options than backups. They both play a role, but you hope versioning can solve 99% of your issues and rely on backups for the last 1%. There are other options beyond ZFS, but ZFS makes so many things trivially easier. If you want a single-disk solution and cloud backup, fine... but for redundancy in depth it is great to have options.
  • by xonen ( 774419 ) on Saturday October 24, 2020 @08:08PM (#60644550) Journal

    I'm looking for a filesystem that can combine various storage media of different types and sizes, while offering redundancy and allowing flexible adding and removing of storage units.

    ZFS is not up to this task. Btrfs is, but it's not trivial. And then there are various notes about issues with data integrity, which counters the whole idea of redundancy. In the end, seems the best setup is still a simple raid-1 and in that case the filesystem itself is barely relevant.

    So.. Maybe in another decade. But I do think there's lots of love for btrfs once it's ready for prime time. ZFS is nice and all but not really suited for small-scale DIY@home use and has hardware requirements like identically sized drives per volume, so basically just raid-5 with a snapshot feature.

    How do I feel about it all? I don't know; I'm pretty agnostic when it comes to feelings about technological developments. Seems patience is the answer.

    • > I's looking for a filesystem that can combine various storage media of different types and sizes

      Drobo seems to have the patents on that and a reliable implementation. So you can get that today but you need to buy Drobo gear to get it.

    • If you want that, consider a stack, where you overlay btrfs on top of md-raid, or even btrfs on LVM2 on md-raid. This gives you some interesting flexibility. Need to reduce a filesystem's size because you are removing a drive? You can reduce btrfs's size (something you cannot do with XFS), then use md-raid to remove the drive. It will take a ton of time for md-raid to recalculate everything, but it is doable, and md-raid is time-tested and extremely reliable. Adding media is easy as well.

      On some production
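      Very roughly, and glossing over the size arithmetic (device names, sizes, and the backup file below are placeholders; double-check the numbers before shrinking anything), the shrink-then-remove dance looks like:

        # shrink the btrfs filesystem first, leaving headroom below the new array size
        btrfs filesystem resize -500g /mnt/pool

        # then shrink the md array and reshape it onto fewer devices
        mdadm --grow /dev/md0 --array-size=<new-size>
        mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-reshape.bak

        # once the reshape finishes, the freed drive becomes a spare and can be pulled
        mdadm /dev/md0 --remove /dev/sdd1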

  • by caseih ( 160668 ) on Saturday October 24, 2020 @08:12PM (#60644564)

    Unfortunately it didn't work out so well in the end. I never lost data, and I enjoyed the features of BTRFS, but the file system just got slower and slower as time went on. I tried all the normal tunings, rebalancing, etc. It wasn't a matter of the disk being too full either. The only thing that helped was switching my kernel IO scheduler. Basically any disk I/O (even to swap) would kill performance, both read and write to the BTRFS partition. I added RAM to reduce swapping, and that helped a lot. But it was never satisfactory.

    Somewhere I read that BTRFS was exposing flaws in the firmwares of disks. The intimation being that disk manufacturers should fix their stuff. Which is a fine sentiment, but not what happens in the real world.

    During this time I was also using BTRFS on a SATA solid state disc on my laptop, and it seemed to work fine. Although since I rarely used my laptop, every time I did, it seemed like that was the time it was trying to do a trim operation, which would prevent it from going to sleep.

    I ended up setting up a new workstation with an eMMC solid state module, and also my laptop's SSD started to fail, so I replaced it as well. This time I ended up going with ZFS (care of ZFS on Linux, which uses OpenZFS) on both machines. Ubuntu 20.04 with ZFS on root. I also set up my workstation with Fedora 32 and I converted that one over to ZFS on root. Both work great, although I don't recommend using ZFS on root on Fedora unless you like adventure.

    I'm partial to ZFS because I used it for many years in a production environment back about 15 years ago.

    • What is this "swap"? RAM is cheap now, swapping murders your system, if a process gets out of control then swap just means a bunch of disk thrashing making your system unusable until you fill up RAM and swap and hit the OOM killer.

      I don't recommend using ZFS on root on Fedora unless you like adventure.

      This is the basic problem with ZFS on Linux. I don't pick a filesystem for adventure, I pick a filesystem for the opposite of adventure.

  • trusting data with new filesystems. What can possibly go wrong? I use xfs.

    • By "new" you mean 12 years old right? At that point usually very little can possibly go wrong. Unfortunately btrfs is a clusterf**k, it has plenty of features labelled as stable that none the less continue to cause issues for users.

      Next I assume you're going to tell me ZFS is a "new" filesystem as well simply because it's only included in one Linux distro by default despite the fact that it's 20 years old?

    • XFS. Xtremely dangerous FileSystem. You are one brownout away from disaster

  • by account_deleted ( 4530225 ) on Saturday October 24, 2020 @08:32PM (#60644612)
    Comment removed based on user account deletion
    • by Guspaz ( 556486 )

      ZFS has been the better alternative to btrfs on Linux for quite some time now, including on RHEL/CENTOS/Fedora. There have been packages for it for many years.

  • by CmdrPorno ( 115048 ) on Saturday October 24, 2020 @08:47PM (#60644646)

    Gesundheit!

  • Before they start discussing the merits of various filesystems and licenses.

    • I feel like the two have nothing to do with each other and you should have a backup workflow even if you don't own a PC and do all your work on an iPhone and can't spell filesystem.

      Backups are so underappreciated. I have high hopes that people who tinker with advanced filesystems overlap greatly on a Venn-diagram with people who have well thought out backup procedures.

  • I have a (perhaps paranoid) distrust of everything Oracle, ever since the late 90's and early noughties, when I worked as a pre-sales technician for an Oracle reseller, and saw the clusterfuck that their direct sales team made of just about every customer's licence pool - basically, padding requirements and over-selling licences that were either completely not needed or financially inappropriate for the client, but which increased the profit margin for the sales team.
    Their ERP solutions were also a nightmare.

  • I really do. However, I just don't trust it enough to work. I have maintained an OpenZFS NAS for over 5 years. I've had multiple hard drive crashes, and the primary OS drive crash. Each time, I have been able to salvage the RAID without any data loss. Until Btrfs can provide that kind of reliability and ease of use, it isn't going to replace ZFS. For normal hard drive use (non-RAID), Ext4 works well enough. IMO Btrfs's opponent is ultimately ZFS, but it is nowhere near stable enough to put up a good fight.
  • Is behind the scenes inside my NAS, powering many advanced features, and I am very happy with it and grateful for it. But there, Synology does all the heavy lifting.

    In my servers, I would not touch it with a 10 metre pole, I'd rather use XFS or JFS.

    In a hobby install, I'd rather toy around with ZFS, as it has more dials to tweak, meaning more fun experiments to make.

    In due time, things will change, and BTRFS will be ready for prime-time. So I reserve the right to change my opinion in the future.

  • I've run ZFS for a long time, and it's worked well for me. Btrfs doesn't seem to offer any features or advantages that I'm aware of, and isn't remotely as stable or mature, so why would I care about it or switch to it? It's irrelevant because everything it purports to do is already done better. I see people arguing about licenses, but that stuff only matters to activists. The average user doesn't care whether a kernel module is coming from a different package or is compiled in. That stuff is invisible implementation detail.

  • by UnknownSoldier ( 67820 ) on Saturday October 24, 2020 @09:57PM (#60644806)

    It would be helpful if the developers of btrfs and ZFS could work together to create a single kernel module, with maximal sharing of "cleanroom" code, that implemented both filesystems... Oracle is itself unwilling to settle these questions with either a GPL or BSD license release of ZFS.

    1. illumos and OpenZFS [wikipedia.org] already solved this "issue" years ago, back in 2013.

    2. Sun's original CDDL was NOT intended to conflict with GPL.

    Fork Yeah! The Rise and Development of illumos [youtube.com]

    3. Eight years ago I said the same thing [slashdot.org] when I quoted Bryan Cantrill in this, now private, video Why You Need ZFS [youtube.com] where he clarified this myth of ZFS and why it "can't" be included in GPL source:

    @5:40 I just want to clarify your comment "It would be illegal to ship"
    @5:45 I think there is a perception issue that we need to tackle.
    @5:55 One point that I would like to make, because I think I said earlier that we have much more in common than what separates us.
    @5:58 One of the most important things we all have in common is we are all open source systems.
    @6:02 And we need to end this self-inflicted madness of open source licensing compatibility.
    @6:12 I think that it is a boogeyman and we are letting it hold us back.
    @6:19 You say it would be illegal to ship. I say no one has standing.
    @6:24 The GPL was never ever designed to counteract other open source licenses.
    @6:33 That is a complete rewrite of history, to believe the GPL was designed to be at war with BSD or with the CDDL.
    @6:39 The GPL was at war with proprietary software. And thanks to the GPL and Stallman, open source won.
    @6:45 That is the whole point. Open source won.
    @6:49 We are pissing on our own victory parade by not allowing these technologies to flow between systems.

    4. Is this /. editor really that fucking clueless about zfs-fuse [debian.org] that started back in 2006 [wikipedia.org]??? While it hasn't been updated in years it is still possible to use ZFS on Linux. This is NOT the only implementation. [wikipedia.org]

    btrfs is shit [linuxjournal.com] for many reasons. In the past its performance was crap. [pgaddict.com]

    ZFS has been around for almost 2 decades (it was created in 2001). ZFS is debugged, battle-tested, and solves every problem btrfs does.

    Using an unproven, half-baked file system, btrfs, is pure insanity. But go ahead and play Russian Roulette with your data. We'll wait while you restore your data.

  • When BTRFS seemed usable, (2010?), I used it for my desktop and laptop. Its writable snapshot feature allowed me to make alternate boot environments that made backing out failed Gentoo Linux updates as trivial as a reboot. Plus, I could keep a few ABEs around as another type of backup of the OS.

    To be clear, I never used it with mirrors or any type of RAID-5/6. At the tail end of my use, I experimented with dup-metadata and dup-data, but somehow could not trust it over a second disk, (which neither comput
  • by derinax ( 93566 ) on Saturday October 24, 2020 @10:29PM (#60644892)

    My main concern with BTRFS is Synology's all-in reliance on it, with it being the default filesystem. Notably they do not rely on the more squirrelly volume management parts of it, and just use it on top of bog-standard RAID, but still... If it dies on the vine or stagnates, there are a lot of volumes out there that will need to be rebuilt / migrated.

    • It isn't really a requirement. If one wants, they can move their data off, reformat the filesystem as ext4. One will lose snapshots and a number of added features, but that is always an option. Even if btrfs does stagnate, eventually Synology will move to something else. QNAP is starting to go with ZFS on the high end, and I wouldn't be surprised that ZFS might wind up being an option eventually.

  • Been using BTRFS since it was introduced, and it's gotten considerably better over the years.

    I have no large data stores, nor do I run anything commercial; it's just my desktop. With that said, I've been booting into a root btrfs system and can't complain. It's fast enough, cheap on backups, and provides advanced functionality that no other Linux-native filesystem has.

  • by theendlessnow ( 516149 ) * on Sunday October 25, 2020 @12:23AM (#60645158)
    Probably the most shocking thing, and I'll emphasize this, is that at least using it on latest Fedora (which sort of sparked the presentation), it has critical stability problems.

    So, from the official Btrfs FAQ:

    1. If you have btrfs filesystems, run the latest kernel.
    2. You should keep and test backups of your data, and be prepared to use them.

    Now, if you're like me, you sometimes take such advice with a grain of salt. But I'd advise you to heed the FAQ.

    Most people are actually not going to "test" the filesystem. In fact, they'll deploy features of the filesystem and "assume" they work OK. For example, in my testing I wanted to simulate a drive failure on a fully mirrored filesystem (Btrfs users will know what I mean, as there are three types of "data" in Btrfs). My Btrfs filesystem was NOT a primary filesystem; it was an afterthought, manually mounted filesystem. Yet, when I nuked an underlying block device under a mirrored Btrfs, I was left with an unbootable system. No problem, you say... well, my usual tricks were not sufficient (though I didn't spend a day trying to resurrect it). Anyhow, the main point is that what I did should not have created this scenario. So, word to the wise, be careful.
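    For the record, the recovery path that is supposed to work in that scenario is roughly as follows (device names and the devid are placeholders, and older kernels only allow a single degraded read-write mount):

      # mount the surviving device writable in degraded mode
      mount -o degraded /dev/sdb /mnt

      # replace the missing device by its devid (listed by `btrfs filesystem show`)
      btrfs replace start 2 /dev/sdc /mnt
      btrfs replace status /mnt

      # if anything was written while degraded, convert single chunks back to raid1
      btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt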

    Also, there are plenty of other snafus where you can "do things" in btrfs that might not make sense, but it allows them, with disastrous results. As with many pieces of software, Btrfs is complex, and getting more complex, and there's just too much to test, and I'd say that testing right now centers on the "common" utilization. In other words, you can shoot yourself in the foot with "successful" commands in Btrfs.

    Does it remind you of early XFS? Sure. Early ZFS? Sure (no, you probably never saw early ZFS). Even early ext4? Sure. All of these had some pretty major problems early on.

    Btrfs is interesting. The problem IMHO is whether or not it is "interesting enough" (?) The "techie" inside of me wants to see it succeed. But there's also a part of me that would rather see NILFS2 get greater love vs Btrfs.

    Btrfs does not handle encryption. If you know Btrfs, then you know it would be per file based if ever implemented. Right now, Btrfs is not working on encryption at all, leaving that up to the block manager underneath. IMHO, it would be wise for Btrfs to add this to the roadmap.
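    Until that happens, the block-manager-underneath approach being alluded to is dm-crypt/LUKS, something like this (device name is illustrative):

      cryptsetup luksFormat /dev/sdb2
      cryptsetup open /dev/sdb2 cryptdata
      mkfs.btrfs /dev/mapper/cryptdata
      mount -o compress=zstd /dev/mapper/cryptdata /mnt/data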

    Even with encryption, would that be "enough" to see widespread adoption? I'm going to say no. What if Ubuntu adopted it as its default filesystem? That might have a better chance of seeing Btrfs "succeed". You have to remember openSUSE has been using it for years, combined with their "snapper" tool. Does it work? Yes. Do I use it (I use openSUSE)? No. Have I tried it there? Yes. Is it "safe" there? No. Is it safe in Fedora? No. Since it's the default in openSUSE and Fedora is striving to make it the default, this will make it "great", yes? No.

    IMHO, Btrfs still needs maturing. Remember that Red Hat pushed it out of its distribution (the Fedora people reminding everyone that they are NOT Red Hat).

    Is it fun to play with? Yes. Does it have truly interesting features that other filesystems (and combos) don't have? Yes. Will it give you a headache? Quite possibly.

    The question is: Is it enough? We'll have to wait and see.
  • Ignored all warnings. Used it on an experimental basis before that. No problems yet. Why did I switch? Generally I don't even make major use of multiple hard drives for one filesystem, or of subvolume features.

    1. Much easier to do consistent backups (see the sketch after this list): I snapshot a filesystem into a read-only location before backing it up. Otherwise, when restoring we get

    2. Saving bad hard-drives : one hard drive ran into increasing bad sectors every day soon after its one year warranty was over. It was 3 TB - I partitioned it
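    The consistent-backup trick from point 1 is, roughly (paths and host are placeholders):

      # freeze a point-in-time, read-only view of the subvolume
      btrfs subvolume snapshot -r /home /home/.snap-backup

      # back up from the frozen view rather than the live tree, then drop it
      rsync -a /home/.snap-backup/ backuphost:/backups/home/
      btrfs subvolume delete /home/.snap-backup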

    • Otherwise, when restoring we get ...

      Yeah - this is what we get. Some files in a certain state, and some other files in a different state. Much like the points in the above post.

  • by mennucc1 ( 568756 ) <d9slash@mennucc1.debian.net> on Sunday October 25, 2020 @03:36AM (#60645460) Homepage Journal
    My 2 cents. I have been using BTRFS for partitions from 100GB to 1TB, for the OS, user data, and backups. First and foremost: use the latest kernel and the latest btrfs tools.

    One recurrent problem is free space. Assume the partition is 100GB for simplicity. When usage arrives at ~90GB, the filesystem becomes unusable and cannot write any more data. I have experienced this many times. It is a mind-numbing experience. Many people have reported this problem [1] and there are methods to recover [2], but in my experience they are fragile. One possible cause is that BTRFS runs out of space allocated for metadata, but it cannot get more since standard data are unevenly spread around. The main method is to rebalance using btrfs balance start -dlimit=3 /mountpoint. The sad truth is that this too can fail.

    Last time I had this problem, I had to:

    1. reboot and run the OS from an external USB key, with latest Ubuntu (so I had latest kernel and tools)
    2. `btrfs check` the ill partition to correct some problems
    3. mount the ill partition
    4. try `btrfs balance start /mountpoint` with different parameters... it would fail for lack of disk space (!)
    5. move some stuff to other places and delete it (not that easy... it would refuse to even *delete* some files, I had to try many different files)
    6. rinse and repeat the above steps

    ... and at a certain point magically something snapped and it found out it had indeed 10GB of unused space!
    Conclusion: not something you want to experience in a production server.
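    For what it's worth, the usual first-aid sequence before it gets that bad is something like:

      # see how much space is allocated to chunks vs. actually used
      btrfs filesystem usage /mountpoint

      # compact mostly-empty data chunks, starting gently and ramping up
      btrfs balance start -dusage=10 /mountpoint
      btrfs balance start -dusage=50 /mountpoint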

  • by fennec ( 936844 ) on Sunday October 25, 2020 @05:25AM (#60645622)
    I help maintain a VMware lab at work (in my spare time...). We have an 18TB SAN that just crashed. After reboot, there was a 3h check that concluded that all disks were fine but the FS was still in "crashed" state. I found out it was BTRFS and it was corrupted. Support said there was no way to fix it; we can recover data to another disk, delete the RAID, and copy data back. So we ordered a USB disk and will have at least 3 days of copying in both directions. This is just not acceptable... don't use BTRFS. Just check the official btrfsck page, it's a joke: https://btrfs.wiki.kernel.org/... [kernel.org] I warned management several times that we had no backup and that this SAN was not reliable. Now I can say "I told you so".
  • by Jezral ( 449476 ) <mail@tinodidriksen.com> on Sunday October 25, 2020 @08:08AM (#60645878) Homepage

    I want mutable snapshots, transparent compression, and deduplication. I used to use ZFS because it supports those features, but ZFS gobbles RAM and is not usable for external USB HDDs - it would just die in an unrecoverable way every ~3 months. And ZFS snapshots are not writable - there is no way to delete a file from all snapshots and actually free up the space, because the underlying snapshot is immutable.

    Switched to btrfs around 7 years ago, and it's great. Writable snapshots, transparent tuneable compression, on-demand deduplication. And with compress-force, the performance quirks with large files such as databases are mostly mitigated, because changes only need to COW the 128 KiB block it is modifying.

    There are certainly still features missing from btrfs. Recovering RAID1 is abysmal - you get 1 and only 1 chance to replace a failed device, and if you do it wrong you need to recreate the array. And parts of btrfs are not aware of its own COW - e.g. defrag will unshare blocks.

    But even so, I use btrfs on workstations, production servers, development servers, backup servers, etc, and it's been excellent. Just remember what the workload is and set the compress-force algorithm accordingly - though these days zstd is a really good default for everything.
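    Concretely, that setup is little more than a mount line plus an out-of-band dedupe pass (the fstab entry and tool choice are just examples):

      # /etc/fstab
      UUID=xxxx-xxxx  /srv  btrfs  compress-force=zstd:3,noatime  0 0

      # on-demand deduplication with duperemove (-d actually dedupes, -r recurses)
      duperemove -dr /srv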

    • by rl117 ( 110595 )

      "Writable snapshots" are a contradiction of terms, and the fact that Btrfs lets you do this is not a positive. It's actually a demonstration of how poorly thought out its design is. ZFS clearly separates snapshots and clones so that snapshots are always immutable and clones are writable (unless set readonly).

      Selectively deleting content from a snapshot to "save space" is a strong indicator that you're not using the snapshot facility properly. Maybe you could be using finer-grained snapshots to split up d
