What Linus Torvalds Gets Wrong About ZFS (arstechnica.com) 279

Ars Technica recently ran a rebuttal by author, podcaster, coder, and "mercenary sysadmin" Jim Salter to some comments Linus Torvalds made last week about ZFS.

While it's reasonable for Torvalds to oppose integrating the CDDL-licensed ZFS into the kernel, Salter argues, Torvalds' characterization of the filesystem itself was "inaccurate and damaging."
Torvalds dips into his own impressions of ZFS itself, both as a project and a filesystem. This is where things go badly off the rails, as Torvalds states, "Don't use ZFS. It's that simple. It was always more of a buzzword than anything else, I feel... [the] benchmarks I've seen do not make ZFS look all that great. And as far as I can tell, it has no real maintenance behind it any more..."

This jaw-dropping statement makes me wonder whether Torvalds has ever actually used or seriously investigated ZFS. Keep in mind, he's not merely making this statement about ZFS now, he's making it about ZFS for the last 15 years -- and is relegating everything from atomic snapshots to rapid replication to on-disk compression to per-block checksumming to automatic data repair and more to the status of "just buzzwords."

[The 2,300-word article goes on to describe ZFS features like per-block checksumming, automatic data repair, rapid replication and atomic snapshots -- as well as "performance wins" including its Adaptive Replacement caching algorithm and its inline compression, which allows datasets to be live-compressed.]

The TL;DR here is that it's not really accurate to make blanket statements about ZFS performance, absent a very particular, well-understood workload to measure that performance on. But more importantly, quibbling about the fastest possible benchmark rather loses the main point of ZFS. This filesystem is meant to provide an eminently scalable filesystem that's extremely resistant to data loss; those are points Torvalds notably never so much as touches on....

Meanwhile, OpenZFS is actively consumed, developed, and in some cases commercially supported by organizations ranging from the Lawrence Livermore National Laboratory (where OpenZFS is the underpinning of some of the world's largest supercomputers) through Datto, Delphix, Joyent, iXsystems, Proxmox, Canonical, and more...

It's possible to not have a personal need for ZFS. But to write it off as "more of a buzzword than anything else" seems to expose massive ignorance on the subject... Torvalds' status within the Linux community grants his words an impact that can be entirely out of proportion to Torvalds' own knowledge of a given topic -- and this was clearly one of those topics.


Comments Filter:
  • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Sunday January 19, 2020 @03:58AM (#59634240) Homepage

    I don't mean like you've got a desktop and you use it because you heard it's cool. I mean you're actually meaningfully using "atomic snapshots to rapid replication to on-disk compression to per-block checksumming to automatic data repair and more" in a way that makes other existing solutions obsolete because it's just so much better.

    • by darkain ( 749283 ) on Sunday January 19, 2020 @04:13AM (#59634258) Homepage

      I'm one of the many users of it, and specifically for those reasons. Data integrity is critical to what I do. RAID controller failures and disk drive controller failures have caused a number of issues for me in the past two decades. ZFS virtually eliminates these issues, or at least detects them early enough to resolve them by replacing defective hardware before data corruption happens.

      Snapshotting is absolutely amazing. Think git commit history of files, but for everything! Except that instead of every single commit, it just has certain point-in-time snapshots (there are advantages and disadvantages to both approaches). Samba now supports ZFS snapshots and exports shares as shadow copies that Windows can directly read. Someone accidentally saves junk to a file on a network share? They can easily restore it themselves, or have IT restore the file for them.
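      For anyone curious how that gets wired up, here's a minimal sketch (the dataset, share name, and snapshot naming scheme are made up; this assumes Samba's stock vfs_shadow_copy2 module plus snapshots taken from cron):

          # periodic snapshot, named so Samba can map it to a Windows "Previous Version"
          zfs snapshot tank/shares@GMT-$(date -u +%Y.%m.%d-%H.%M.%S)

          # smb.conf share exposing those snapshots as shadow copies
          [fileshare]
              path = /tank/shares
              vfs objects = shadow_copy2
              shadow:snapdir = .zfs/snapshot
              shadow:format = GMT-%Y.%m.%d-%H.%M.%S
              shadow:sort = desc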

      Replication? This is our primary backup strategy. With snapshots, we push them across the internet to remote data centers to other live servers. If anything happens, all of our network shares just need to point to a different destination, and we're back in business as if nothing happened. We also use this to replicate OS ISOs across all locations, we have one central repository that is replicated out everywhere. And ZFS is smart enough to do incremental transfers, so only new content is pushed over the wire.
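      For reference, the mechanics are just a snapshot plus an incremental send over ssh; a rough sketch with made-up dataset and host names:

          # take today's snapshot, then ship only the delta since yesterday's
          zfs snapshot tank/shares@2020-01-19
          zfs send -i tank/shares@2020-01-18 tank/shares@2020-01-19 | \
              ssh backup.example.com zfs receive tank/shares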

      Transparent compression is amazing. Some of our data sets are getting a 2-to-1 compression ratio. This means less disk I/O, so faster access to data, as well as less overall storage required for large objects.
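      (For the curious, it's a per-dataset property; the dataset name here is illustrative:)

          # enable inline compression; only newly written data is compressed
          zfs set compression=lz4 tank/shares
          # see what it's actually buying you
          zfs get compressratio tank/shares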

      Also, ZFS offers transparent on-disk encryption, which for some of my work is a regulatory compliance thing that must be enabled. Applications don't have to worry about it, as the file system does it for them.
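      A minimal sketch of that, assuming OpenZFS 0.8 or later with native encryption (dataset name and key handling are illustrative):

          # create an encrypted dataset; applications just see a normal filesystem
          zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure
          # after a reboot, load the key before mounting
          zfs load-key tank/secure && zfs mount tank/secure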

      There are countless other features that we're using across multiple different projects where ZFS has replaced entire suites of utilities, both hardware and software, in a single, concise, and unified package.

      • Thanks. These all seem like useful features.

        It seems like its biggest impact will be realized in businesses wanting to build a custom SAN. Is ZFS licensed in a way that you're not worried about Oracle's litigious reputation, or are people hoping to fly under the radar?

        • by _merlin ( 160982 ) on Sunday January 19, 2020 @08:03AM (#59634538) Homepage Journal

          CDDL is a viral copyleft license that prohibits adding additional restrictions on redistribution. In that way, it's a lot like GPL. It varies in details, for example it leans on patents as a way to go after infringers while GPL relies on copyright alone. The issue is that CDDL and GPL contain different restrictions, and both prohibit adding additional restrictions. Therefore, you can't distribute a work that combines CDDL and GPL code. (GPL is an "end of the line" license - it's only "compatible" with other licenses if the other license is less restrictive and allows code to be redistributed under the terms of the GPL. The same is true of CDDL.)

          The ZFS-on-Linux kernel module is distributed under the terms of the CDDL, being derived from the OpenSolaris ZFS code. The uncertainty is over whether it constitutes a derived work of the Linux kernel, which is distributed under the terms of the GPL. Some people argue that by using Linux kernel interfaces and being loaded into the Linux kernel, it is a derived work of the Linux kernel. (It will be interesting to see what effect the result of Google vs Oracle on whether APIs qualify for copyright protection has on this argument.)

          If ZFS-on-Linux is in fact a derived work of the Linux kernel, it is unlawful to distribute. Combining CDDL and GPL code in itself is not unlawful - it's the act of distributing the combined work that's a license violation. If this is the case, anyone distributing or using ZFS-on-Linux will be exposed to two kinds of litigation: Linux kernel contributors, Oracle, and other OpenZFS contributors will be able to sue anyone distributing ZFS-on-Linux for copyright infringement; and anyone using ZFS-on-Linux will lose the patent license provided by the CDDL, and could potentially be required to pay royalties to parties holding patents applicable to ZFS.

      • And how does ZFS interact, for example, with typical databases, which already have their own (and very efficient) solutions for these things? Data integrity is already solved there, in a way that is optimal for those applications. I assume that if you're doing things that hadn't already solved these issues before ZFS, ZFS might now be doing a good-enough job for you, but those for whom these issues were critical already had them solved.
      • Also, ZFS offers transparent on-disk encryption, which for some of my work is a regulatory compliance thing that must be enabled. Applications don't have to worry about it, as the file system does it for them.

        That sounds about the least interesting part: Linux already offers transparent encryption for any FS by encrypting the block device.
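        For example, a dm-crypt/LUKS sketch that works under any filesystem (device and mount point are placeholders):

            cryptsetup luksFormat /dev/sdb1            # set up the encrypted container
            cryptsetup open /dev/sdb1 securedata       # unlock it as /dev/mapper/securedata
            mkfs.ext4 /dev/mapper/securedata           # any filesystem on top
            mount /dev/mapper/securedata /mnt/secure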

      • You mentioned pushing file shares to another system as backup. It's not 100% clear exactly what you're doing, but it sounds like you may have a dangerous situation that isn't nearly as safe as you thought.

        If the primary writes a copy of itself to the backup, you're going to be hosed when you get ransomware. Most ransomware encrypts any shares it has access to, so the main server will encrypt its backup if it can.

        Because encryption changes every block, that's going to use up all your snapshot space if the dr

        • Comment removed based on user account deletion
        • If a client ends up encrypting your backup share, you can easily fix that just by rolling back to a previous snapshot. It's true you might run out of space in the meanwhile, but these aren't LVM snapshots -- the pool running out of space won't corrupt your snapshots. Quotas and reservations can limit how much storage a given client uses too, so this doesn't even have to affect other clients.
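          Roughly (pool, dataset, and snapshot names here are made up):

              # roll the share back to the last known-good snapshot
              # (-r discards any snapshots newer than the one you roll back to)
              zfs rollback -r tank/shares@hourly-2020-01-19-0800
              # cap how much space one client's dataset can consume
              zfs set quota=500G tank/shares/clientA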

          (I'm not arguing against pull backups in general, just pointing out that this particular situation isn't nearly as dan

      • by guruevi ( 827432 ) on Sunday January 19, 2020 @12:03PM (#59634986)

        I've been a user since Sun Solaris, and I agree that, as far as a filesystem is concerned, it is very stable. However, over the last decade there hasn't really been much development. Sure, some minor improvements were made, but BP rewrite has been on the roadmap since before Sun got taken over by Oracle, and performance, especially with RAIDZx, and its resource requirements (especially RAM but also CPU cycles) have really been atrocious.

        I had to invest in 512GB of RAM and 2TB worth of enterprise SSD to get 40Gbps out of an 80+ disk array. Another major feature promised but missing is HA and clustering; dedup hasn't ever been feasible; and the license is a problem - at one point even Apple considered ZFS for their systems but couldn't trust the license. My primary problem with ZFS is that it has required me to do forklift upgrades and over-provision all the time just to maintain performance, which is due to BP rewrite being missing.

        So I agree with Torvalds on the technical side, even though I've been an advocate for ZFS and hope nobody gets sued by Oracle. Although in my case, recently, both Nexenta and iXsystems were beaten by Dell Isilon and StorageCraft in performance, feature set and price (this is enterprise-level, HA stuff) for a 5-year, high-growth (500TB and 30%+/yr growth) budget. Ceph actually beats ZFS in my case, but both support and implementation costs are still high and CephFS isn't stable enough for my liking. I think ZFS is on its way out in favor of other open source file systems like Ceph once they mature.

    • by broknstrngz ( 1616893 ) on Sunday January 19, 2020 @04:21AM (#59634276)

      At my previous job we built all of our data clusters on top of ZFS. Why? Because we were in a highly regulated industry (for security), we ran everything on bare metal in our own datacentres. That and budget constraints meant we only had a finite/small number of machines we could allocate to this. We were able to get mind-blowing performance out of our ElasticSearch and MongoDB clusters by fine-tuning ZFS's ARC and L2ARC (our workloads were very read heavy). In fact, ZFS's ARC got us out of trouble more than once. In one instance we ran into a pretty crappy performance issue with MongoDB's own cache flushing logic; ZFS's ARC, despite being completely oblivious to what it was storing, performed as expected from the hardware until the MongoDB bug was fixed.
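      For a flavour of the knobs involved (sizes, device, and dataset names are made up): the ARC can be capped via a module parameter, L2ARC is just a cache vdev, and caching behaviour is a per-dataset property:

          # /etc/modprobe.d/zfs.conf -- cap the ARC at 64 GiB (value in bytes)
          options zfs zfs_arc_max=68719476736

          # add an SSD/NVMe device as L2ARC for read-heavy workloads
          zpool add tank cache /dev/nvme0n1

          # cache only metadata for a dataset whose application has its own cache
          zfs set primarycache=metadata tank/mongodb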

      We also ran extensive performance tests for our workloads for months prior to going live (we needed a guaranteed filesystem latency for this platform). We tested the FreeBSD implementation against ZFS on Linux (circa 0.6.8). There was no comparison: the FreeBSD implementation was much more predictable, with a much lower latency standard deviation.

      So there, we did all that for a fraction of what the equivalent cloud infrastructure would have cost, and we got all the data integrity perks to go with it.
       

    • by arglebargle_xiv ( 2212710 ) on Sunday January 19, 2020 @04:31AM (#59634288)
      His points are actually valid, the reason for the disagreement is that he's taking a 30,000ft view while the person responding is picking out a range of trivia that disagree with the 30,000ft view. It's a bit like someone looking out of an airplane at 30,000ft and saying "the land below us is green" and someone else saying "no it's not, there's some road over there which is black, and a river, that's sort of dirty brown, and a city with white(ish) buildings, ...". Yes, technically person #2 is correct, but as a general summary of the situation person #1 is correct.
      • by cas2000 ( 148703 )

        His point about the CDDL license is completely valid.

        His comments about the technical features and benefits of ZFS are complete nonsense.

      • by Solandri ( 704621 ) on Sunday January 19, 2020 @07:56AM (#59634524)

        It's a bit like someone looking out of an airplane at 30,000ft and saying "the land below us is green" and someone else saying "no it's not,

        As someone who actually uses ZFS (which apparently he hasn't), it's more like someone who's never actually looked out the window of an airplane at 30,000 ft saying "the land below us is green because land has grass and trees on it", while anyone who looks out the window can tell you the plane is flying over the ocean (which incidentally covers more than twice the surface area of land). His technical criticisms of ZFS are for the most part completely off base.

        His caution about the licensing issue is spot on. Though from what I understand the incompatibility is due to Linux's GPL being more restrictive than the CDDL, not the other way around as many are assuming (since ZFS is now owned by Oracle).

      • by Calibax ( 151875 )

        No, Linus' 30,000ft view is that he's flying over a ZFS wasteland with no redeeming features, and the person responding is saying that actually it's mostly green arable land down there.

        Two completely different views, not nitpicking as you suggest.

    • by Indy1 ( 99447 ) on Sunday January 19, 2020 @04:38AM (#59634294)

      I run several file servers in the 300-500 TB range. All using ZFS.

      My big one was on an open Solaris variant until the OS drive ate itself. I replaced the OS drive, installed Debian 10, and was able to re-import the ZFS pools, and was back up and running in just a few hours. No data lost or corrupted.
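      The import itself is basically a one-liner (pool name is illustrative; -f is only needed if the old host never cleanly exported the pool):

          zpool import            # scan attached disks for importable pools
          zpool import -f tank    # bring the pool in by name
          zpool scrub tank        # then verify every block against its checksums
          zpool status -v tank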

      Sorry Linus - but you're talking out your ass on this one.

      • Did any of these servers have remote snapshots? What happened when you had 500Mb/s of deltas but your network could only transfer 300Mb/s?
        • The issue of deltas exceeding your network transfer rates is not trivial and highlights the need to use a replication method which isn't bound to the file system itself. Commercial NAS solutions handle this through asynchronous replication, caching deltas locally and then compressing the data before loading it onto the wire. But even then if your deltas are growing faster than you can replicate them the only real solutions are to throttle the writes or increase bandwidth.
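          (If you do stay filesystem-bound, one crude way to keep replication from saturating a link is to rate-limit the send stream; the names and the 30 MiB/s figure here are purely illustrative:)

              zfs send -i tank/data@mon tank/data@tue | pv -L 30m | \
                  ssh backup.example.com zfs receive tank/data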
    • by Halo1 ( 136547 )

      This container-based webhosting provider [antagonist.nl] does, to the extent that they even dug into the source code to figure out a performance issue (there's an English summary near the end of the article).

    • by Tapewolf ( 1639955 ) on Sunday January 19, 2020 @07:57AM (#59634526)

      It's clearly very important to the Lawrence Livermore National Laboratory because they're doing most of the work on the Linux version.

      Myself, I've lost data at rest before: some source code of mine got turned into random garbage at some point back in the DOS days, and of course that's what got into the backups, as I discovered a year or so later when I tried to revisit that project. So the ZFS checksum and repair system is of great interest to me, as I now have large amounts of video projects, audio multitracks and multilayer artwork that can't be replaced, and I'd like some assurance that what's being backed up is still correct.

    • I do. Snapshots are great for backups - keeping the last one on the server lets me transfer only the differences, so there's no need to do a full backup more than once. It also lets me roll back the last snapshot almost instantly if I need to.
      Data repair is useful when disks start acting up - I managed to recover a RAID1 where both drives had lots of bad sectors - didn't need to restore from backups.

      I can't say that zfs is the best there is (to do that I would have to try everything), but its snapshots are bet

      • Maybe snapshots work for backups, but I have yet to understand what I'm gaining over simply using ext4 and rsync in that case. Sure, there is some extra overhead to rsync, but it's a backup, so you're running it at an off-peak time anyway.
        • Also if you only want to store the difference, you use rdiff-backup. No need for a big elaborate filesystem.
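          For what it's worth, the rdiff-backup workflow looks roughly like this (paths and retention are made up):

              # back up /data, keeping reverse diffs so old versions stay restorable
              rdiff-backup /data user@backuphost::/backups/data
              # restore a file as it existed three days ago
              rdiff-backup -r 3D user@backuphost::/backups/data/report.ods ./report.ods
              # prune increments older than 8 weeks
              rdiff-backup --remove-older-than 8W user@backuphost::/backups/data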
        • by Pentium100 ( 1240090 ) on Sunday January 19, 2020 @10:21AM (#59634738)

          What if the file you want to back up is in use, say, it's a disk of a virtual machine? Backing it up with rsync will result in a file that has a "newer" end than beginning. In other words - corrupt. With a snapshot you get a file that is all of the same "age"; I think this is called a "crash-consistent" backup.
          Also, rsync has to read the file on both ends to calculate the differences. This means that the backup performance is limited by reads, especially if you are backing up a few servers with, say, 20TB of data on them (but probably 100GB of changes since the last backup); rsync will take a long time to read the 20TB.

          OTOH, if I keep the last snapshot on the vm host, when I create a new snapshot, zfs already knows the difference, so the backup speed is limited by the network or the write performance of the backup server, since only the changes are read and transferred. I can then delete the older snapshot from my vm host.

          Also, since the backup server also uses zfs, I use snapshots to keep older backups, not just the last one, and all the snapshots are accessible. I can mount any snapshot and get the file I need very quickly - no need to assemble the data from one full backup and 20 incrementals; it's already done. I can then delete an old snapshot after its retention period expires and do not need to do a full backup more than once.
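          Pulling a file out of an old backup really is just a directory listing (dataset and snapshot names are examples):

              ls /backup/tank/vmhost/.zfs/snapshot/
              cp /backup/tank/vmhost/.zfs/snapshot/daily-2020-01-10/guest.qcow2 /tmp/restore/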

          • You create a snapshot in vmware before you do an rsync and you're golden. There are many commercial solutions that work this way.
            • by rl117 ( 110595 )

              You still haven't got around the requirement to read all the data at both ends. rsync is, by its very nature, going to be orders of magnitude slower than ZFS snapshot send/recv. I've used both. rsync is slow, really, really slow. And it's not atomic: if the collection of files and directories you are transferring changes during the transfer, it's not going to be self-consistent. ZFS send/recv replicates a point-in-time snapshot of the entire dataset safely and with minimal overhead.

            • vmware is a good product, but it also is expensive. kvm + zfs is free.

          • rdiff-backup has flexibility in full versus incremental.
    • I use it every day, and take advantage of all of the features you've mentioned.

      I'm a homelabber, meaning I like to run server-class hardware at home and build out test environments where I can learn. Busy teaching myself Ansible at the moment on the lab, but I digress.

      The main storage in my lab is a 48TB raw ZFS array attached to a Dell R710 that I picked up dirt cheap. It has a single Westmere CPU and 72GB of RAM that I also picked up really cheap (the second socket has a bent pin so I can't double up the

  • Linus makes the mistake of negatively criticizing ZFS but does not say what to use instead.

    • Simple reason: (Score:5, Insightful)

      by BAReFO0t ( 6240524 ) on Sunday January 19, 2020 @05:06AM (#59634328)

      There is nothing else. Nothing can even remotely substitute what ZFS has to offer.
      Doesn't mean it doesn't have its own problems.

      Reality is mostly not a dichotomy, dear DemocratRepublican voters. ;)

      • There‘s btrfs (Score:2, Interesting)

        by bavarian ( 59962 )

        btrfs may not be as battle-proven as zfs is, but for most of the use case of zfs, it’s the more modern alternative that’s actively developed within the Linux kernel community.

        In case you’re worried about its maturity: You can get enterprise support for it from SUSE and Oracle (!)

        • by ninjaz ( 1202 )

          In case you’re worried about its maturity: You can get enterprise support for it from SUSE and Oracle (!)

          In my experience, btrfs has been more of an attractive nuisance than anything. While SUSE will field support cases about it in SLES, the level of support is along the lines of "try btrfs check --repair", and when that destroys the filesystem entirely, to recommend restoring from backup.

          ZFS on Solaris was a solid production-grade filesystem. btrfs sounds like it offers similar functionality on paper, but has not been production quality on SLES 15. File checksumming is nice, but it's hard to really get

        • by _merlin ( 160982 )

          Do you realise where btrfs came from? It was originally developed by Oracle because they wanted an alternative to ZFS. When they acquired Sun (and by extension ZFS), they stopped caring about btrfs, and it's stagnated ever since.

        • Btrfs is not production ready.
        • by rl117 ( 110595 )

          Stop. You can repeat it until you're blue in the face, but Btrfs is still not production ready. SuSE don't offer support for anything but a minimal subset of its total features. It's still slow, it still suffers from unbalancing, and it still suffers from data loss. It is not a replacement for ZFS or even a real competitor. To compete with ZFS, it would have to be as performant, as resilient, and as functional. It's none of these things.

          In all the years of its "active development", it's yet to challenge

      • The standard Linux storage stack, based around multi-device (MD), can do all the same stuff that ZFS can do. The difference is that the standard system has distinct layers, ZFS is all-in-one.

        The standard system uses the "lvm" commands to manage volumes and snapshots. It uses mdadm to manage duplicating or checksumming blocks across disks, aka RAID. LVM can actually do RAID too, but mdadm is the preferred tool. In the standard Linux model, the filesystem layer is provided by any filesystem you want, diffe
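        A rough sketch of that layered stack, with placeholder device names:

            # RAID layer
            mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
            # volume management layer
            pvcreate /dev/md0
            vgcreate vg0 /dev/md0
            lvcreate -L 500G -n data vg0
            # filesystem layer -- any filesystem you like
            mkfs.ext4 /dev/vg0/data
            # snapshots come from LVM, not from the filesystem
            lvcreate -s -L 50G -n data_snap /dev/vg0/data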

        • by rl117 ( 110595 )

          Internally, ZFS has all the same layers, plus some additional ones. Seriously, go and look at its design.

          mdraid and LVM do not match the integrity checking and self-repair which ZFS provides. When you can resilver a degraded array like ZFS does, then I'll believe you.
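          For comparison, the ZFS side of that is (pool and device names are hypothetical):

              # verify every block against its checksum, repairing from redundancy
              zpool scrub tank
              zpool status -v tank   # shows checksum errors and what was repaired
              # replace a failed disk; only allocated blocks get resilvered
              zpool replace tank /dev/sdc /dev/sdd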

  • by jean-guy69 ( 445459 ) on Sunday January 19, 2020 @04:56AM (#59634312)

    ... NIH syndrome !

    • by Kjella ( 173770 )

      Well, ZFS is more or less the NIH solution... Linux wants to have layers, if you want compression it should be compressionFS layer and encryption an encryptFS layer and you should use file system agnostic tools like mdadm and rsync to do RAID and replication. Then along comes ZFS and wants to do everything like one monolithic solution. Now because of the license the ideological discussion never really took off, ZFS didn't have much choice if it wanted feature parity on all the supported platforms but if it

      • Re:Looks like... (Score:4, Interesting)

        by Tapewolf ( 1639955 ) on Sunday January 19, 2020 @08:10AM (#59634544)

        Well, ZFS is more or less the NIH solution... Linux wants to have layers, if you want compression it should be compressionFS layer and encryption an encryptFS layer and you should use file system agnostic tools like mdadm and rsync to do RAID and replication. Then along comes ZFS and wants to do everything like one monolithic solution. Now because of the license the ideological discussion never really took off, ZFS didn't have much choice if it wanted feature parity on all the supported platforms but if it had been GPL licensed it'd set off a bunch of sysv vs systemd style debates.

        It's nothing so petty. The layering system gets in the way of what ZFS and BTRFS are doing, which is why BTRFS also has to reinvent its own wheels as well, e.g. it has its own RAID implementation. The bitrot protection scheme on both systems works by checksumming the data and metadata, which requires it to understand the filesystem disk format. If the checksum doesn't match the data on one disk, it then has to pull another copy from the mirror, which it can determine is correct because the checksum will match. That requires interacting with the RAID layer in ways which a generalised RAID system won't support.

        Basically, more advanced filesystems have more advanced requirements that Linux' layering system can't provide at this point.

      • Well, ZFS is more or less the NIH solution... Linux wants to have layers, if you want compression it should be compressionFS layer and encryption an encryptFS layer and you should use file system agnostic tools like mdadm and rsync to do RAID and replication. Then along comes ZFS and wants to do everything like one monolithic solution. Now because of the license the ideological discussion never really took off, ZFS didn't have much choice if it wanted feature parity on all the supported platforms but if it had been GPL licensed it'd set off a bunch of sysv vs systemd style debates.

        And those are the guys who embraced systemd!

      • That kind of reminds me of what systemd did.
  • by BAReFO0t ( 6240524 ) on Sunday January 19, 2020 @05:03AM (#59634324)

    This whole drama Americans have around Torvalds saying things like this strikes me as distinctly American.

    See, over here, we have no problem with negativity. To us, you seem obsessed with positivity in an unnatural way. So to you we might look obsessed with negativity. For us it's simply not the end of the world if somebody is negative.
    Analogous to our vs. your reaction to tits. (Seeing them on large billboards is nothing unusual around here.)

    Over here, you simply speak your mind. Even if frank and direct. Even if it turns out to be wrong, later on, that is nothing special. We're humans.
    But to Americans, that's kinda the perfect storm. He was negative, AND wrong. *gasp* :)

    It's just a different culture. Yes, in your culture it's a WTF moment. I won't devalue your customs.
    In ours, and Torvalds', we just tell him he's wrong, and why, and move on. Without that big drama around it that everybody hates.
    And you can't expect him to adhere to your customs of all possible ones in the world.

    So relax. Everything's alright. :)

    And yes, he might bark back if you tell him he's wrong. But that won't mean it isn't appreciated and normal or that he won't think about it.

    It's an (northern?) European thing. Works the same in Germany too.
    You would understand, if you would live in a land with as little sun (!!) and religious doctrines and as much alcohol as where Torvalds is from. :)

    • by ReneR ( 1057034 )
      +1
    • by gweihir ( 88907 )

      Pretty much this. Also, few Europeans feel the need for grandstanding, while in the US it seems to be kind of expected. That is how tiny things get inflated to huge size and actually important things get overlooked.

    • Who is "we"? Not me. While I do classify as a European and a German.

      Your pointing to "America" and crying "drama", "obsession" and "tits" on the other hand, although this is just a civilized technical discussion about the properties and merits of ZFS and how they relate to a public statement from the maintainer of the world's most important operating system's kernel, strikes me as distinctly German.

      • by ilguido ( 1704434 ) on Sunday January 19, 2020 @07:11AM (#59634472)

        this is just a civilized technical discussion about the properties and merits of ZFS and how they relate to a public statement from the maintainer of the world's most important operating system's kernel

        No, this is cherry-picking, because Linus Torvalds further explained his points in the follow-up comments. He makes a distinction between ZFS and OpenZFS, something that Salter never mentioned:

        I'm talking about small details like the fact that Oracle owns the copyrights, but turned things closed-source, so the "other" ZFS project is a fork of an old code base.

        If you are talking about ZFS, you're talking about the Oracle version. Do you think it has a lot of development going on? I don't know.

        And if you're talking about OpenZFS, then yes, there's clearly maintenance there, but it has all the questions about what happens if Oracle ever decides - again - that "copyright" means something different than anybody else thinks it means.

    • by dcw3 ( 649211 )

      As an American of German descent, who lived in Germany for six years, I completely disagree with your assessment of the cultural differences...other than the tits.

    • by nomadic ( 141991 )

      Why is Torvalds' bluntness and criticism something to be accepted, but his critic's here isn't?

      In any event, the thing is northern Europe is an outlier on this and most cultures have developed codes for criticism that are not blunt. I think when dealing with a multinational community you don't cling to your own provincial cultural norms if they are an outlier, especially when you are, like Torvalds, living in a country where the code is different. I certainly wouldn't move to Finland and not modify my appro

      • Why is Torvalds' bluntness and criticism something to be accepted, but his critic's here isn't?

        Funny, isn't it? "Cultural differences" is a common defense, but sadly, never for certain cultures.

        The hierarchy is also funny. European must be better than American. But if it were a clash between, say, Middle Eastern and European, then Middle Eastern must be better than European. Oh, unless the Middle Eastern is a certain small bit of Middle Eastern, in which case the whole world must be better than that Middle Eastern ...

    • Europe is a big place; I don't think you can generalize culture and communication style like that between Scandinavia, Spain, (western) Turkey, and Poland. What part of Europe did you have in mind?

  • For all its mixed-together architecture: for stability, reliability, and security, different layers and features should be small and separated - RAID, logical volumes, FS, etc. - and not everything cobbled together in one big thing.
    • by rl117 ( 110595 ) <rleigh.codelibre@net> on Sunday January 19, 2020 @09:22AM (#59634624) Homepage

      Go and look at the details of how ZFS is actually implemented. You'll quickly find out that internally it's composed of many different layers. More than Linux has with RAID+LVM+filesystem, in fact. It's not "cobbled together", it's a replacement for the whole storage stack.

      datasets are filesystems
      ZVOLs are logical volumes
      VDEVs are RAID sets

      It's different, yes, but it's even more finely layered, and those layers provide features which the standard storage stack can not provide.

      Look into the SPA (Storage Pool Allocator), the Meta-Object and Object layers, the ZPL and ZAP layers, and the several intervening layers, and you might change your mind about it. The design is very well done, and no facilities provided by Linux come anywhere close to matching it. Not even Btrfs.
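      Those layers also map onto distinct objects you actually manage (names made up):

          # vdevs: two mirrored pairs aggregated into one pool
          zpool create tank mirror sda sdb mirror sdc sdd
          # dataset: a filesystem with its own properties
          zfs create tank/home
          # zvol: a block device carved out of the same pool
          zfs create -V 100G tank/vm-disk0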

    • My question is what commercial vendors of enterprise storage products are building them around ZFS.

      My sense was it was probably most useful if you ran a bare metal server and wanted it to deliver native unix filesystems, but a lot less useful than conventional SAN systems for heterogeneous storage environments.

  • I Get It... (Score:4, Interesting)

    by ytene ( 4376651 ) on Sunday January 19, 2020 @05:34AM (#59634366)
    Linus Torvalds said some things that Jim Salter doesn't like. Jim is upset, so his response is to go to print and try to argue that Torvalds is wrong.

    Newsflash: none of that helps.

    If Jim wants to produce an efficient, effective response to Torvalds, then the ONLY way to do so is to take each of the elements of Torvalds' arguments and factually rebut them, item by item.

    For example, the OP provides the following quote, apparently from Torvalds: "And as far as I can tell, it has no real maintenance behind it any more..."

    OK, to respond to this, Jim Salter needs to go to the ZFS Git repository and pull some analysis (a sketch of the sort of queries involved follows the list):

    1. How many versions/patches released in the last 12 months/2 years?
    2. How many individual contributors in the last 12 months/2 years?
    3. How many individual commits in the last 12 months/2 years?
    4. How many operating systems have added support for ZFS in the last 12 months/2 years?
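    For example, against the OpenZFS repo (repo URL as currently hosted; the actual numbers are left for him to pull):

        git clone https://github.com/openzfs/zfs && cd zfs
        # commits in the last two years
        git log --oneline --since="2 years ago" | wc -l
        # distinct contributors in the last two years
        git shortlog -sn --since="2 years ago" | wc -l
        # recent tagged releases, newest first
        git tag --sort=-creatordate | head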


    Follow the same pattern and principles for every single concern that Torvalds raises and take that to print. *Then* you can say that Salter made a reasonable response.

    So far, I've read the Ars article and this piece - and in both cases I've come away with the distinct impression that Salter is whiny and pathetic and needs to grow up or go home. I'm sure he's extremely nice in person, but the way he has gone about responding to Linus, on a subject he clearly feels passionately about, is all wrong.

    The correct way to respond to someone who is perpetuating myths, inaccuracies or outright falsehoods is to provide verifiable fact in response, not just complain about the bits you don't like.

    What we've had so far is, sorry to say, pathetic.
    • by gweihir ( 88907 )

      Indeed. Some people seem to confuse engineering with religion. In Engineering you actually can deliver facts if you are right and the other side is not.

    • Re:I Get It... (Score:5, Informative)

      by Calibax ( 151875 ) on Sunday January 19, 2020 @07:46AM (#59634514)

      Pathetic, you say? Apparently you haven't read the full article, because if you had you would know that Jim Salter went into some detail about the number of commits. Not that commit counts mean a great deal in a product as mature as ZFS, which is used by many large and user-critical sites because it is so resistant to data corruption, but there have been, and continue to be, a substantial number. And there are regular releases, both to support updated hardware and to add new features.

      Your comment about how many OSes have added ZFS support in the last two years makes little sense. OpenZFS has been available on commonly used OSes for a lot longer than 2 years - Linux, MacOS, FreeBSD, Windows, and several OSes with smaller user bases (OmniOS, SmartOS, Illumos, DilOS, TrueOS, etc.)

      As a happy user of ZFS for the last 7 years, you would have to pry it away from my cold, dead hands. It reduces my workload, but more importantly, it gives me peace of mind that petabytes of data are very, very unlikely to disappear due to a data center disaster or be corrupted beyond recovery by hardware failure.

      In short, read Jim's full article. It might change your mind about his response being pathetic.

      Linus has an outsize loudspeaker - perhaps he should be a little more careful about how he uses his greatly amplified voice to badmouth an excellent product and state categorically that it should not be used. He (and some other insiders in Linux kernel development) definitely seem to have something against ZFS - much more than just a licensing issue. I don't understand it.

      • by ninjaz ( 1202 )

        Linus has an outsize loudspeaker - perhaps he should be a little more careful about how he uses his greatly amplified voice to badmouth an excellent product and state categorically that it should not be used. He (and some other insiders in Linux kernel development) definitely seem to have something against ZFS - much more than just a licensing issue. I don't understand it.

        On the other hand, after Linus' clear comments of "don't use it" alongside dismissing its overall value, it would appear difficult to make the case that he was contributing in any way to any type of infringement. I much prefer for Linus to err on the side of not overvaluing the worth of something with a dodgy license. Rather than criticizing Linus, perhaps that energy would be better spent lobbying for a usable license for ZFS.

        For the typical enterprise use case, ZFS was nice on Solaris to avoid paying Ve

        • by rl117 ( 110595 )

          The licence isn't "dodgy", it's a perfectly valid copyleft based upon the MPL. The GPL is incompatible with it. So what? It's still free software.

          I don't like the self-centred sense of entitlement which assumes that everything has to be done to satisfy the GPL and Linux developers' wishes. The free software world is not wholly centred around their requirements. It's been adopted by several other open source operating systems without fuss. It's only Linux where there's such a song and dance about it, and

    • by nomadic ( 141991 )

      Doesn't Torvalds have the same obligation?

  • by aepervius ( 535155 ) on Sunday January 19, 2020 @06:04AM (#59634410)

    The way I read Linus' post, 80% of it is about concern over a litigious company and licensing. Then there is only a throwaway sentence at the end about usability. How that suddenly becomes a rebuttal of 2,300 words about usability and zero words about licensing kinda misses the point...

    • by cas2000 ( 148703 ) on Sunday January 19, 2020 @06:29AM (#59634444)

      How?

      Because Jim Salter doesn't dispute Linus Torvalds' comments about licensing at all. He stated that several times in his article, and he was correct to do so - Torvalds is absolutely right about the licensing issue. The CDDL and GPL are incompatible, and the only entity who can fix that is Oracle, because only Oracle has the right to change the CDDL license on ZFS.

      He disputes Linus's dismissal of ZFS's features as clueless nonsense. Because it is.

      You'd know this if you actually read the article, instead of nit-picking your way through it.

  • Over 90% of Linus' post had to do with whether or not ZFS should be integrated into the Linux kernel, focusing mainly on the litigious nature of Oracle, etc. Given the many suits that Oracle has saddled the industry with, his concern may be valid. I don't know how 'free and clear' the legal position of ZFS is at this point. The unsubstantiated allegation re: maintenance was thrown in as some sort of Parthian shot at ZFS while Linus was already galloping off to his next snarky kernel denial.

    That said, th

  • Never go full retard.

    Linus has lost a lot of people's respect over this. It's a really pathetic NIH attitude. The license issue is valid, but the lack of respect shown to ZFS is really where he loses, and loses bigtime. Linux really has nothing equivalent, and that probably eats him up inside, and he's acting like a little man-child over it.

    Lately, I've had to ditch Ubuntu's ZoL (ZFS on Linux) due to a nasty ZoL data corruption bug, and go back to the more-stable FreeBSD implementation of ZFS. FreeBS
    • ext4 + rsync + rdiff-backup... as far as I read it, people are using ZFS mostly for snapshot backups. There is your equivalent. I think what you meant to say is, Linux doesn't have an equivalent that *seems as cool* as ZFS.
      • by rl117 ( 110595 )

        Snapshots are only the start of ZFS' features. People are using ZFS not because it's "cool", but because it provides features which are completely unmatched by any Linux-native storage solutions. It's a tool, like any other, but it's unique in its capabilities. Until Linux can provide a replacement with equivalent capabilities, it's going to continue to be highly valued.

        You've mentioned rsync several times in this and other threads. Don't you understand that it's orders of magnitude slower and much less

  • by higuita ( 129722 ) on Sunday January 19, 2020 @09:40AM (#59634650) Homepage

    I agree that zfs is full of cool features, but those are also buzzwords! While they are interesting to some, they are useless to others, and all those features have a hidden cost. ZFS with deduplication and compression on a desktop will mostly kill the desktop.

    >filesystem that's extremely resistant to data loss

    hey, right!
    All FSes can use RAID, but zfs uses its own internal implementation (with its own set of limitations). Auto-repair is limited, and a checksum is good for knowing that a file is bad, but will not recover it. The reality is that zfs fails just like other filesystems, with a bigger problem: as it is so complex, it is harder to repair/recover than filesystems like ext4 or xfs. I already lost one zfs pool with dedupe and compression; all attempts to remount it failed, even with dev support.

  • What does ZFS do that can't be accomplished with any filesystem and rsync or rdiff-backup? Sure there is *some* extra overhead to rsync but generally you run backups at a time where overhead isn't a problem. I find the time I gain by not having to install and configure and migrate to ZFS everywhere makes up for it alone. I have to say I tried ZFS and I wasn't impressed. It seems to make things more complicated without really solving a lot of problems that need to be solved.
  • (summary) ZFS Borg Fanbois dispute criticism.
  • "author, podcaster, coder, and "mercenary sysadmin"

    IOW, almost everybody.

    • Whenever I see "coder," I always wonder, does this mean that before becoming a writer they were a low level medical insurance bureaucrat?

      Certainly "mercenary sysadmin" tends to mean, "I'm not a sysadmin but I get stuck administering the network printer." At least, when it isn't a shield used to deflect criticisms of having not read the manual.

  • by QuietLagoon ( 813062 ) on Sunday January 19, 2020 @11:30AM (#59634926)
    Mr Torvalds was correct in his concern about the licensing of ZFS for use in the kernel. Whether or not he is legally correct doesn't matter, he has the ability and obligation to allow into the kernel only the software he feels is appropriate to include in the kernel. That aside, Mr Torvalds was wrong in his gratuitous swipes at the technology and performance of ZFS. He should have stopped with the licensing concerns, as his childish swipes at the technology and performance lessened the gravitas of his statement.
  • by Lady Galadriel ( 4942909 ) on Sunday January 19, 2020 @04:00PM (#59635632)
    My own take on the issue is that it is better for some projects, including OpenZFS, to be outside of the Linux kernel. For example, by being outside of kernel development, bug fixes can be implemented in OpenZFS without the user updating the kernel. And now the OpenZFS GitHub is being shared with FreeBSD's OpenZFS development - shared code when possible, with the intent of encouraging other OpenZFS implementations to use that same GitHub instance. That was one of the biggest reasons the OpenZFS project was created 7 years ago.

    Another take on that is that there are more OSes than Linux that use OpenZFS - for example, FreeBSD / FreeNAS. There are some weirdly new implementations too: OSv (that's not MacOS), which uses OpenZFS as a critical component, and ZFS on Windows. I'm not certain that last one is going to succeed, but it's getting pretty far along.

    As for me, I use OpenZFS on my NAS (FreeNAS) and all my home Linux computers (desktop, media server, laptop). It's for the features, not the performance. There are studies that show you can get a bit more performance out of something else; which something else depends on your workload (XFS, EXT4, or BTRFS). Since my workload varies, it's not important to me.


    Before I moved to OpenZFS at home, I used BTRFS. Never lost data (as far as I knew), but it just never stabilized. Even today, 6 years after abandoning BTRFS, it's still not where USERS need it to be. Having it tied to the kernel, when most distros don't follow the cutting edge (or even bleeding edge), means that users of those distros won't easily get BTRFS bug fixes, let alone features. The old EXT2/3/4 were not too bad, because you could ignore the newer one until it was supported everywhere and had a good reputation for reliability. Thus, I am ignoring BTRFS (like Red Hat has for the last 2 years) until it's widely used.

    To be clear, BTRFS solved one of my problems: alternate boot environments. Basically, I could snapshot my running image, create a new grub entry for the snapshot, and boot off it. Then perform updates. If the updates went south, I simply rebooted to the prior boot environment. This worked well, and neither EXT2/3/4 nor XFS (or most other FSes) supports this feature. And don't drag out LVM snapshots - those need to be considered temporary and are required to be removed or rolled back in a reasonable amount of time.
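    For what it's worth, the manual version of that trick on BTRFS looks roughly like this (the subvolume path and kernel line are illustrative; tools like snapper and grub-btrfs automate it):

        # snapshot the root subvolume before updating
        btrfs subvolume snapshot / /.snapshots/pre-update-2020-01-19
        # a custom GRUB entry then just points the kernel at that subvolume, e.g.
        #   linux /vmlinuz root=/dev/sda2 rootflags=subvol=.snapshots/pre-update-2020-01-19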

    OpenZFS is simply more stable than BTRFS, and more feature-rich - like the compression algorithms: you can change the algorithm on the fly and new data will use your new choice. BTRFS has some choices too, but last I looked (and to be fair that was a while ago), there were problems. One feature that Sun built into OpenZFS (and Oracle did NOT build into BTRFS) is that pool & filesystem attributes are both stored in the pool (or dataset) and easily manipulated. Most of BTRFS' attributes are mount-time options, meaning if you need to change one, you may very well have to un-mount / mount (or even reboot) to make the change effective.


    Final thoughts:

    Now it would be nice if OpenZFS could be redistributed with Linux distros (other than Ubuntu). During an update, if one or the other (kernel or OpenZFS) was updated, no problem. Today, us OpenZFS-on-Linux users do have to put in a bit more effort to make it work, but it's worth it. Better OpenZFS than the alternatives. If it became a violation to use OpenZFS on Linux, I'd seriously consider migrating to FreeBSD. And not just because of OpenZFS - many Linux distros are starting to go off into wonderland, implementing poorly thought-out software (which ends in D), or forcing their brand of desktop on you (looking at you, Gnome 3!).
