Oracle Engineer Talks of ZFS File System Possibly Still Being Upstreamed On Linux (phoronix.com)
New submitter fstack writes: Senior software architect Mark Maybee, who has worked at Oracle/Sun since '98, says we "could" still see ZFS become a first-class upstream Linux file-system. Speaking at the annual OpenZFS Developer Summit, he described how Oracle's focus has shifted to the cloud and how its investment in Solaris has been reduced, and he admits that Linux rules the cloud. His hope is that ZFS becomes a "first class citizen in Linux": Oracle would port its ZFS code to Oracle Linux and then upstream the file-system to the Linux kernel, which would involve relicensing the ZFS code.
Having it NOT be in upstream is more flexible (Score:5, Insightful)
One nice thing about ZFS not being in upstream is that it is currently maintained and updated separate from the Linux kernel.
Now, it would be nice to relicense ZFS under GPL so that it can be included in the kernel. But this should wait until the port is a bit more mature. Right now development is very active on ZFS and we have new versions coming out every few weeks; having to coordinate this with kernel releases will complicate things.
All this said, relicensing ZFS would definitely help Oracle redeem themselves a bit. After mercilessly slaughtering Sun following the acquisition, they have a long way to go to get from the "evil" side back to the forces of good.
Re: (Score:2)
Re: (Score:2)
Why would Oracle want to keep carrying the maintenance costs? This seems awkward to me.
Re: (Score:2)
Re: (Score:3)
Re:Having it NOT be in upstream is more flexible (Score:5, Interesting)
Funny, I thought ZFS was very mature by now.
It's very mature, on Solaris. Linux has a different ABI to the storage layer, and different requirements on how filesystems are supposed to behave. So it's not so much a port as a re-implementation.
Re: (Score:2)
ZFS is mature, but has some curious omissions.
For example, as far as I know it can't use a disk span within a RAID set, i.e. you can't mirror a 4TB drive, say, with 2x2TB drives spanned to present as a 4TB device. (Which is the kind of thing that would make it really easy for a small home NAS to re-use small disks.) If I'm not wrong, then I can only assume that's too niche a case to be interesting in the enterprise environment.
Re: (Score:2)
You can't mirror a 4TB drive, say, with 2x2TB drives spanned to present as a 4TB device.
It doesn't do it natively, but you can hardware RAID the 2x2TB drives and it will treat it like a single 4TB device. It's not best practice, because ZFS uses the SMART counters to warn you of impending drive failure, and hardware RAID masks those, but you can do it.
Re: (Score:2)
Do any existing RAID systems allow you to do that in one step?
You could do it in Linux w/ ZoL by using md to stripe or concatenate the two smaller devices & feed the md block device to ZFS. It should work, but I could see ZFS making some ungood decisions because the underlying hardware is hidden from it. Dunno about performance either.
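A minimal sketch of that md-plus-ZoL approach, assuming made-up device names (/dev/sdb and /dev/sdc as the two 2TB disks, /dev/sdd as the 4TB disk):
# Concatenate the two 2TB disks into one linear md device.
mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb /dev/sdc
# Mirror the 4TB disk against the concatenated md device.
zpool create tank mirror /dev/sdd /dev/md0
As noted above, ZFS then only sees /dev/md0 as one opaque disk, so it can't reason about the two spindles underneath it.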
Re: (Score:2)
Re: Having it NOT be in upstream is more flexible (Score:2)
Re: (Score:2)
Says who? I've done that. There are a bunch of examples if you google for it.
Re: (Score:2)
I googled extensively at the time I was setting up my home NAS (~4 years ago). If it's possible without doing the spanning using something outside ZFS (e.g. hw RAID as others have suggested) I'd be really interested, as from time to time I grow the storage and have to partition the disks in interesting ways.
Re: (Score:2)
Re: (Score:2)
You realize you can use a file as a vdev, right? This means if you want to use two 1TB drives as "one" device, you just create a pool using the 2x 1TB drives, and then in your new setup you just refer to the file that was created.
zpool create test /tank/test/zpool /dev/blah/fake-2TB-device
or
zpool create test
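If the point is just to experiment with the file-vdev trick above, a rough sketch (paths and sizes are invented, and file-backed vdevs are really only sensible for testing):
# Create two sparse 1TB backing files.
truncate -s 1T /var/tmp/vdev-a /var/tmp/vdev-b
# Build a throwaway pool that mirrors the two files.
zpool create test mirror /var/tmp/vdev-a /var/tmp/vdev-b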
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
You can't mirror a 4TB drive, say, with 2x2TB drives spanned to present as a 4TB device.
Err, yes you can. You just really shouldn't, given that you're doubling the failure rate of one of the vdevs and hosing the flexibility of ZFS, which would at best benefit from using all three drives in a single RAIDZ and upgrading them as they fail, at which point your pool automatically grows to the size of the smallest drive.
You can do whatever you want. ZFS won't stop you from turning your hardware into an unmaintainable mess. The fact that you CAN do this in ZFS I see as a huge downside. The only end result here w
Re: (Score:2)
Re: (Score:2)
Likewise, does not the maintainer of, say, the TTY subsystem (just a random pick...) make active changes *between* release cycles, submitting their LAG to the various RCs?
Not to RCs. As I understand it, the kernel is on a three-month cycle: one month of merge window and roughly two months of weekly RCs that are only supposed to be bug fixes. Otherwise you might get an undiplomatic response from Mr. Torvalds. Worse yet, many distros ship kernels much older than that, and despite there being "proper channels", bugs often go directly upstream with a resolution of "we fixed that two years ago, update... sigh, waste of time". So if you're not really ready for production use, being in the ke
Re: (Score:3, Insightful)
Oracle is evil ... period. There is no going back.
Re: Having it NOT be in upstream is more flexible (Score:2)
Oracle is evil ... period. There is no going back
More like "completely ambivalent", not really the same as "evil".
At least that's what I was going to say, until I remembered the click-through mess they put in front of downloading the jre and jdk. Pure malice.
Re: (Score:2, Insightful)
I don't believe this is Oracle's better nature or whatever; ZFS has to transition from Solaris to Linux because Solaris is dead.
It's really that simple. If Oracle can gin up a little excitement and maybe score some kudos then great, why not? But ultimately this has to happen or the official Oracle developed ZFS will die with its only official platform.
Re: (Score:2)
But this is Oracle we're talking about. I doubt they would GPL something, because in their minds they'd lose control of it and allow the competition to exploit their code. After all, that's what Oracle itself has done to competitors like Red Hat. Aside from that, assuming they did GPL it, it would immediately fork b
Drawback of separate development (Score:3)
One nice thing about ZFS not being in upstream is that it is currently maintained and updated separate from the Linux kernel.
And that's actually a huge problem that makes it a major obstacle to its upstream adoption.
Mainly due to code duplication.
ZFS (and its competitor BTRFS) is peculiar, because it's not just a filesystem. It's a whole integrated stack that includes a filesystem layer on top, but also a volume management and replication layer underneath (ZFS and BTRFS are on their own the equivalent of a full EXT4 + LVM + MDADM stack).
That is a necessity, due to some features in these: e.g. the checksumming going on in the f
Re: (Score:2)
Don't worry about it. One day, systemd will manage it all.
Re: (Score:2)
they have a long way to go to get from the "evil" side back to the forces of good.
What do you mean "back"? I can't ever remember a time when Oracle wasn't obnoxious.
Re: (Score:2)
No. I've been running ZoL for almost a decade. It's constantly bitten by kernel API changes; the kernel devs will break ZFS without a second thought, and it happens all the time.
It's been a while since we last went three months without a working ZFS-head build on Fedora (or other newish kernels), but there's still nothing stopping that from happening again.
Dual-licensing to something GPL-compatible would allow parts of the SPL/ZFS stack to be brought in-kernel, even if most of it stayed outside, at le
Re: (Score:2)
Re: (Score:3)
OpenZFS and Oracle ZFS have diverged a bit. The on-disk pool contains a version number which identifies with certainty whether you can import it on a given implementation, so there's at least no chance of mistakenly importing it on the wrong implementation and losing data that way. They're interoperable for pools that aren't upgraded past the highest pool version supported in the final CDDL release of Oracle ZFS. Beyond that, they won't work.
Oracle ZFS has since added file-level encryption. The encryption and the on-disk structure
Re: (Score:2)
If Oracle licensed ZFS as GPL in addition to the current CDDL, (nearly) everyone could use it. CDDL is incompatible with GPL (intentionally so), but CDDL is NOT incompatible with BSD or most other non-GPL open licenses.
The BSDs used Sun's own ZFS code for years before OpenZFS was founded. I ran my NAS on FreeBSD with it for about three years until ZoL stabilized enough that I jumped back to Gentoo. CDDL isn't copyleft, so the BSDs can use it without any problem. If they stripped the CDDL option and re
Re: (Score:2)
Nice post, "+1 Informative". But "-1 Wrong" about Oracle ever being non-evil. Ever.
https://www.youtube.com/watch?... [youtube.com]
"You need to think of Larry Ellison the way you think of a lawnmower. You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end. You don't think 'oh, the lawnmower hates me' -- lawnmower doesn't give a shit about you, lawnmower can't hate you. Don't anthropomorphize the lawnmower. Don't fall into that trap about Oracle." -- Bryan Cantrill https://www.youtube.com/watch?... [youtube.com]
Good: it's about time. (Score:2)
Careful there (Score:2)
ZFS wants to live in a fairly specific configuration. It wants a bunch of drives, a bunch of memory, and not much competition for system resources. It's really a NAS filesystem, which is why there are no recovery utilities for it. If your filesystem takes a dump, you're SOL, hope you have a backup.
You can run it on a single drive on a desktop machine, but you are incurring a bunch of overhead and not getting the benefits of a properly set up ZFS configuration.
Re:Careful there (Score:5, Insightful)
ZFS wants to live in a fairly specific configuration. It wants a bunch of drives, a bunch of memory, and not much competition for system resources.
Except for the part where it works with 2 drives, on a system with 4GB of RAM and under constant heavy load just fine.
Re: (Score:2)
Precisely, a bunch of drives, or a RAID, starts at two drives.
Re: (Score:2)
Being pedantic here, but you are wrong, and there are circumstances where this matters.
You can make a RAID1 array with one drive plus a failed (non-existent) drive. Hence the minimum is actually 1 drive, not two.
Re: (Score:2)
RAID, as defined in the original paper, involves data striping and striping cannot be implemented with less than 2 drives.
If you desire redundancy, RAID requires a minimum of 3 drives. A mirrored drive pair is not RAID, it is just mirroring.
Re: (Score:2)
Depends on what you mean by a drive. I have a horrible hard drive which was declared almost in its grave by SMART long ago. I made 2 partitions, run "software RAID1" across the 2 partitions, and store one final backup on it.
If it dies, nothing is lost.
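For the curious, a sketch of that single-disk setup, with made-up names (/dev/sdx1 and /dev/sdx2 being the two partitions on the dying drive):
# Mirror the two partitions of the same physical disk.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdx1 /dev/sdx2
# Put an ordinary filesystem on top for the final backup.
mkfs.ext4 /dev/md1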
Re: (Score:2)
Precisely, a bunch of drives, or a RAID, starts at two drives.
Actually you're more than happy to run it on 1 drive as well. There's nothing "precise" about the GP's assertion that ZFS wants a fairly specific configuration.
Re: (Score:2)
Generally in computers it is best to go from "only 1 device" directly to "n devices" and not to waste time special-casing 2 devices, 3 devices, 4 devices.
Config? (Score:2)
Are you doing Z+1? Or just striping with an L2ARC, which is nearly pointless? What's the areal density of the drives? 'Cause if you are using anything above 2TB, the odds of getting uncorrectable errors on both drives become non-trivial.
At this point you are better off using XFS with a really good backup strategy.
Re: (Score:2)
So they say. Don't you find it odd that a drive can't possibly correct for errors but a filesystem can?
I wonder if drive vendors acknowledge that 100% of their high capacity drives are incapable of functioning without uncorrectable errors. Perhaps they should implement ZFS internally and all problems would be solved.
Re: (Score:2)
So they say. Don't you find it odd that a drive can't possibly correct for errors but a filesystem can?
That's because the filesystem can just write to a different spot on the device, but if a specific spot on the physical device goes bad, it's bad. In fact, almost all drives automatically error-correct; you can see the stats through utils like "smartctl". A drive generally has 10-20% more than its advertised capacity and exports a virtual mapping of the drive. As sectors start to show signs of failing, the address is transparently remapped to some of this "extra" space and things continue as normal. It's only a drive-
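To see those remapping counters yourself, something like the following should do (device name is a placeholder; attribute names vary by vendor):
# Dump the SMART attributes and pick out the remapping-related ones.
smartctl -A /dev/sda | grep -Ei 'reallocated|pending'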
Re: (Score:2)
the filesystem can just write to a different spot on the device, but if a specific spot on the physical device goes bad it's bad.
That's not true at all. Modern HDDs can remap sectors, to other tracks if necessary.
Errors (Score:2)
A drive can correct for errors if a block is bad. The problem is that as areal densities increase, the odds of data changing randomly increase. This is mainly due to cosmic rays or other natural sources of radiation, but there can be other factors. The drive doesn't know anything about the data itself; it only knows whether it can read a block or not, and that's really the way you want it. You want the drive to be structure- and data-agnostic. Otherwise you would need a specific drive for a specific file system, which
Scrub of death (Score:2)
Please remind me not to let you administer my filesystems.
http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/ [jrs-s.net]
https://forums.freenas.org/index.php?threads/ecc-vs-non-ecc-ram-and-zfs.15449/ [freenas.org]
https://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&p=26303271#p26303271 [arstechnica.com]
https://www.csparks.com/ZFS%20Without%20Tears.html [csparks.com]
Re: (Score:2)
Why? All the articles you link to describe one failure mode which is not only purely theoretical but can also be avoided by simply not scrubbing the pool. No one is forcing you to do that, and you can run ZFS just as happily as any other file system with non-ECC RAM and still get some of the benefits, including the filesystem potentially alerting you to failing RAM rather than silently screwing your system as it would with any other filesystem.
Re: (Score:2)
Correction: I have sat down and read all your links in detail. The claims that ZFS scrubbing will destroy your pool on non-ECC RAM are actually garbage; they don't take into account the actual failure mechanism of the RAM, or the response of the scrub, which is to leave data untouched if an unfixable error occurs. So scrub away.
GP was right, there are no special hardware requirements for ZFS and you should have no problem letting him administer your sensitive data.
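For reference, the operation being argued about is just this (pool name "tank" assumed here):
# Start a scrub, then check whether it found or repaired anything.
zpool scrub tank
zpool status -v tank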
Re: (Score:2)
Whilst I run it on bunches of drives, I also use it on single drives when I want to know all data is correct. Backups are great, but silent data corruption, which gets copied to backup, can mess everything up.
Re: (Score:2)
I use ZFS on a NAS with a bunch of drives, but I also use it on a hosted VM with under 1GB of RAM on a single (virtual) drive and a few local VMs. The benefits that I'm apparently not getting include:
Re: (Score:2)
ZFS wants to live in a fairly specific configuration.
ZFS wants nothing, but many of its advanced features require certain configurations. Want to run it with 12 drives and 32GB of RAM on a simple file server? Go for it; it really shines. Want to run it on a single drive on a system with 2GB of RAM? Go for it; there are no downsides there vs any other file system.
It's really a NAS filesystem, which is why there are no recovery utilities for it.
There are no recovery utilities because they are rarely needed. The single most common configuration involves redundancy. ZFS's own tools include those required to fix zdb errors and recover data on a
And once it's in the kernel, Oracle will sue... (Score:2)
*cough*Java*cough*
Btrfs (Score:2)
I played with zfs-fuse on KDE Neon a couple years ago after reading from its acolytes that it was "more advanced" and "better" than EXT4 or Btrfs. It wasn't. A lot of it is missing in the fuse rendition.
I switched to Btrfs. I have three 750GB HDDs in my laptop. I use one as a receiver of @ and @home backup snapshots. I've configured the other two as a 2-HD pool, then as a RAID1, and then back to a pool again. In 2 1/2 years of using Btrfs I've never had a single hiccup with it.
There are some exce
Re: (Score:2)
ZFS fuse is not ZFS on Linux. Not sure why you'd pass judgement on ZFS having only used it years ago with the fuse version. If you want a real test, try the latest ZFS on Linux releases. They are kernel modules not fuse drivers.
I have run BtrFS for about 5 years now, and I must say it works well on my laptop with an SSD. However, on my desktop with a spinning disk it completely falls over. It started out pretty fast for the first few years, but now it's horrible. The slightest disk I/O can freeze my system for
Re: (Score:2)
Quick question for you: do you have quotas enabled? Updating qgroups takes an enormous amount of time. I had the same symptoms on my laptop with a 1T drive, and turning off quotas and removing qgroups solved it.
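If anyone wants to try the same fix, a sketch (the mount point is a placeholder):
# Turn off quota tracking; this also stops qgroup accounting from being updated.
btrfs quota disable /mnt/data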
Re: (Score:2)
Oh you had me excited there for a second. But no, alas, quotas and qgroups are not enabled, as near as I can tell.
Re: (Score:2)
Re: (Score:2)
I played with zfs-fuse on KDE Neon a couple years ago after reading from its acolytes that it was "more advanced" and "better" than EXT4 or Btrfs.
They should have claimed no such thing. ZFS-Fuse was a shitty workaround for a licensing issue that many people are still arguing may not actually be real. It has effectively been undeveloped for many years, and as a fuse module it was not capable of implementing the entire ZFS stack as required.
Switching to btrfs from zfs-fuse has nothing to do with ZFS itself. You just switched from the worst option to the second best. btrfs is still preferable to ext4 in my opinion, but it doesn't hold a candle to ZFS
New to ZFS (Score:4, Informative)
Just as this article popped up I was assembling a JBOD array (twelve 4TB drives) for a new data center project, my first in quite a while. It's also self-funded, so I don't have to defer to anyone on decisions.
When I started I did a bit of reading, trying to decide what RAID hardware to get. To make a long story short, once I read about the architecture of ZFS and several somewhat-polemic-but-well-reasoned blog entries, I decided that is what I wanted.
Only two months ago I had an aged Dell RAID array let me down. I have no idea what actually happened, but it appears some error crept into one of the drives and got faithfully spread across the array, and there was just no recovering it. If I didn't have good backups, that would have been about 12 years of the company's IP up in smoke. I just thought I'd share.
So I ended up as a prime candidate (with a new-found distrust for hardware RAID) to be a new ZFS-as-my-main-storage user. I've just recently learned stuff that was well established five years ago [pthree.org] and I can't understand why everybody doesn't do it this way.
Wow. Snapshots? I can do routine low-cost snapshots? Data compression? Sane volume management? (I consider LVM to be the crazy aunt in the attic. Part of the family, but ...) Old Solaris hands are probably rolling their eyes, but this is like manna from heaven to me.
Given the plethora of benefits I am sure the incentive is high enough to keep ZFS on Linux going onward. ZFS root file system would be nice but I am more than willing to work around that now.
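For anyone equally new to it, a few of the conveniences mentioned above look roughly like this (pool and dataset names are made up):
# Enable cheap inline compression on a dataset.
zfs set compression=lz4 tank/projects
# Take a low-cost snapshot and list it.
zfs snapshot tank/projects@before-migration
zfs list -t snapshot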
Re: (Score:2)
> Only two months ago I had an aged Dell RAID array let me down. I have no idea what actually happened, but it appears some error crept in one of the drives and it got faithfully spread across the array and there was just no recovering it. If I didn't have good backups that would have been about 12 years of the company's IP up in smoke. I just thought I'd share.
It may have been the RAID write hole ?
See Page 17 [illumos.org]
Re: (Score:2)
It may have been the RAID write hole ?
I was wondering if that is what it was, but with the stress of having a major file server down I just couldn't justify the hours it would take to a) learn how to diagnose it and then b) do an analysis. That system had only the one VM left on it so I was just happy enough to take the latest VM image and put it on another hypervisor.
One drive was making ugly noises, so maybe (probably) a head crash. The confident product theory of hardware RAID is that this shouldn't have mattered; the remaining good drive(s) s
Re: (Score:2)
You may also want to take a look at btrfs. It sounds like a match for the feature set that interests you, and it is already available on Linux.
Re: (Score:2)
ZFS is also "already available" on Linux and has been for several years. By comparison btrfs is still in diapers, and support has currently been dropped by all major Linux vendors save for SUSE, plus whatever the fuck Oracle is doing in the Linux world right now (trying to appear relevant).
ZFS is more mature and in far more active development.
Re: (Score:2)
Re: (Score:2)
That would be insignificant if anyone other than SUSE was throwing anything behind btrfs. btrfs seems to be losing favour ever since Ubuntu decided to change their roadmap from potentially including btrfs as a default to declaring it outright experimental, with the current roadmap favouring ZFS as the future default.
By support being dropped I don't mean technical support or official support, I mean that the major vendors (other than SUSE) are no longer supporting the idea of btrfs becoming the next g
Re: (Score:2)
Re: (Score:2)
btrfs is still the future
For whom?
Dismissing the two biggest players in the industry as not cutting edge, despite the fact that they aren't abandoning it out of conservatism (Ubuntu isn't anyway) doesn't paint it as "the future".
Especially given that ZFS is further along in development, more actively developed, and has a more advanced roadmap, I question whether btrfs is the same kind of "future" as Clean Coal or a slightly more efficient car, etc. If btrfs is the future, you're going to have a hard time convincing people of it.
Re: (Score:2)
Re: (Score:2)
Ubuntu.
But nice attempting to change the focus of the discussion. Remember the word I used over and over again? "Future" Now please scroll back to the start and read that entire thread over again.
Antergos native ZFS for the root filesystem (Score:2)
Unfortunately, it's not quite there. Very close though.
https://antergos.com/wiki/miscellaneous/zfs-under-antergos/ [antergos.com]
Re: (Score:2)
Note "HOPES" (Score:2)
He hopes that... but he has no decision power, I bet. Maybe he's on the next firing list.
This is Oracle that we're talking about, it's more likely they'll let you license ZFS for a couple thousand per month...
Re: (Score:2)
The version in BSD is an older version, derived from when Solaris was open-source in 2007. It is independently maintained and a part of OpenZFS. In fact the ZFS stacks in IllumOS (a fork of open-source Solaris), FreeBSD, Linux and OS X share a lot of code and are compatible, in the sense that if you create a ZFS filesystem on one of these OSes, it will work on the others.
OpenZFS has made enormous progress. I have been using it on my FreeBSD, Linux and OS X (macOS) boxes for over 3 years now.
Re: (Score:2)
Not the best fit since it's schizophrenic (Score:3)
> The problem with ZFS on Linux is that some aspects of it are redundant with the kernel.
Probably ALL aspects of it. Linux already has a raid implementation in-kernel. It already has filesystems. It already has multiple volume managers, which handle whichever type of snapshots you prefer. It already has IO schedulers. ZFS, or rather something that looks just like it, can be implemented as a few configuration lines for pre-existing Linux components.
Because Linux normally lets you use your choice of fil
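A loose sketch of that build-it-from-existing-pieces stack (device names, sizes and the choice of xfs are all assumptions); note it does not reproduce ZFS's end-to-end checksumming and self-healing, which is exactly what the replies below pick at:
# Software RAID6 across four disks.
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Volume management on top of the array.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 1T -n data vg0
# Your choice of filesystem on top.
mkfs.xfs /dev/vg0/data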
Re: (Score:2)
That last bit is important. If ZFS doesn't have a way to put its hands into the RAID, it can't attempt to rebuild known corrupted data. Until mdadm and hardware RAID controllers allow you to issue a "read, but try to give a different result" operation you can't do this. (Said operation would attempt to use parity even on a healthy array in an attempt to give a diffe
Re: (Score:2)
It's called scrubbing, and RAID has always done it (Score:2)
> Until mdadm and hardware RAID controllers allow you to issue a "read, but try to give a different result" operation you can't do this. (Said operation would attempt to use parity even on a healthy array in an attempt to give a different block content by pretending a disk is dead).
So until the late 1980s? That's called RAID scrubbing and I believe it was mentioned toward the end of the original RAID paper in 1987 or 1988. Certainly 10 years ago I had a "mdadm check" command in my crontab. I know this
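On current Linux the periodic check is typically driven through sysfs rather than an mdadm subcommand; a sketch, assuming an array named md0:
# Ask md to read and compare all copies/parity on the array.
echo check > /sys/block/md0/md/sync_action
# Afterwards, see how many mismatched blocks were found.
cat /sys/block/md0/md/mismatch_cnt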
Re: (Score:2)
Unfortunately ZFS does not allow you to detect corruption until it is too late to do anything about it other than throw the file away.
That is you write something to disk with checksums which will allow you to detect that it is corrupt. However unless you read what you have written back immediately you have no idea whether what you have just written actually made it to the storage device intact.
If you want to make sure what you have just written to the storage device made it there intact, and is still intact wh
Re: (Score:2)
Oh it can be "checked" by RAID controllers. The question is, how do you know which copy is correct? In the case of a RAID-1, if the 2 disks don't have identical data, which do you assume is the right data? ZFS has checksums to figure out which is right. MDADM doesn't.
And if there is an API to allow you to ask for data from a specific disk rather than letting the RAID driver pick one, I'm interested.
Heard of RAID levels 2 through 6? (Score:2)
> ZFS has checksums to figure out which is right. MDADM doesn't.
You have no idea how RAID works, do you? Neither through the mdadm UI nor any other.
RAID level 2 uses Hamming error correction codes.
Levels 3 through 5 use checksums much like ZFS does. Level 6 uses two independent sets of checksums, so even if you lose half your checksums, you're still okay.
>. if there is an API to allow you to ask for data from a specific disk rather than letting the RAID driver pick one, I'm interested.
An API to r
Re:Not the best fit since it's schizophrenic (Score:5, Insightful)
> Because Linux normally lets you use your choice of file system on top of your choice of volume manager, on top of whichever RAID implementation you choose, with your choice of IO scheduling options, ZFS isn't exactly the best fit. ZFS mashes all those different things into one big blob. That's not really how Linux is designed.
Criticizing ZFS for "rampant layering violation" has been discussed to death before [archive.org]
"Dumb" API's, such as the ones implemented in Linux, have a STRICT layered approach like this:
* Volume Management
* File Management
* Block (RAID)
Problems start when each layer needs information at the layer above it. This is epitomized with the design flaw in hardware RAID via the write-hole [oracle.com]. Link to English version [google.com]
In contradistinction ZFS takes a holistic, unified approach:
* Volume Management <--> File Management <--> Block
e.g.
The original RAIDZ implementation was written in 599 lines of code in vdev_raidz.c -- less code equals fewer bugs.
https://github.com/illumos/ill... [github.com]
> That's the same issue as systemd
No, it isn't. You are comparing apples to oranges. ZFS works because it intentionally "flattened the stack" -- yes, this runs counter to the layered Unix approach -- but sometimes that is NOT the best design decision.
Meanwhile Oracle keeps flailing about with Btrfs.
ZFS vs BTRFS (Score:3)
In contradistinction ZFS takes a holistic, unified approach:
* Volume Management <--> File Management <--> Block
{...}
ZFS works because it intentionally "Flattened the stack" -- Yes, this runs counter to the layered Unix approach
The problem is that ZFS implements this by rolling everything into the same "rampant layering violation" package.
It is one single "flattened stack".
On the other hand, BTRFS shares as much code as possible with in-kernel facilities (it leverages "device mapper" routines that are also used by lvm, mdadm, mdraid, etc.; it leverages in-kernel compression routines that are also used by the kernel loader and the crypto module; etc.).
It's not as much a "rampant layering violation" as a wrapper against layer facilities a
Btrfs works? No it most certainly does not. (Score:2)
Just ask SUSE [suse.com]:
Feature set (Score:2)
Just ask SUSE [suse.com]:
Just learn to read the docs [kernel.org] if you insist on having esoteric options turned on.
In 2017, RAID56 is still marked incomplete.
Modern filesystems are a huge pile of diverse features; some are fully stable and used in production (e.g. RAID0 and 1), others are still in development (e.g. RAID56).
Complaining that BTRFS is complete crap because RAID5/6 isn't fully functional and production-ready is like complaining that the venerable XFS is utter crap because its copy-on-write and snapshotting don't work yet.
(and BTW
Which docs? (Score:2)
How about these?
Friends don't let friends use BtrFS for OLTP. [pgaddict.com]
Re: (Score:2, Interesting)
ZFS mashes all those different things into one big blob. That's not really how Linux is designed.
That's because Linux isn't designed, it's grown organically in a hodgepodge fashion. Some people think this is a good thing. Others do not.
A weblog post by Jeff Bonwick, one of the creators of ZFS, from ten years ago:
Andrew Morton has famously called ZFS a "rampant layering violation" because it combines the functionality of a filesystem, volume manager, and RAID controller. I suppose it depends what the meaning of the word violate is. While designing ZFS we observed that the standard layering of the storage stack induces a surprising amount of unnecessary complexity and duplicated logic. We found that by refactoring the problem a bit -- that is, changing where the boundaries are between layers -- we could make the whole thing much simpler.
https://blogs.oracle.com/bonwick/rampant-layering-violation
He gives a reasonable answer as to why glomming all that together has its advantages. Good intro slide deck:
https://wiki.illumos.org/download/attachments/1146951/zfs_last.pdf
Note that "ZFS" is actually made up of three layers: the SPA
Re: (Score:2)
> Because Linux normally lets you use your choice of file system on top of your choice of volume manager,
The problem is: btrfs, exfat, ext3, ext4, fat, jfs, reiserfs, and xfs ALL SUCK -- they all propagated write errors [danluu.com]
FS / read / write
btrfs.. | prop prop prop
exfat.. | prop prop ignore
ext3... | prop prop ignore
ext4... | prop prop ignore
fat.... | prop prop ignore
jfs.... | prop ignore ignore
reiserfs | prop prop ignore
xfs.... | prop prop ignore
Re: (Score:2)
Probably ALL aspects of it.
That's like saying ext4 is redundant because ext2 exists. Just because parts of ZFS serve the same functions as existing pieces of the Linux I/O stack doesn't make it comparable or redundant.
Actually the only word I would use is "better" given the feature list.
Re: (Score:2)
As you may know, Red Hat has deprecated BTRFS in RHEL 7.4 [redhat.com], whereas many distributions like Ubuntu fully support ZFS [ubuntu.com].
I would say that the status of BTRFS is worse [kernel.org] than that of OpenZFS on Linux. See also here [ixsystems.com] for an interesting article.
Re: (Score:2)
I would say you are wrong.
That RH has not retained qualified Btrfs programmers is their business decision and has little to nothing to do with Btrfs or its usability.
https://www.itwire.com/open-sa... [itwire.com]
KDE Neon User Edition has zfs-fuse and a version of OpenZFS in its repository. I've played with the fuse version and was unimpressed.
After I tried zfs-fuse I tried Btrfs. I've been using it without a single fault or problem for 2 1/2 years.
Re: (Score:2)
Whatever the reason, btrfs is not supported in production on RHEL. It has never been, it's always been in "preview" and will soon be out of the picture completely.
It's been going on for years so I would agree with the above that OpenZFS would have a brighter future.
Re: (Score:2)
I use btrfs on openSuSE and it works great.
Re: Mainstream in FreeBSD... (Score:2)
Right, btrfs has proven itself as a stable filesystem which is not prone to corrupting itself on single or mirrored drive configurations.
Except it isn't, and does so regularly.
The practical advantages of ZFS over btrfs far outweigh the architectural encapsulation of ZFS. That limitation primarily relates to the ARC, a situation which has plagued FreeBSD, illumos, and even Solaris since the changeover from SPARC to x86. It is drastically better now across the board, particularly on Linux, where the native memo
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)