
Ubuntu 16.04 LTS To Have Official Support For ZFS File System (dustinkirkland.com)

LichtSpektren writes: Ubuntu developer Dustin Kirkland has posted on his blog that Canonical plans to officially support the ZFS file system for the next Ubuntu LTS release, 16.04 "Xenial Xerus." The file system, which originates in Solaris UNIX, is renowned for its feature set (Kirkland touts "snapshots, copy-on-write cloning, continuous integrity checking against data corruption, automatic repair, efficient data compression") and its stability. "You'll find zfs.ko automatically built and installed on your Ubuntu systems. No more DKMS-built modules!" N.B. ext4 will still be the default file system due to the unresolved licensing conflict between Linux's GPLv2 and ZFS's CDDL.
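For the curious, here is roughly what that looks like on a 16.04 system; a minimal sketch assuming zfsutils-linux as the userspace package name, with no pool configuration shown:

    # Install the userspace tools; the kernel module ships prebuilt with the kernel:
    sudo apt install zfsutils-linux
    # Load the module and confirm it's the in-kernel build, not DKMS-built:
    sudo modprobe zfs
    modinfo zfs | head -n 3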
  • by Anonymous Coward on Thursday February 18, 2016 @08:12AM (#51533853)
    All file systems are approximately the same for most day to day users. I would be interested in knowing which is fastest at read/writes.
    • For large files: btrfs or XFS.
      For millions of small files: ext4.

        • The main drawback with zfs is that it does not have a repairing fsck and never will have one. The Kool-Aid you are supposed to drink is that RAID will fix any corruption, so if anything ever does go wrong, and that would include bugs, random memory bit flips, multiple disk errors (lightning storm, anyone?), and any number of other hazards that defeat RAID recovery, zfs is just screwed and won't even attempt to get back the data that is most probably still sitting there, mostly intact.

        If you need snapshots and remo

        • Um, what?

          zfs scrub

          Just because it isn't called fsck doesn't mean it doesn't have one.
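          (For the record, scrubbing operates on pools, so the command is actually spelled "zpool scrub", not "zfs scrub". A quick sketch, with "tank" as a placeholder pool name:)

            sudo zpool scrub tank
            # shows scrub progress, plus any checksum errors found and repaired:
            zpool status -v tank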

          • Um, what?

            zfs scrub

            Just because it isn't called fsck doesn't mean it doesn't have one.

            Zfs scrub is just a raid repair; it does not understand the structure of the filesystem and therefore is incapable of repairing inconsistencies, or of detecting any inconsistency that does not show up as a raid checksum failure. Zfs scrub is definitely not a repairing fsck, and it is beyond me why zfs boosters like to lie about that, or fool themselves.

            • Zfs scrub is just a raid repair

              Let's call it online block-level filesystem integrity checking and repair using a redundant copy if it is available. Simply saying it's "just a raid repair" understates what it actually does. With scrubbing, you can detect and fix silent data corruption, which is something you cannot do with traditional fsck.

              it does not understand the structure of the filesystem and therefore is incapable of repairing inconsistencies

              You will have to be more specific about what you mean. You are correct that it does not repair metadata inconsistencies, but that is because those are handled in a different way. The #1 thing fsck does

              • Can you run into an obscure zfs bug that causes you to unrecoverably lose your entire pool? Yes, of course, which is why you need good backups in every case.

                So obviously you understand that in some cases of corruption, Ext4 can recover useful data while with Zfs your only option is to restore from backup. Now try to explain to any Ext4 user who has successfully rescued data (and there are many, including me) why they should give up this safety net in the name of some psychobabble about how raid repair is just as good as fsck repair, if only you had proper backups. Oops.

                • You cannot recover from bad ext4 superblocks with fsck, so how is this any different from the equally unlikely scenario of a zfs bug resulting in an unrecoverable zpool? You can recover data from ext4 using low-level tools, yes, but I have no reason to believe you can't also do this with zfs if you are comfortable with the underlying filesystem structure and know where to look. Most people just don't bother because restoring from backup is easier.

                  why they should give up this safety net in the name of some psychobabble about how raid repair is just as good as fsck repair,

                  You missed it. Scrubbing is better than fsck, for certain typ

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      All file systems are approximately the same for most day to day users. I would be interested in knowing which is fastest at read/writes.

      And that's meaningless without specifying the hardware you're doing the comparison on, your access pattern(s), file system layout, data distribution within the file system, and other factors.

    • Re: (Score:3, Informative)

      by Anonymous Coward

      I think you don't know what ZFS really is. It's a very different deal than ext4, UFS, etc. It is the file system that made HW RAID controllers obsolete. Even with a single-disk setup you get a lot of features that you don't have on most other filesystems. It is a big deal just because of cheap snapshots and data integrity checks.
      And no, BTRFS is not even close... yet.

      • by Curunir_wolf ( 588405 ) on Thursday February 18, 2016 @09:27AM (#51534209) Homepage Journal

        I think you don't know what ZFS really is. It's a very different deal than ext4, UFS, etc. It is the file system that made HW RAID controllers obsolete.

        It also made just about any computer with less than 8 GB of RAM obsolete. It's also not very friendly with applications that need large chunks of RAM, like a database or large Java VM application - the ARC cache causes a lot of fragmentation and is often slow to release that memory when other applications need more.

        • by Aaden42 ( 198257 ) on Thursday February 18, 2016 @09:48AM (#51534369) Homepage

          On 64-bit hosts, the ARC cache is a non-issue. Java needs contiguous *virtual* memory space. Physical memory fragmentation isn't a problem w/ the MMU translating contiguous 64-bit address space to possibly non-contiguous physical pages. On 32-bit hosts, that gets dicey. On 64-bit, you've got plenty of room even w/ ARC.

          That said, I'd love to see ARC & the native Linux disk cache functionality either merge or at least have ARC behave more like the normal caching mechanism (i.e. free up RAM more eagerly), but it's not actually caused me significant problems on 64-bit.

        • by lewiscr ( 3314 )

          It also made just about any computer with less than 8 GB of RAM obsolete.

          a) Pick the right tool for the job.
          b) ZFS works fine without lots of RAM. Either cap the ARC, or disable it.

          I plan to use ZFS for my personal NAS. I'll have 4TiB of storage (spinners) and 2GiB of RAM. It's mostly media storage, so ARC isn't terribly useful. And ZFS will auto-disable the ARC if the machine has less than 4GiB of RAM. Sure, it's not going to set any benchmark records, but I don't need it to. Streaming media at the home scale isn't taxing for modern PCs.
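          For a low-RAM box like that, capping the ARC is a one-liner; a sketch assuming the ZFS on Linux zfs_arc_max module parameter (value in bytes; the 512 MiB figure is an arbitrary example):

            # persist a 512 MiB ARC cap across module reloads:
            echo "options zfs zfs_arc_max=536870912" | sudo tee /etc/modprobe.d/zfs.conf
            # current ARC size and cap:
            grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats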

          It's also not very friendly with applications that need large chunks of RAM, like a database or large Java VM application

          I love ZFS for my database ser

          • Interesting.

            I plan to use ZFS for my personal NAS. I'll have 4TiB of storage (spinners) and 2GiB of RAM.

            So are you using a Linux distro? I looked at doing something similar, but FreeNAS now needs 8 GB of RAM. I just want something like a home-built Synology. Small and efficient. I had pretty much ruled out using ZFS though.

            I love ZFS for my database servers. It plays very well with PostgreSQL

            I wasn't aware of that, but I don't use PostgreSQL for anything except one application, and it requires a LOT of resources. Most of my small stuff is SQLite and legacy MySQL. ZFS, I think, would kill MySQL.

            It's beautiful for RAID1.

            Seems like overkill for that, IMHO, but I've only had 1 failure on

            • I looked at doing something similar, but FreeNAS now needs 8 GB of RAM.

      Meh. I played around with FreeNAS for a while but wasn't too impressed. It makes some things easy. If the preconfigured setup does everything you need, great, but if you need access to more configuration options/customization (like I did), it's a PITA. Just grab a copy of zfsonlinux and roll your own. It's really not much more complicated than any other filesystem+LVM setup.
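      For reference, "roll your own" really is about this short; a sketch with placeholder device names (use /dev/disk/by-id paths for disks you care about):

        # mirrored pool plus one dataset for the share:
        sudo zpool create tank mirror /dev/sda /dev/sdb
        sudo zfs create tank/media
        zpool status tank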

      • and none of that matters because Ubuntu's grub-efi doesn't even know how to boot from XFS, let alone from ZFS.

        creating a separate /boot partition with ext4 defeats the purpose of ZFS and its most useful feature of boot environments. ZFS likes its disks whole, not partitioned.

        • by fnj ( 64210 )

          creating a separate /boot partition with ext4 defeats the purpose of ZFS

          Utter bullshit. I have one ZFS server that is root-on-ZFS, and one with an ext4 root and boot drive. They are both equally useful and performant. Sure, you can do interesting things with root-on-ZFS, but after experiencing both, I am fairly well decided that on balance I prefer not using it.

    • by UnknownSoldier ( 67820 ) on Thursday February 18, 2016 @09:12AM (#51534115)

      > I would be interested in knowing which is fastest at read/writes.

      Ignoring the fact that this is a HIGHLY ambiguous question, i.e. you don't specify _which_ RAID setting, here are some benchmarks:

      = 2010 =
      http://www.zfsbuild.com/2010/0... [zfsbuild.com]

      = 2013 =
      ZFS On Linux 3.8 Kernel, ZOL 0.6.1
      https://openbenchmarking.org/r... [openbenchmarking.org]

      = 2015 =
      A PERFORMANCE COMPARISON OF ZFS AND BTRFS ON LINUX
      * https://www.diva-portal.org/sm... [diva-portal.org]

      • by epine ( 68316 )

        These benchmarks are sensitive to extremely subtle differences in how each file system interprets safety semantics, which unfortunately none of these "benchmark" utilities actually check.

        By "subtle" I mean just a scattered handful of sunflower seeds, which may (or may notâ"don't look at the light!) attract the attention of the Black Swan of Extreme Face Melt.

        One thing I read a while back explained how rigorous NFS semantics were pretty much guaranteed to cut your benchmark results in half, compared to h

        • I completely agree! Benchmarks only test performance and (usually) completely ignore correctness.

          "It doesn't matter HOW fast you read/write if the data is WRONG." (The whole point of a FS (file system) is to guarantee the data is valid!)

          Benchmarks are not one-dimensional. We need to graph multiple axes:

          * Correctness
          * Throughput
          * Latency
          * IOPS
          etc.

          Where is the benchmark that demonstrates how well the FS handles "reboot -t now" right in the middle of writing a huge block of data??
          Where is the benchmark that

    • by The-Ixian ( 168184 ) on Thursday February 18, 2016 @10:01AM (#51534467)

      I used MythTV for years as a DVR and I tried a lot of different file systems.

      The 2 that always worked the best were JFS and XFS, for the sole reason that large file deletes took almost no time at all, compared to several seconds or even minutes with other file systems.

      • I used MythTV for years as a DVR and I tried a lot of different file systems.

        The 2 that always worked the best were JFS and XFS, for the sole reason that large file deletes took almost no time at all, compared to several seconds or even minutes with other file systems.

        Yup. On my MythTV system, I use XFS on the filesystem hosting the video files and ext on the other filesystems.

        • by lewiscr ( 3314 )
          XFS was designed to be a media filesystem when SGI wrote it for IRIX. It's a good fit.
          I plan to use ZFS for my media storage, but there is one important consideration. ZFS does NOT like to be more than 80% full. If you're planning to fill the disks greater than 80%, stick with XFS. I'm not, so I'm going with ZFS. XFS still has issues in this scenario, but it's not as bad as ZFS.
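          If you do go with ZFS for media, watching the capacity column is cheap insurance; "tank" below is a placeholder pool name:

            # performance degrades as CAP creeps toward ~80%:
            zpool list -o name,size,alloc,free,cap tank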
      • I've also found JFS to work best for me on a MythTV box. Aside from the file deletion issue, there's also streaming throughput during recording (especially if the storage is accessed via NFS). As much of a ZFS fan as I may be, I prefer using the best tool for the job, and MythTV is where JFS shines.

    • I had an incident where my photos folder suffered silent filesystem corruption. Fortunately, my backup tool (Unison) does enough file comparisons that it did not brainlessly overwrite the undamaged images still in the backup, but instead flagged them as conflicts. It taught me a lesson about what is "good enough" for day-to-day users. Just like a lightning strike taught me a lesson about off-site backups.

    • Not at all. ZFS with a nice GUI can be extremely useful for home users: custom-tailored compression per folder, or snapshots for easy backup and quick retrieval of overwritten files. None of that has to be enterprise-only; it can have a profoundly positive impact on general users day to day.
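      Both of those features are one-liners today, GUI aside; a sketch with placeholder dataset and file names:

        # per-dataset ("per-folder") compression:
        sudo zfs set compression=lz4 tank/photos
        # snapshot, then pull an overwritten file back out of it:
        sudo zfs snapshot tank/photos@before-cleanup
        cp /tank/photos/.zfs/snapshot/before-cleanup/IMG_0001.jpg /tank/photos/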
    • All file systems are approximately the same for most day to day users.

      Day to day users tolerate silent corruption of their data?

      That's news to me.

      Full disclosure: I'm a day to day user running ZFS because I've lost pieces of data to silent corruption before.

    • by fnj ( 64210 )

      All file systems are approximately the same for most day to day users.

      Possibly you could make that argument if all ZFS was, was a file system. That's not the case, though. ZFS is a fully integrated file system and logical volume manager, complete with built-in RAID facilities far more advanced than those otherwise available. Another vast advantage is the ability to create and destroy hierarchical file systems (not just directories) at any time, without interrupting operation. The creation is

    • Home users use Windows so this does apply

  • BTRFS (Score:5, Interesting)

    by ssam ( 2723487 ) on Thursday February 18, 2016 @08:13AM (#51533861)
    I'll stick with BTRFS thanks. It gives me all those features, is GPL and has been trouble free for me on many TB of disks for several years.
    • Re: (Score:2, Informative)

      by fbobraga ( 1612783 )
      It's a promising FS, but it's not very stable yet: I tried BTRFS on a netbook (with Arch) and it corrupted a micro-SD card so many times that I gave up and went back to ext4. Because of that, I've never considered using BTRFS on production systems the way I do with ZFS.
      • Btrfs is the default filesystem on my phone [jolla.com], and it's worked just fine up to now, for me and for most other users. The only issue I had is that it needs rebalancing every now and then, and the cron job is set up to do it only if the phone is connected to a charger when it starts (but that is an issue with the phone developers, not with btrfs).
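        For reference, the rebalance such a job runs looks roughly like this; the usage filters keep it cheap by only rewriting mostly-empty chunks (the 50% thresholds are arbitrary examples):

          sudo btrfs balance start -dusage=50 -musage=50 /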
    • Re: (Score:2, Informative)

      by Anonymous Coward

      I'll stick with BTRFS thanks. It gives me all those features, is GPL and has been trouble free for me on many TB of disks for several years.

      Encryption? Oh yeah: [kernel.org]

      Btrfs does not support native file encryption (yet), and there's nobody actively working on it. It could conceivably be added in the future.

      "Nobody actively working on it" is a big problem with BTRFS.

      BTRFS comes from Oracle - pre-Sun purchase. It was Oracle's answer to ZFS. And now Oracle owns ZFS and doesn't need a copy of the original. It's not quite abandonware, but the central impetus for its creation and advancement is gone.

      And most of all:

      Is btrfs stable?

      Short answer: Maybe.

      Ouch. That's the official BTRFS wiki page.

      • by Bert64 ( 520050 )

        So given that Oracle created btrfs as a competitor to zfs because the latter used a license incompatible with the Linux kernel, and now they own zfs, why wouldn't they just GPL (or dual-license) zfs and forget about btrfs?

        • by Aaden42 ( 198257 )
          Oracle considers ZFS a competitive advantage. It's their answer to NetApp's WAFL. Not sure the reasoning behind creating btrfs (other than possibly just merger schedules resulting in them owning both), but it's very likely they consider the GPL/CDDL incompatibility and resulting copyright FUD/trolling to be a feature. Having an in-tree ZFS module on Linux isn't something Oracle wants to see.
        • by lewiscr ( 3314 )
          Oracle doesn't give away anything they can sell.

          ZFS v28 was the last version that was open source, by Sun.
          Oracle is still developing newer versions of ZFS, but they are closed source.
          I believe ZFS is available in Oracle Linux, but I haven't verified that. I'm not sure how they get around the licensing issues.
      • Lies (Score:4, Interesting)

        by dlenmn ( 145080 ) on Thursday February 18, 2016 @09:47AM (#51534361)

        It's not quite abandonware, but the central impetus for its creation and advancement is gone.

        I wasn't planning to comment on this thread, but this is too big a lie to let stand -- unless by "not quite abandonware" you mean "has absolutely nothing in common with abandonware besides being a type of software". Oracle was never the sole developer, and now that Oracle has lost interest, the developers just moved to other companies and kept doing the same thing. Its raison d'être remains to provide an advanced filesystem that's easily integrated with Linux, which for better or worse means being licensed under the GPL or something compatible.

        As for encryption, yeah that would be nice to have, but it's not like zfs has all the features btrfs has. I'll take btrfs's online balancing (ability to add and remove drives at will) over built in encryption, but I realize that's a personal choice.

        Finally, let's actually quote the FAQ correctly on stability:

        Short answer: Maybe.

        Long answer: Nobody is going to magically stick a label on the btrfs code and say "yes, this is now stable and bug-free". Different people have different concepts of stability: a home user who wants to keep their ripped CDs on it will have a different requirement for stability than a large financial institution running their trading system on it. If you are concerned about stability in commercial production use, you should test btrfs on a testbed system under production workloads to see if it will do what you want of it. In any case, you should join the mailing list (and hang out in IRC) and read through problem reports and follow them to their conclusion to give yourself a good idea of the types of issues that come up, and the degree to which they can be dealt with. Whatever you do, we recommend keeping good, tested, off-system (and off-site) backups.

        Pragmatic answer: (2012-12-19) Many of the developers and testers run btrfs as their primary filesystem for day-to-day usage, or with various forms of real data. With reliable hardware and up-to-date kernels, we see very few unrecoverable problems showing up. As always, keep backups, test them, and be prepared to use them.

        For all practical purposes, btrfs is stable. Everything they say in the long answer basically applies to linux in general (unless you have a support contract with Red Hat or the likes).

        • by Bengie ( 1121981 )
          Open ZFS doesn't have encryption because it's low priority since all of the OSes have proper bootable transparent full-disk encryption.
      • by Aaden42 ( 198257 )

        ZFS on Linux doesn't support native encryption yet either. ZFS on Solaris does, but that code was added after OpenSolaris was killed and has never been clearly released under the CDDL. (It *has* been leaked, but without clear CDDL license assignment, thus nobody in their right mind has touched it.)

        You *can* easily do ZFS on LUKS-based encryption on Linux. It works great, but it's a very different thing with a different feature set than native ZFS encryption. Native ZFS crypto allows encrypting
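        The LUKS route is only a few commands; a sketch with placeholder device and pool names (note this encrypts the whole block device underneath ZFS, not individual datasets):

          sudo cryptsetup luksFormat /dev/sdb
          sudo cryptsetup open /dev/sdb crypt-sdb
          sudo zpool create tank /dev/mapper/crypt-sdb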

      • Re:BTRFS (Score:4, Interesting)

        by ssam ( 2723487 ) on Thursday February 18, 2016 @09:57AM (#51534447)
        I am using BTRFS on LUKS on my laptop. Even during a motherboard failure that caused repeated hard power-offs, I did not lose any data (and thanks to data checksumming I know that there is no corruption lurking in the files). BTRFS has developers at Facebook, Fujitsu, SUSE, IBM and still gets patches from people at Oracle. Seems a fairly healthy project to me.
    • Re:BTRFS (Score:5, Informative)

      by Anonymous Coward on Thursday February 18, 2016 @08:41AM (#51533985)

      As a die-hard BTRFS user that chases kernel releases like an addict chases crack, I can't help but say that there are still some annoying issues out there.

      While none have given me data loss, you'll get the occasional deadlock from a set of kthreads that do compression, or a severe slowdown with next to no disk I/O and big WAITIO (I usually see a load of 16.xx in such cases on a quad-core machine). In the slowdown case you'll see a speed drop from 150MB/s to ~900KB/s on spinning rust for a couple of minutes. It happens only after heavy use, in the range of 2+TB written with forced compression.

      ENOSPC? Not on my end. Trying to copy a file and running out of space results in WAITIO through the roof while BTRFS tries to find free space. I've had a job that stalled and thrashed the hard drive for 9 hours while it tried to recover space. At no point did it simply kill the transfer due to running out of space; btrfs usage showed around 1GB of space left for data extents, with plenty for metadata, and that's what kills the whole deal. You can't use that last 1GB; you'll just deadlock until some space is recovered by deleting files manually. Happens every time: just transfer something larger than the available free space and watch it suffer.

      All this with Linux kernel 4.4.2. Looking at the various mailing lists, with regular posts from people hitting obscure problems I've never encountered before, I can't really say it's on par with ZFS stability. And ZFS on Linux is still missing a few things from the true ZFS implementation, last I checked, but it's usable. I can't comment on ZoL's long-term stability, but I would feel comfortable enough using it instead of BTRFS for, say, a production server.

      • by lewiscr ( 3314 )
        That sounds better than ZFS. If you actually manage to fill one up 100%, you're (probably) screwed. Due to its copy-on-write implementation, deleting a file requires free space.
        If you have some snapshots, you can drop those to free up some space. If you don't have snapshots to drop, your only option to recover is to enlarge the volume. You can either add another RAID extent (which you can't ever remove), or replace all of the disks with larger disks and expand.
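        The snapshot escape hatch, sketched with placeholder names:

          # find the cheapest snapshot to sacrifice:
          zfs list -t snapshot -o name,used -s used
          sudo zfs destroy tank/data@oldest-snapshot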
  • For containers (Score:5, Informative)

    by DrYak ( 748999 ) on Thursday February 18, 2016 @08:15AM (#51533869) Homepage

    More precisely, the blog post is about using ZFS' copy-on-write (CoW) capabilities in the context of Linux containers [wikipedia.org].
    (Thin virtualized machines: the guests share the same kernel as the host, but userspace is separated and compartmentalized using the kernel's cgroup feature.
    Similar to BSD jails and Solaris containers.
    Think of a chroot, except extended to all the other concepts besides the file system.)

    The fast and easy snapshotting that comes with CoW filesystems like ZFS (or BTRFS, for that matter) makes it very easy to spin up new virtualized containers simply by snapshotting the subtree holding the empty template, while wasting only minimal resources (only the differences are stored as the two copies diverge over time).
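    The whole spin-up amounts to two commands; a sketch with placeholder dataset names:

      sudo zfs snapshot tank/containers/template@base
      sudo zfs clone tank/containers/template@base tank/containers/web01
      # only blocks that diverge from @base consume new space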

    • For containers (at least on FreeBSD) it's far better to have one base install of the OS the way you like it, and just use nullfs mounts to overlay that with a writable directory for each container.

      Where ZFS's snapshots and clones will kick total ass is KVM virtual machines.

      In either situation, at least on FBSD, you can allow the guest container/VM to manage its own ZFS; it stays part of your larger pool and works as expected, but the children can create snapshots, clones, and filesystems in their little portion o

    • Except that for running containers you really want BTRFS rather than ZFS, as having many containers means your memory is needed by the containers themselves. ZFS shines on a file server where it can use all memory for itself and there's no need to actually use the page for a running process -- ext4, BTRFS and any other well-behaved filesystem will share a page with processes that mmap it.

    • Indeed. I do this on my company's internal VM server with LXC+BTRFS and it is amazing. Once we get one system set up, we can snapshot it and make many copies. Even better is working with old versions: since we keep a copy of each old image, if something comes up it's easy to spin up a clone of an old version and reproduce the customer's issue.
  • by Parker Lewis ( 999165 ) on Thursday February 18, 2016 @08:19AM (#51533887)
    12.04 and 14.04 were kind of just the previous versions with updated programs, nicely polished, and with updated drivers. But 16.04 will have exciting new stuff: privacy enabled by default, ZFS, a new software centre, the first LTS with systemd (yep, mind that, I like it!) and kernel 4.x.
  • by BaronM ( 122102 ) on Thursday February 18, 2016 @08:24AM (#51533907)

    Every time I see news about ZFS and Linux, it's a little bit less of a mess. Eventually, I expect that all of the major distributions will go this route and sidestep the licensing issue by providing distro-supported modules that are installed by user request, sort of like the way that Nvidia drivers are provided.

    • by fnj ( 64210 )

      Every time I see news about ZFS and Linux, it's a little bit less of a mess.

      I've been using ZFS on Linux for years, and the only thing that ever came close to being a mess was the horseshit called DKMS. Under CentOS 6 (and I suspect any other Linux), it absolutely insisted on building a mess of something it called "weak-modules" when I updated the kernel. These are nothing more than a mess of symlinks to the old module, and they kept breaking my system. I never found a workable way to prevent their creation,

  • ZFS is seriously cool in many ways, but you pay for that with some pretty significant RAM requirements for a file system driver. If I remember correctly, you need about 8GB of RAM to really make use of ZFS. I think it's great that they're including it with the distribution, but it wouldn't make sense to have this as the default file system. At least not until the average system out there is running with 16GB of RAM.

    • > ZFS is seriously cool in many ways,

      Indeed. Does anyone have an updated version of this ZFS vs BTRFS cheat-sheet table?

      http://www.seedsofgenius.net/s... [seedsofgenius.net]

      > but you pay for that with some pretty significant RAM requirements for a file system driver.
      > If I remember correctly, you need about 8GB of RAM to really make use of ZFS.

      Yeah ZFS could be considered "bloated" but you get so many SWEET benefits.

      Personally, I'd recommend having at least 8 GB of RAM solely just for ZFS RAIDZ1/RAIDZ2, leaving the o

    • by guruevi ( 827432 )

      Which server isn't running at least 16GB these days? Even a cheap dedicated server comes with that by default. And ZFS doesn't actually require all that much RAM: it does better with more (read caching, etc.) and it requires a lot for dedupe, but as a 'standard' file system it can easily get by with 1 or 2GB of RAM.

      • by Aaden42 ( 198257 )

        Do not, repeat DO NOT ENABLE DE-DUPE unless you have gargantuan amounts of RAM.

        Rule of thumb is 5GB of RAM per 1TB of ZFS data: http://constantin.glez.de/blog... [constantin.glez.de]

        If you ever enable dedupe on a pool, it's on forever. You can't actually turn off the extra RAM requirements since there *could* be de-duped blocks, and ZFS must check for those on every pool import. On a system with insufficient RAM, it's possible to end up with a pool that can take hours or days to import with no indication that it's actuall
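        Before enabling it anywhere, zdb can simulate the dedup table against existing data and report the would-be ratio, which is a much cheaper way to find out it isn't worth it ("tank" is a placeholder pool name):

          # simulate dedup and print the DDT histogram plus estimated ratio:
          sudo zdb -S tank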

        • Re: (Score:2, Interesting)

          by Anonymous Coward

          Also don't enable dedup if you have media with a nontrivial seek time. It's tolerable on flash (but you do lots of extra I/O on write) but the deduplication table (DDT) tends to develop a random layout with respect to device LBAs, and the DDT needs to be consulted on each write to a dataset/zvol with dedup enabled, and it also needs to be scanned *first* during scrubs and resilvers. DDTs with millions of entries can require hundreds of thousands of random I/Os, which means hundreds of thousands of se

    • Deduplication is the RAM hog (it's also a performance killer). I run a ZFS file server (FreeNAS) on just 5 GB of RAM with deduplication off, despite FreeNAS recommending 8 GB. I haven't had any problems (in fact the logs rarely show it going above 2-3 GB of usage - only when I copy a huge number of files to it at once does it go above that).

      Deduplication sounds cool and all, but if you don't have a heavy need for it (e.g. you're running 20 identical virtual machines with their files stored on ZFS), jus
    • If I remember correctly, you need about 8GB of RAM to really make use of ZFS.

      No. ZFS has performance advantages with more RAM available to ARC cache, and file de-duplication is incredibly RAM intensive, but both are configurable / optional features.

      You can still get all the wonderful features of a modern CoW filesystem with data integrity checks with even a tiny amount of RAM. ZFS needs a minimum of about 1GB almost to itself to run well, and that should be achievable on even the most basic of systems (16GB of ECC RAM is like $100, and if you're not willing to spend even 1/3rd of that t

  • I used to use Ubuntu for my file server. Since the desktop motherboard didn't have built-in video, I used an old Nvidia video card to configure the installation. Every time Ubuntu did an automatic upgrade of the video driver, it hosed the installation and I would have to reinstall. I switched over to FreeNAS five years ago and haven't looked back. I rebuilt my server box last year to meet the beefier hardware requirements for ZFS. Bad things will happen if you run ZFS on less than adequate hardware.
    • by LichtSpektren ( 4201985 ) on Thursday February 18, 2016 @09:37AM (#51534273)
      I used the wrong tool for the job, therefore it sucks.
      • I used the wrong tool for the job, therefore it sucks.

        Please explain what "tool" I should use to turn an old PC with a Nvidia video card into a file server underneath my desk at home?

        From 1997 to 2010, I used Linux and Samba. Every now and then, the installation got hosed. In the early days, it was compiling the kernel the wrong way. With Ubuntu, it was the video driver upgrade for the Nvidia card, and it happened so frequently that I had the installation steps memorized.

        From 2010, I've used FreeNAS. A USB stick would go bad and hose the installation from time

        • I used the wrong tool for the job, therefore it sucks.

          Please explain what "tool" I should use to turn an old PC with a Nvidia video card into a file server underneath my desk at home?

          From 1997 to 2010, I used Linux and Samba. Every now and then, the installation got hosed. In the early days, it was compiling the kernel the wrong way. With Ubuntu, it was the video driver upgrade for the Nvidia card, and it happened so frequently that I had the installation steps memorized.

          From 2010, I've used FreeNAS. A USB stick would go bad and hose the installation from time to time. Formatting a new USB stick, installing FreeNAS and copying over the backup config file took five minutes.

          Well, your original post said you didn't have a video card, so at that point you should've used Ubuntu Server or Debian or something else that doesn't require X.org. However, you used an old video card so you could get a GUI going. Fine. But after the second or third time a driver update broke it, you should've begun declining the driver updates--you said it was a file server, right? You don't need the latest gfx drivers for that.

            • But after the second or third time a driver update broke it, you should've begun declining the driver updates

              I did. Ubuntu still saw that I had Nvidia video installed in the system and managed to hose the installation anyway. I don't know if this process has gotten any better since 2010, but it was a PITA at the time I was using it.

            you said it was a file server, right? You don't need the latest gfx drivers for that.

              This was my only Linux box at the time. Recruiters told me I needed Linux GUI experience on my resume. The problem with that was that I prefer minimalist window managers that just open terminal and web browser windows, while what recruiters really wanted was Red Hat GUI experience as specified on their re

    • by ssam ( 2723487 ) on Thursday February 18, 2016 @10:19AM (#51534619)
      Why do you need the closed Nvidia driver on a server? Nouveau should be fine or even just the vesa driver. (I could ask why you even need a video card on a server, but I guess some folks prefer that to using ssh or a serial connection from a laptop)
      • Nouveau should be fine or even just the vesa driver.

        I don't know if that was an option five to ten years ago.

        I could ask why you even need a video card on a server, but I guess some folks prefer that to using ssh or a serial connection from a laptop

        If the installation got hosed from the video update, SSH wasn't going to work. The only way to diagnose and reinstall Ubuntu was with the video plug. Keep in mind that this was an old PC underneath my desk at home.

  • As far as I know, ZFS is the only file system that can deal with duplicating hard links appropriately. Say you have a backup pool that works with hard links and you want a clone of that pool; that's very difficult with anything else.
