Ubuntu 16.04 LTS To Have Official Support For ZFS File System (dustinkirkland.com) 191
LichtSpektren writes: Ubuntu developer Dustin Kirkland has posted on his blog that Canonical plans to officially support the ZFS file system for the next Ubuntu LTS release, 16.04 "Xenial Xerus." The file system, which originates in Solaris UNIX, is renowned for its feature set (Kirkland touts "snapshots, copy-on-write cloning, continuous integrity checking against data corruption, automatic repair, efficient data compression") and its stability. "You'll find zfs.ko automatically built and installed on your Ubuntu systems. No more DKMS-built modules!" N.B. ext4 will still be the default file system due to the unresolved licensing conflict between Linux's GPLv2 and ZFS's CDDL.
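For readers who want to try it once 16.04 lands, getting ZFS going should be a couple of commands; this is a rough sketch (zfsutils-linux is the package name in the xenial archive; the rest is routine):

    sudo apt install zfsutils-linux    # userland tools; zfs.ko already ships with the kernel
    sudo modprobe zfs                  # load the module
    modinfo zfs | head -n 3            # confirm the module is present and see its version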
For home users, basically meaningless. (Score:4, Insightful)
Re: For home users, basically meaningless. (Score:3)
Large files: btrfs or xfs.
For millions of small files...ext4
Re: (Score:3)
The main drawback with zfs is that it does not have a repairing fsck and never will. The kool-aid you are supposed to drink is that raid will fix any corruption, so if anything ever does go wrong (and that includes bugs, random memory bit flips, multiple disk errors from a lightning storm, and any number of other hazards that defeat raid recovery), zfs is just screwed and won't even attempt to get back the data that is most probably still sitting there, mostly intact.
If you need snapshots and remo
Re: (Score:2)
Um, what?
zfs scrub
Just because it isn't called fsck doesn't mean it doesn't have one.
Re: (Score:2)
Um, what?
zfs scrub
Just because it isn't called fsck doesn't mean it doesn't have one.
Zfs scrub is just a raid repair, it does not understand the structure of the filesystem and therefore is incapable of repairing inconsistencies, or detecting any inconsistency that does not show up as a raid checksum failure. Zfs scrub is definitely not a repairing fsck, and it is beyond me why zfs boosters like to lie about that, or fool themselves.
Re: (Score:2)
Zfs scrub is just a raid repair
Let's call it online block-level filesystem integrity checking and repair using a redundant copy if it is available. Simply saying it's "just a raid repair" understates what it actually does. With scrubbing, you can detect and fix silent data corruption, which is something you cannot do with traditional fsck.
it does not understand the structure of the filesystem and therefore is incapable of repairing inconsistencies
You will have to be more specific about what you mean. You are correct that it does not repair metadata inconsistencies, but that is because those are handled in a different way. The #1 thing fsck does
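For reference, the scrub being argued about above is a one-liner; 'tank' is a placeholder pool name:

    zpool scrub tank        # walk every block, verify checksums, repair from redundancy where possible
    zpool status -v tank    # progress, bytes repaired, and any files with unrecoverable errors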
Re: (Score:2)
Can you run into an obscure zfs bug that causes you to unrecoverably lose your entire pool? Yes, of course, which is why you need good backups in every case.
So obviously you understand that in some cases of corruption, Ext4 can recover useful data while with Zfs your only option is to restore from backup. Now try to explain to any Ext4 user who has successfully rescued data (and there are many, including me) why they should give up this safety net in the name of some psychobabble about how raid repair is just as good as fsck repair, if only you had proper backups. Oops.
Re: (Score:2)
You cannot recover from bad ext4 superblocks with fsck, so how is this any different from the equally unlikely scenario of a zfs bug resulting in an unrecoverable zpool? You can recover data from ext4 using low-level tools, yes, but I have no reason to believe you can't also do this with zfs if you are comfortable with the underlying filesystem structure and know where to look. Most people just don't bother because restoring from backup is easier.
why they should give up this safety net in the name of some psychobabble about how raid repair is just as good as fsck repair,
You missed it. Scrubbing is better than fsck, for certain typ
Re: (Score:3)
ZFS does not need a fsck utility because it cannot break like other filesystems.
Excellent sense of humour :)
Re: For home users, basically meaningless. (Score:4, Insightful)
If you are using ZFS, you need to have the offline backup.
So many things can go wrong with ZFS due to failures beyond your control. You use ZFS so you don't have to restore, and keep an offline backup for when ZFS is fucked.
If you can't afford to offline your ZFS data, ZFS is not for you.
Re: For home users, basically meaningless. (Score:2)
It makes perfect sense when you consider how many people deploy zfs without understanding the implications.
Of course you should have offline backup of all your data. And yet people archive dozens of TBs...deploy zfs for speed and error avoidance...but don't have a solution to back it all up. But yeah, I'm just an idiot for stating the obvious that zfs is not bulletproof.
Re: (Score:2)
Again an un-proofread post. Sigh. But at least this time I can divine what you are saying. You are right that any data not backed up is at risk. Duh.
Re: (Score:2)
JFS is also pretty fast with large files and mature.
And unmaintained.
Re: (Score:3, Insightful)
All file systems are approximately the same for most day to day users. I would be interested in knowing which is fastest at read/writes.
And that's meaningless without specifying the hardware you're doing the comparison on, your access pattern(s), file system layout, data distribution within the file system, and other factors.
Re: (Score:3, Informative)
I think you don't know what ZFS really is. It's a very different deal than ext4, UFS, etc. It is the file system that made HW raid controllers obsolete. Even with a single-disk setup you get a lot of features that you don't have on most other filesystems. It is a big deal just because of cheap snapshots and data integrity checks.
And no, BTRFS is not even close... yet.
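As a rough illustration of the "cheap snapshots" point (the dataset names here are made up):

    zfs snapshot tank/home@before-upgrade    # instantaneous, initially costs ~0 bytes
    zfs list -t snapshot                     # space only grows as the live data diverges
    zfs rollback tank/home@before-upgrade    # undo everything written since the snapshot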
Re:For home users, basically meaningless. (Score:5, Interesting)
I think you don't know what ZFS really is. It's a very different deal than ext4, UFS, etc. It is the file system that made HW raid controllers obsolete.
It also made just about any computer with less than 8 GB of RAM obsolete. It's also not very friendly with applications that need large chunks of RAM, like a database or large Java VM application - the ARC cache causes a lot of fragmentation and is often slow to release it when other applications need more.
Re:For home users, basically meaningless. (Score:5, Interesting)
On 64-bit hosts, the ARC cache is a non-issue. Java needs contiguous *virtual* memory space. Physical memory fragmentation isn't a problem w/ the MMU translating contiguous 64-bit address space to possibly non-contiguous physical pages. On 32-bit hosts, that gets dicey. On 64-bit, you've got plenty of room even w/ ARC.
That said, I'd love to see ARC & the native Linux disk cache functionality either merge or at least have ARC behave more like the normal caching mechanism (i.e. free up RAM more eagerly), but it's not actually caused me significant problems on 64-bit.
Re: (Score:3)
It also made just about any computer with less than 8 GB of RAM obsolete.
a) Pick the right tool for the job.
b) ZFS works fine without lots of RAM. Either cap the ARC, or disable it.
I plan to use ZFS for my personal NAS. I'll have 4TiB of storage (spinners) and 2GiB of RAM. It's mostly media storage, so ARC isn't terribly useful. And ZFS will auto-disable the ARC if the machine has less than 4GiB of RAM. Sure, it's not going to set any benchmark records, but I don't need it to. Streaming media at the home scale isn't taxing for modern PCs.
It's also not very friendly with applications that need large chunks of RAM, like a database or large Java VM application
I love ZFS for my database ser
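Capping the ARC, as suggested above, is a one-line module option on ZFS on Linux; the 1 GiB figure here is only an example:

    # /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=1073741824    # cap the ARC at 1 GiB (value in bytes)

The same value can also be written to /sys/module/zfs/parameters/zfs_arc_max at runtime to try it out before committing to a reboot.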
Re: (Score:2)
Interesting.
I plan to use ZFS for my personal NAS. I'll have 4TiB of storage (spinners) and 2GiB of RAM.
So are you using a Linux distro? I looked at doing something similar, but FreeNAS now needs 8 GB of RAM. I just want something like a home-built Synology. Small and efficient. I had pretty much ruled out using ZFS though.
I love ZFS for my database servers. It plays very well with PostgreSQL
I wasn't aware of that, but I don't use PostgreSQL for anything except one application, and it requires a LOT of resources. Most of my small stuff is SQLite and legacy MySQL. ZFS, I think, would kill MySQL.
It's beautiful for RAID1.
Seems like overkill for that, IMHO, but I've only had 1 failure on
Re: (Score:2)
I looked at doing something similar, but FreeNAS now needs 8 GB of RAM.
Meh. I played around with FreeNAS for a while but wasn't too impressed. It makes some things easy. If the preconfigured setup does everything you need, great, but if you need access to more configuration options/customization (like I did), it's a PITA. Just grab a copy of zfsonlinux and roll your own. It's really not much more complicated than any other filesystem+LVM setup.
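"Rolling your own" really is only a few commands once zfsonlinux is installed; the device names and options below are placeholders, not a recommendation for any particular layout:

    sudo zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc    # two-disk mirror
    sudo zfs create -o compression=lz4 tank/media                   # a dataset with cheap compression
    zpool status tank                                               # health and layout at a glance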
Re: (Score:2)
I guess I should have said "The FreeNAS Community page states under Minimum Hardware Requirements that at least 8 GB of RAM are required for current version of FreeNAS."
That better?
Re: For home users, basically meaningless. (Score:2)
That sentence is what put me off using it at home but maybe I should have another look. I took over support for a production system with ZFS not long after it came out and didn't really trust it but it never failed and had all those great features: easy to expand, constant fs checking, volumes, snapshots etc. Sun had some brilliant engineers.
Re: (Score:2)
and none of that matters because Ubuntu's grub-efi doesn't even know how to boot from XFS, let alone from ZFS.
creating a separate /boot partition with ext4 defeats the purpose of ZFS and its most useful feature of boot environments. ZFS likes its disks whole, not partitioned.
Re: (Score:2)
Utter bullshit. I have one ZFS server that is root-on-ZFS, and one with an ext4 root and boot drive. They are both equally useful and performant. Sure, you can do interesting things with root-on-ZFS, but after experiencing both, I am fairly well decided that on balance I prefer not using it.
Re:For home users, basically meaningless. (Score:4, Informative)
> I would be interested in knowing which is fastest at read/writes.
Ignoring the fact that this is a HIGHLY ambiguous question, i.e. you don't specify _which_ RAID setting, here are some benchmarks:
= 2010 =
http://www.zfsbuild.com/2010/0... [zfsbuild.com]
= 2013 =
ZFS On Linux 3.8 Kernel, ZOL 0.6.1
https://openbenchmarking.org/r... [openbenchmarking.org]
= 2015 =
A PERFORMANCE COMPARISON OF ZFS AND BTRFS ON LINUX
* https://www.diva-portal.org/sm... [diva-portal.org]
Re: (Score:3)
These benchmarks are sensitive to extremely subtle differences in how each file system interprets safety semantics, which unfortunately none of these "benchmark" utilities actually check.
By "subtle" I mean just a scattered handful of sunflower seeds, which may (or may notâ"don't look at the light!) attract the attention of the Black Swan of Extreme Face Melt.
One thing I read a while back explained how rigorous NFS semantics were pretty much guaranteed to cut your benchmark results in half, compared to h
Re: (Score:2)
I completely agree! Benchmarks only test performance and (usually) completely ignore correctness.
"It doesn't matter HOW fast you read/write if the data is WRONG" (The whole point of a FS (File System) is to guaranteed the data is valid!)
Benchmarks are not one-dimensional. We need to graph multiple axes:
* Correctness
* Throughput
* Latency
* IOPS
etc.
Where is the benchmark that demonstrates how well the FS handles "reboot -t now" right in the middle of writing a huge block of data??
Where is the benchmark that
Re:For home users, basically meaningless. (Score:5, Interesting)
I used MythTV for years as a DVR and I tried a lot of different file systems.
The 2 that always worked the best were JFS and XFS for the sole reason that large file deletes took almost no time at all. Compared to several seconds or even minutes with other file systems.
Re: (Score:2)
I used MythTV for years as a DVR and I tried a lot of different file systems.
The 2 that always worked the best were JFS and XFS for the sole reason that large file deletes took almost no time at all. Compared to several seconds or even minutes with other file systems.
Yup. On my MythTV system, I use XFS on the filesystem hosting the video files and ext on the other filesystems.
Re: (Score:2)
I plan to use ZFS for my media storage, but there is one important consideration. ZFS does NOT like to be more than 80% full. If you're planning to fill the disks greater than 80%, stick with XFS. I'm not, so I'm going with ZFS. XFS still has issues in this scenario, but it's not as bad as ZFS.
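Checking how close a pool is to that 80% line is straightforward ('tank' is again a placeholder pool name):

    zpool list -o name,size,allocated,free,capacity tank    # 'capacity' is the percentage of the pool in use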
JFS and MythTV (Score:2)
I've also found JFS to work best for me on a MythTV box. Aside from the file deletion issue, there's also streaming throughput during recording (especially if the storage is accessed via NFS). As much of a ZFS fan as I may be, I prefer using the best tool for the job, and MythTV is where JFS shines.
Re: (Score:3)
I had an incident where my photos folder suffered silent filesystem corruption. Fortunately, my backup tool (Unison) does enough file comparison that it did not brainlessly overwrite the undamaged images still in the backup, but instead flagged them as a conflict. It taught me a lesson about what is "good enough" for day-to-day users. Just like a lightning strike taught me about off-site backup for day-to-day users.
Re: (Score:2)
Re: (Score:2)
All file systems are approximately the same for most day to day users.
Day to day users tolerate silent corruption of their data?
That's news to me.
Full disclosure: I'm a day to day user running ZFS because I've lost pieces of data to silent corruption before.
Re: (Score:2)
Possibly you could make that argument if all ZFS was, was a file system. That's not the case, though. ZFS is a fully integrated file system and logical volume manager, complete with built-in RAID facilities far more advanced than those available otherwise. Another vast advantage is the ability to create and destroy hierarchical file systems (not just directories) at any time during operation without interrupting operation. The creation is
Re: For home users, basically meaningless. (Score:2)
Home users use Windows so this does apply
BTRFS (Score:5, Interesting)
Re: (Score:2, Informative)
Re: (Score:2)
Re: (Score:2, Informative)
I'll stick with BTRFS thanks. It gives me all those features, is GPL and has been trouble free for me on many TB of disks for several years.
Encryption? Oh yeah: [kernel.org]
Btrfs does not support native file encryption (yet), and there's nobody actively working on it. It could conceivably be added in the future.
"Nobody actively working on it" is a big problem with BTRFS.
BTRFS comes from Oracle - pre-Sun purchase. It was Oracle's answer to ZFS. And now Oracle owns ZFS and doesn't need a copy of the original. It's not quite abandonware, but the central impetus for its creation and advancement is gone.
And most of all:
Is btrfs stable?
Short answer: Maybe.
Ouch. That's the official BTRFS wiki page.
Re: (Score:3)
So given that Oracle created btrfs as a competitor to zfs because the latter used a license incompatible with the linux kernel, and now they own zfs, why wouldn't they just GPL (or dual-license) zfs and forget about btrfs?
Re: (Score:3)
Re: (Score:2)
ZFS v28 was the last version that was open source, by Sun.
Oracle is still developing newer versions of ZFS, but they are closed source.
I believe ZFS is available in Oracle Linux, but I haven't verified that. I'm not sure how they get around the licensing issues.
Lies (Score:4, Interesting)
It's not quite abandonware, but the central impetus for its creation and advancement is gone.
I wasn't planning to comment on this thread, but this is too big a lie to let stand -- unless by "not quite abandonware" you mean "has absolutely nothing in common with abandonware besides being a type of software". Oracle was never the sole developer, and now that Oracle has lost interest, the developers just moved to other companies and kept doing the same thing. Its raison d'etre remains to provide an advanced filesystem that's easily integrated with linux, which for better or worse means being licensed under the GPL or something compatible.
As for encryption, yeah that would be nice to have, but it's not like zfs has all the features btrfs has. I'll take btrfs's online balancing (ability to add and remove drives at will) over built in encryption, but I realize that's a personal choice.
Finally, let's actually quote the FAQ correctly on stability:
Short answer: Maybe.
Long answer: Nobody is going to magically stick a label on the btrfs code and say "yes, this is now stable and bug-free". Different people have different concepts of stability: a home user who wants to keep their ripped CDs on it will have a different requirement for stability than a large financial institution running their trading system on it. If you are concerned about stability in commercial production use, you should test btrfs on a testbed system under production workloads to see if it will do what you want of it. In any case, you should join the mailing list (and hang out in IRC) and read through problem reports and follow them to their conclusion to give yourself a good idea of the types of issues that come up, and the degree to which they can be dealt with. Whatever you do, we recommend keeping good, tested, off-system (and off-site) backups.
Pragmatic answer: (2012-12-19) Many of the developers and testers run btrfs as their primary filesystem for day-to-day usage, or with various forms of real data. With reliable hardware and up-to-date kernels, we see very few unrecoverable problems showing up. As always, keep backups, test them, and be prepared to use them.
For all practical purposes, btrfs is stable. Everything they say in the long answer basically applies to linux in general (unless you have a support contract with Red Hat or the likes).
Re: (Score:2)
Re: (Score:2)
ZFS on Linux doesn't support native encryption yet either. ZFS on Solaris does, but that code was added after OpenSolaris was killed and has never been clearly released under the CDDL. (It *has* been leaked, but not with clear CDDL license assignment, thus nobody in their right mind has touched it.)
You *can* easily do ZFS on LUKS-based encryption on Linux. It works great, but it's a very different thing with a different feature set than native ZFS encryption. Native ZFS crypto allows encrypting
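The LUKS route mentioned above is roughly this (device and mapper names are placeholders):

    sudo cryptsetup luksFormat /dev/sdb              # encrypt the raw device
    sudo cryptsetup open /dev/sdb cryptodisk1        # unlock it as /dev/mapper/cryptodisk1
    sudo zpool create tank /dev/mapper/cryptodisk1   # build the pool on top of the encrypted layer

You give up the per-dataset granularity native ZFS crypto would offer, but everything that reaches the disk is encrypted.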
Re:BTRFS (Score:4, Interesting)
Re:BTRFS (Score:5, Informative)
As a die-hard BTRFS user that chases kernel releases like an addict chases crack, I can't help but say that there are still some annoying issues out there.
While none have given me data loss, you'll get the occasional deadlock from a set of kthreads that do compression, or a severe slowdown with next to no disk I/O and big WAITIO (usually a 16.xx load in such cases on a quad-core machine). For the slowdown case you'll get a speed drop from 150MB/s to ~900KB/s on spinning rust for a couple of minutes. Happens only after heavy use, in the range of 2+TB written with forced compression.
ENOSPC? Not on my end. Trying to copy a file and running out of space results in WAITIO through the roof while BTRFS tries to find free space. I've had a job that stalled and thrashed the hard drive for 9 hours while it tried to recover space. At no point did it simply kill the transfer due to out of space, btrfs usage showed around 1GB of space left with plenty for metadata. It's at 1GB free for data extents and that's what kills the whole deal. You can't use that last 1GB, you'll just deadlock until some space is recovered by deleting files manually. Happens every time, just make sure to transfer something that is larger than the available free space and watch it suffer.
All this with Linux kernel 4.4.2. Looking at the various mailing lists, with regular posts from people with obscure problems I've never encountered before, I can't really say it's on par with ZFS stability. And ZFS On Linux is still missing a few things from the true ZFS implementation, last I checked, but it's usable. Can't comment on ZOL long-term stability, but I would feel comfortable enough using it instead of BTRFS for, say, a production server.
Re: (Score:2)
If you have some snapshots, you can drop those to free up some space. If you don't have snapshots to drop, your only option to recover is to enlarge the volume. You can either add another RAID extent (which you can't ever remove), or replace all of the disks with larger disks and expand.
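In practice the recovery steps look something like this (the paths and the usage threshold are illustrative only):

    sudo btrfs subvolume delete /mnt/pool/snapshots/old    # drop a snapshot to release extents
    sudo btrfs device add /dev/sdd /mnt/pool               # or grow the filesystem with another device
    sudo btrfs balance start -dusage=50 /mnt/pool          # then compact half-empty data chunks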
Re: (Score:2, Interesting)
If you pick your file system because its GPL, you're pretty retarded. And yes, retarded is the appropriate word here.
he's picking his file system so that he complies with Copyright Law. why would you have an issue with that? i also don't understand why a Corporation (Canonical) would encourage people to ignore Copyright Law.
Re: (Score:3, Interesting)
Identify the copyright violation:
"This problem is being worked around by providing the kernel facilities through a separate kernel module, a technical solution for a legal problem that is also being employed by vendors and distributors of proprietary hardware drivers."
The GPL does not autmatically appl
Re: (Score:2, Funny)
Identify the copyright violation:
"This problem is being worked around by providing the kernel facilities through a separate kernel module, a technical solution for a legal problem that is also being employed by vendors and distributors of proprietary hardware drivers."
The GPL does not automatically apply to anything that touches the kernel. It only applies to derivative works of a GPLed work. If they write a GPLed wrapper that is a derivative of both the kernel and the ZFS sources and choose to dual-license it, then there's no need for the ZFS sources to be GPL licensed -- merely the wrapper. No GPL-code-inspired modifications, no GPL-defined derivative work and no GPL licensing requirement. (So sad.)
For a group which worships the copyright hack that the GPL represents, it's odd that so many become so blind and incensed by anyone who dares to come up with a counter-hack to overcome some of the license's more idiotic features (i.e., it's open source, but it's not pure, GPL-certified open source, so you can't use it with our stuff). The only case that comes close to supporting GPL proponents' borg-like interpretation of the term "derivative work" is the Oracle v. Google fiasco. If that's the company that you want to keep, don't expect sympathy from me.
I had to write a module like that one.
The acronym for the module in our product was "bmrms". I forget what the "official" meaning was, but it really meant Bite Me RMS
Re: (Score:3)
The GPL does not automatically apply to anything that touches the kernel. It only applies to derivative works of a GPLed work. If they write a GPLed wrapper that is a derivative of both the kernel and the ZFS sources and choose to dual-license it, then there's no need for the ZFS sources to be GPL licensed -- merely the wrapper. No GPL-code-inspired modifications, no GPL-defined derivative work and no GPL licensing requirement. (So sad.)
There's actually a lawsuit going on right now about this very tactic. https://sfconservancy.org/copy... [sfconservancy.org] (article source is funding the suit, so apply grains of salt as appropriate)
IANAL, but the position I've always heard (and seemingly the one this lawsuit is taking) is that the "GPL shim" is only legitimate in cases where the proprietary side it's interfacing with is the same on other platforms. In that case instead of just being an obvious attempt to run around copyleft it becomes a mere adapter to a v
Re: (Score:3)
The GPL does not automatically apply to anything that touches the kernel. It only applies to derivative works of a GPLed work. If they write a GPLed wrapper that is a derivative of both the kernel and the ZFS sources and choose to dual-license it, then there's no need for the ZFS sources to be GPL licensed -- merely the wrapper. No GPL-code-inspired modifications, no GPL-defined derivative work and no GPL licensing requirement. (So sad.)
That's not how copyright works. If you have, say, a piece of code licensed for non-commercial use, you can't just write a wrapper and say "my commercial application talks to a wrapper, the wrapper talks to your code, so the terms don't apply." Instead it relies on a sleight of hand where the user creates the illegal derivative by assembling bits that were acquired legally, using a prepared script. Just like you can acquire a bunch of legal chemicals, start a meth lab and end up with an illegal product.
Re: (Score:3)
The parent is probably referring to the fact that CDDL is NOT compatible with the GPL.
https://lists.debian.org/debia... [debian.org]
Re: (Score:2)
Really? You're going to criticize the parent because they value freedom over pragmatism ??
Hint: He's using _Linux_ for _precisely_ that reason. The parent has _their_ reasons. Just because it doesn't agree with your ideology doesn't make it wrong _for them_, only you.
Re: (Score:2)
Re: (Score:2)
That would only apply to future versions of ZFS though. Hence why we have projects like OpenIndiana that are based off of the now closed Solaris source from when it was all CDDL (from SXCE pre-Solaris 11)
For containers (Score:5, Informative)
More precisely, the blog post is about using ZFS' copy-on-write (CoW) capabilities in the context of linux containers [wikipedia.org].
(Thin virtualized machines: the guests share the same kernel as the host, but userspace is separated and compartmentalized using the kernel's cgroup feature.
Similar to BSD jails and Solaris containers.
Think of it like a chroot, except extended to all the other concepts besides the file system.)
The fast and easy snapshotting that comes with CoW filesystems like ZFS (or BTRFS, for that matter) makes it very easy to spin up new virtualized containers simply by snapshotting the subtree holding the empty template, while wasting only minimal resources (only the differences are stored as the two copies diverge over time).
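A bare-bones version of that template trick, with made-up dataset names:

    zfs snapshot tank/lxc/template@base               # freeze the empty template
    zfs clone tank/lxc/template@base tank/lxc/web01   # a new container rootfs that costs almost nothing until it diverges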
Re: (Score:2)
For containers (at least on FreeBSD) it's far better to have one base install of the OS the way you like it, and just use nullfs mounts to overlay that with a writable directory for each container.
Where ZFS's snapshots and clones will kick total ass is KVM virtual machines.
In either situation, at least on FBSD, you can allow the guest container/vm to manage their own ZFS, stays part of your larger pool and works as expected, but the children can create snapshots, clone, and filesystems in their little portion o
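The delegation described here is done with zfs allow (and, on FreeBSD, zfs jail); the user name, dataset, and jail id below are hypothetical:

    zfs allow -u jailadmin create,snapshot,clone,mount tank/jails/web01   # let an unprivileged user manage this subtree
    zfs set jailed=on tank/jails/web01
    zfs jail 3 tank/jails/web01                                           # FreeBSD: attach the dataset to jail id 3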
Clean updates (Score:2)
To clarify this, the main advantage of (read-only) nullfs is patching/upgrading. CoW doesn't help there, but with nullfs, because they are all pointing at the same files, upgrading one upgrades them all.
This would work for very small updates (e.g.: where a library .so file is replaced).
- before the update, the container was pulling a library from the template
- after the update, the container is still pulling from the same location, and now the template provides the updated version.
For a large-scale upgrade that touches many files, including files (like config files) that got changed in the container and thus reside in the nullfs overlay directory, these files won't get automatically updated. You'd be upgati
Re: (Score:2)
Except that for running containers you really want BTRFS rather than ZFS, as having many containers means you use up your memory. ZFS shines on a file server where it can use all memory for itself and there's no need to share the page with a running process -- EXT4, BTRFS and any other well-behaved filesystem will share a page with processes that mmap it.
Re: (Score:2)
16.04 will be really exciting LTS (Score:3)
Re: (Score:2)
Re: (Score:2)
Me too. Even FreeBSD.
Re: (Score:2)
Like a train wreck in reverse (Score:5, Insightful)
Every time I see news about ZFS and Linux, it's a little bit less of a mess. Eventually, I expect that all of the major distributions will go this route and sidestep the licensing issue by providing distro-supported modules that are installed by user request, sort of like the way that Nvidia drivers are provided.
Re: (Score:2)
I've been using ZFS on Linux for years, and the only thing that ever came close to being a mess was the horseshit called DKMS. Under CentOS6 (and I suspect any other linux), it absolutely insisted on building a mess of something it called "weak-modules" when I updated the kernel. These are nothing more than a mess of symlinks to the old module, and they kept breaking my system. I never found a workable way to prevent their creation,
Re: (Score:2)
I don't see why it couldn't be used for /, as long as the appropriate module is present in initrd (or initramfs, etc.) As for unattended/scripted, the options you put in the script are still choices. As I understand it, the one thing you can't do is compile ZFS directly in to the kernel to avoid the GPL/CDDL incompatibility.
ZFS, CentOS, and DKMS (Score:2)
I, for one, would absolutely love to have official ZFS modules in CentOS instead of relying on DKMS. DKMS on a CentOS box is a nightmare, because of the "weak-updates" shortcut that it likes to take (where it symlinks the modules from a previous kernel into /lib/modules/foo/weak-updates instead of, you know, actually rebuilding the modules). Sure, that's nice and fast, but it falls apart horribly once that old kernel gets deleted.
Re: (Score:2)
Bingo. Hear, hear. I learned to HATE dkms and its goddamned "weak updates" with a vengeance. After intensive searching, I never found a clue anywhere as to how to beat the asinine weak update predilection out of dkms.
Alas, I have no confidence whatever that Red Hat will ever see the light on this. Believe it or not, I am now seriously looking at shitcanning it for my servers in favor of Ubuntu.
A hackish workaround for DKMS on CentOS (Score:2)
I found something a while back that seems to work:
cp /bin/true /bin/true.bak
mv /usr/sbin/weak-modules /usr/sbin/weak-modules.NONONONONO
ln -s /bin/true /usr/sbin/weak-modules
RAM? (Score:2)
ZFS is seriously cool in many ways, but you pay for that with some pretty significant RAM requirements for a file system driver. If I remember correctly, you need about 8GB of RAM to really make use of ZFS. I think it's great that they're including it with the distribution, but it wouldn't make sense to have this as the default file system. At least not until the average system out there is running with 16GB of RAM.
Re: (Score:2)
> ZFS is seriously cool in many ways,
Indeed. Does anyone have an updated version of this ZFS vs BTRFS cheat sheet table?
http://www.seedsofgenius.net/s... [seedsofgenius.net]
> but you pay for that with some pretty significant RAM requirements for a file system driver.
> If I remember correctly, you need about 8GB of RAM to really make use of ZFS.
Yeah ZFS could be considered "bloated" but you get so many SWEET benefits.
Personally, I'd recommend having at least 8 GB of RAM solely just for ZFS RAIDZ1/RAIDZ2, leaving the o
Re: (Score:2)
Which server isn't running 16GB at least these days? Even a cheap dedicated server comes with that by default. And ZFS doesn't actually require all that much RAM: it does better (read caching, etc.) with more, and it needs a lot for dedupe, but as a 'standard' file system it can easily go with 1 or 2GB of RAM.
Re: (Score:3)
Do not, repeat DO NOT ENABLE DE-DUPE unless you have gargantuan amounts of RAM.
Rule of thumb is 5GB of RAM per 1TB of ZFS data: http://constantin.glez.de/blog... [constantin.glez.de]
If you ever enable dedupe on a pool, it's on forever. You can't actually turn off the extra RAM requirements since there *could* be de-duped blocks, and ZFS must check for those on every pool import. On a system with insufficient RAM, it's possible to end up with a pool that can take hours or days to import with no indication that it's actuall
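Before ever flipping dedup on, you can ask zdb to simulate it against existing data and estimate the DDT cost (pool name assumed):

    sudo zdb -S tank    # prints a simulated dedup table histogram and the projected dedup ratio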
Re: (Score:2, Interesting)
Also don't enable dedup if you have media with a nontrivial seek time. It's tolerable on flash (but you do lots of extra I/O on write) but the deduplication table (DDT) tends to develop a random layout with respect to device LBAs, and the DDT needs to be consulted on each write to a dataset/zvol with dedup enabled, and it also needs to be scanned *first* during scrubs and resilvers. DDTs with millions of entries can require hundreds of thousands of random I/Os, which means hundreds of thousands of se
Re: (Score:2)
Deduplication sounds cool and all, but if you don't have a heavy need for it (e.g. you're running 20 identical virtual machines with their files stored on ZFS), jus
Re: (Score:2)
If I remember correctly, you need about 8GB of RAM to really make use of ZFS.
No. ZFS has performance advantages with more RAM available to ARC cache, and file de-duplication is incredibly RAM intensive, but both are configurable / optional features.
You can still get all the wonderful features of a modern CoW filesystem with data integrity checks with even a tiny amount of RAM. At minimum, ZFS needs about 1GB almost to itself to run well, and that should be achievable on even the most basic of systems (16GB of ECC RAM is like $100, and if you're not willing to spend even 1/3rd of that t
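On ZFS on Linux you can also just watch what the ARC is actually using, so the RAM fear doesn't have to be guesswork:

    grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats    # current ARC size and its ceiling, in bytes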
Re: (Score:2)
Lastly, as others have pointed out, where does one get a "server" with less than 16GB of RAM?
I have an old Sun Ultra 2 server still running. I think it's close to 20 years old now. It has 512 MB of RAM, runs ZFS on Solaris 10. It's doing just fine.
Re: (Score:2)
Unable to log in from work, so I posted anon.
I have an old SUN X4500 running Solaris 11.3 with 16GB of RAM, and incredibly the system doesn't have issues with 3TB drives.
Too little, too late... (Score:2, Informative)
Re:Too little, too late... (Score:4, Insightful)
Re: (Score:2)
I used the wrong tool for the job, therefore it sucks.
Please explain what "tool" I should use to turn an old PC with a Nvidia video card into a file server underneath my desk at home?
From 1997 to 2010, I used Linux and Samba. Every now and then, the installation got hosed. In the early days, it was compiling the kernel the wrong way. With Ubuntu, it was the video driver upgrade for the Nvidia card, which happened so frequently that I had the installation steps memorized.
From 2010, I've used FreeNAS. A USB stick would go bad and hosed the installation from time
Re: (Score:2)
I used the wrong tool for the job, therefore it sucks.
Please explain what "tool" I should use to turn an old PC with a Nvidia video card into a file server underneath my desk at home?
From 1997 to 2010, I used Linux and Samba. Every now and then, the installation got hosed. In the early days, it was compiling the kernel the wrong way. With Ubuntu, it was the video driver upgrade for the Nvidia card, which happened so frequently that I had the installation steps memorized.
From 2010, I've used FreeNAS. A USB stick would go bad and hosed the installation from time to time. Formatting a new USB stick, installing FreeNAS and copying over the backup config file took five minutes.
Well, your original post said you didn't have a video card, so at that point you should've used Ubuntu Server or Debian or something else that doesn't require X.org. However, you used an old video card so you could get a GUI going. Fine. But after the second or third time a driver update broke it, you should've at that point begun declining the driver updates--you said it was a file server, right? You don't need the latest gfx drivers for that.
Re: (Score:2)
But after the second or third time a driver update broke it, you should've at that point begun declining the driver updates
I did. Ubuntu still saw that I had Nvidia video installed in the system and managed to hose the installation anyway. I don't know if this process got any better since 2010, but this was a PITA at the time I was using it.
you said it was a file server, right? You don't need the latest gfx drivers for that.
This was my only Linux box at the time. Recruiters told me I needed Linux GUI experience on my resume. The problem with that was that I prefer minimalist window managers for opening terminal and web browser windows; what recruiters really wanted was Red Hat GUI experience, as specified on their re
Re: (Score:2)
Yeah, having to reinstall the entire system is such a major "improvement" over having to reinstall a video driver (which is automated to begin with).
Redoing FreeNAS takes five minutes. Redoing Ubuntu and Samba took three hours (180 minutes). Think about that. Five minutes versus 180 minutes. Which one is faster?
Oh the places the cognitive dissonance of some peeps takes them when trashing linux.
Most people expect an automated driver update not to hose the entire operating system.
Re:Too little, too late... (Score:4, Informative)
Re: (Score:2)
Nouveau should be fine or even just the vesa driver.
I don't know if that was an option five to ten years ago.
I could say why do you even need a video card on a server, but I guess some folk prefer that to using ssh or a serial connection from a laptop
If the installation got hosed from the video update, SSH wasn't going to work. The only way to diagnose and reinstall Ubuntu was with the video plug. Keep in mind that this was an old PC underneath my desk at home.
Re: (Score:2)
You literally have no idea what you are talking about.
I think your reading comprehension skills are failing.
Linux isn't windows. SSH isn't tied to your GUI experience.
If the video update hoses the installation (i.e., it doesn't boot), SSH won't be running. As for Windows, I never had a video update that hosed the installation.
Hard links (Score:2)
Re: (Score:2, Informative)
Liar.
Show me this ZFS on windows.
Re: (Score:2)
Export a ZFS volume via iSCSI, mount it on a remote machine and install Windows on it. The ZFS machine can now snapshot and clone the Windows install at will!
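Concretely, that trick starts with a zvol (sizes and names invented here); how you export it over iSCSI depends on your target software:

    zfs create -V 100G tank/vm/windows                             # a block device backed by the pool
    zfs snapshot tank/vm/windows@clean-install                     # taken right after Windows finishes installing
    zfs clone tank/vm/windows@clean-install tank/vm/windows-test   # instant writable copy to experiment on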
Re: (Score:2)
I use both.
And for the record, there's a REALLY REALLY good chance that you've had phone calls that have traversed one of the ubuntu machines that aren't for serious environments.
The fact that you make such a retarded statement shows how utterly clueless you are about Linux. You're one of those guys who thinks distros are different from each other ... Just because Linux distros can't decide on where they want to put a given set of files and they all must use their own package manager that does the same
Re: (Score:2)
Re: (Score:2)
Uber alles. Default in CentOS 7. Why ZFS? It's not like anyone uses woobuntu in serious environments.
Ubuntu is the biggest OS for enterprise cloud deployments in the world.
Re: (Score:2)
Where's the lie?
Does systemd not replace the system log with a binary file that is unusable by every application that reads or writes to the system log?
No.
Does systemd not break the system administration tool chain?
No.
Does systemd not consume and discard STDERR making troubleshooting and debugging a masochist's delight?
No
Up until 14.04LTS everything dumps into /var/log/syslog, a standard text file. Beginning with 16.04LTS and thanks to systemd that is replaced by a binary file that is virtually inaccessible by everything else.
Run rsyslogd.
Won't bother with your boring crap about eth0. All my network interfaces have real names, which works with udev just like it used to.
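For what it's worth, both options can coexist on 16.04; a hedged sketch, assuming rsyslog isn't already installed and that ssh.service is the unit you care about:

    sudo apt install rsyslog                    # classic text logs in /var/log/syslog come back (or were never gone)
    journalctl -u ssh.service --since today     # or query the binary journal directly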
Re: (Score:2)
The NIC name change predates systemd by quite a bit and is unrelated - that change happened in udev IIRC.
Re: (Score:2)
Diff AC here.
When I weigh the evidence, I have to put far more trust in the anti-systemd comments presenting real technical arguments, and no trust at all into the pro-systemd comments filled with insults, vitriol, and sometimes even gibberish.
I'm not exactly sure where you're getting this from. The "real technical arguments" are all lies (you can use syslogd with systemd, you can configure systemd to not use binary logs at all, there's still STDERR). Saying so is not "insults, vitriol," or "gibberish." Maybe if you actually tried it, you'd see so.
N.B. I'm not "pro-systemd" by any measure, it's just another set of tools I use, no different from the classic Unix utilities or X.org and what not. It's much more similar to launchd used in OS X than