
Native ZFS Is Coming To Linux Next Month 273

An anonymous reader writes "Phoronix is reporting that an Indian technology company has been porting the ZFS filesystem to Linux and will be releasing it next month as a native kernel module, without a dependency on FUSE. 'In terms of how native ZFS for Linux is being handled by this Indian company, they are releasing their ported ZFS code under the Common Development and Distribution License and will not be attempting to go for mainline integration. Instead, this company will just be releasing their CDDL source code as a buildable kernel module for users and ensuring it does not use any GPL-only symbols where there would be license conflicts. KQ Infotech also seems confident that Oracle will not attempt to take any legal action against them for this work.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward on Friday August 27, 2010 @09:08PM (#33399890)

    I hope this idea also works with the port (and will work with btrfs one day!?):

    Transparently adding an SSD into the cache hierarchy between slow hard disks and fast CPU registers. Are there ways to make something like this work with current filesystems?

    Mounting a nice fast SSD as swap would be a good start. Then point your /tmp at a tmpfs.

    My swap is practically always empty, so that would not help much. And how much stuff gets written into /tmp? Mine is 839 KiB right now; that is negligible and probably in the RAM cache already. What I want cached are all the binaries and libs linked from there. Then starting any app for the second time would be near-instant, and this cache would survive reboots and be one or two orders of magnitude larger than RAM.
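    For what it's worth, ZFS's L2ARC does roughly what's being asked for here: an SSD slotted in as a second-level read cache below RAM. A sketch, assuming a pool named `tank` and an SSD at `/dev/sdb` (both hypothetical); note that the L2ARC of this era is rebuilt cold after a reboot, so it does not fully satisfy the "survive reboots" wish:

    ```
    # Attach an SSD as an L2ARC read cache for the pool "tank".
    # The device stays configured across reboots, but its contents
    # are rewarmed from scratch each time.
    zpool add tank cache /dev/sdb

    # The parent's suggestions, for comparison (fstab lines;
    # device name and size are illustrative):
    /dev/sdb2  none  swap   sw                0 0
    tmpfs      /tmp  tmpfs  defaults,size=1G  0 0
    ```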

  • by larry bagina ( 561269 ) on Friday August 27, 2010 @09:11PM (#33399914) Journal

    There's a Despair poster, I believe, with a caption along the lines of "it could be that your main purpose in life is to provide a warning to others." (Damn it, the internet made me check... "It could be that the purpose of your life is only to serve as a warning to others.")

    ZFS's purpose was not to be a next-generation file system, but to encourage next-generation file systems to be built. Free Software has a tendency to get stuck at "good enough" sometimes, and someone has to come along and show that there is a better way. Competition is good. Sometimes it's internal (gcc vs. egcs), sometimes it's commercial (CVS vs. Perforce and BitKeeper).

    What if ZFS were GPL? What if it went into Linux? It might get incremental tweaks, but it would stagnate at "good enough". Instead, btrfs, HAMMER, etc. were developed -- much better, much cleaner file systems.

    ZFS has some cute tricks. What could be better than being able to take a sledgehammer to a disk drive without causing problems? But ultimately, ZFS would hold Linux back.

  • by Anonymous Coward on Friday August 27, 2010 @09:19PM (#33399958)

    This sounds great.

    However, some of the author's comments in the Phoronix thread (such as "FUSE is crap") -- and the fact that he's announcing this on Phoronix instead of via some other technical channel -- temper my enthusiasm just a bit...

  • by Daffy Duck ( 17350 ) on Friday August 27, 2010 @10:09PM (#33400210) Homepage

    Long story short: disk pools in ZFS can only grow, so don't make any mistakes unless you can afford a full dump and restore. Sun had been "working on" this for years. Has anyone heard any news lately?
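    The grow-only behavior shows up directly at the command line. A sketch against a hypothetical pool `tank`:

    ```
    # Growing is easy: attach another vdev and the pool gets bigger.
    zpool add tank mirror /dev/sdc /dev/sdd

    # Shrinking is not: "zpool remove" only works on hot spares, cache,
    # and log devices -- there is no way to evacuate a data vdev, so a
    # mistaken "zpool add" is permanent short of zfs send/receive (or a
    # dump and restore) onto a freshly created pool.
    zpool remove tank /dev/sdc    # fails for a data vdev
    ```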

  • Re:Good Article (Score:3, Interesting)

    by mysidia ( 191772 ) on Friday August 27, 2010 @10:17PM (#33400258)

    It's worth mentioning that the latest version of Windows Server (2008 R2) is 64-bit only as well.

    And ZFS has always had 64-bit as a minimum system requirement for production systems, even on Solaris.

    That is, 32-bit is considered okay for limited testing but unsuitable for production use, particularly with zpools larger than a few hundred GB or so.

    If you have a 1 TB or larger storage pool with ZFS, you need 2 GB of RAM and a 64-bit CPU for something acceptable and stable. This is true whether you use Solaris or BSD.

    I consider it a good thing that the person porting to Linux is actually enforcing the basic 64-bit requirement. Maybe fewer people who don't read docs and 'system requirements' sheets will get burned that way -- not noticing that 32-bit is unsuitable for enterprise use and then saying ZFS on Linux 'sucks' because they screwed up basic configuration and deployment requirements.
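    Those requirements make for an easy pre-flight check. A rough sketch, assuming a Linux host; the thresholds are the rule-of-thumb figures quoted above (64-bit CPU, 2 GiB RAM for ~1 TB pools), not official numbers:

    ```python
    import platform

    def zfs_preflight(min_mem_kb=2 * 1024 * 1024):
        """Return a list of warnings if this host looks unsuitable for
        production ZFS, per the rule of thumb in the comment above."""
        warnings = []
        arch = platform.machine()
        if arch not in ("x86_64", "amd64", "aarch64", "arm64"):
            warnings.append("%s is not recommended for production ZFS" % arch)
        mem_kb = 0
        try:
            with open("/proc/meminfo") as f:  # Linux-specific
                for line in f:
                    if line.startswith("MemTotal:"):
                        mem_kb = int(line.split()[1])
                        break
        except OSError:
            warnings.append("could not read /proc/meminfo; check RAM manually")
        if 0 < mem_kb < min_mem_kb:
            warnings.append("less than 2 GiB RAM; large pools may be unstable")
        return warnings

    if __name__ == "__main__":
        for w in zfs_preflight():
            print("WARNING:", w)
    ```
    
    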

  • by CoolGopher ( 142933 ) on Friday August 27, 2010 @10:21PM (#33400276)
    I came across this job post at Valve just recently. To save you from having to follow the link, it includes the item
    "Port Windows-based games to the Linux platform".

    Just sayin'.
  • by Xtifr ( 1323 ) on Friday August 27, 2010 @10:22PM (#33400284) Homepage

    I know you probably can't redistribute the kernel with the CDDL bits but you can redistribute them separately

    Not necessarily. I know programmers like to interpret the law as if it were a computer program--rigidly, and with no room for interpretation--but it doesn't necessarily work that way. The law will consider your intent, and if your intent is to deliver a derivative work, then forcing the customer to obtain the pieces and glue them together won't necessarily let you off the hook.

    This came up most famously with Objective-C. Originally, Steve Jobs's NeXT made a proprietary front end to GCC, and they made their customers get the rest of GCC for themselves. When the FSF objected, Steve consulted his lawyers and decided to release the Objective-C front-end source, which is why GCC includes Objective-C today.

    Of course, the laws may be different in India, but since I'm in the US, I wouldn't touch this module without consulting my lawyers first.

  • by Cyberax ( 705495 ) on Saturday August 28, 2010 @12:38AM (#33400916)

    "* *Actual* performance problems due to fragmentation - outside of a few corner cases - are basically nonexistent."

    Yep. That's why I have to run a defragmenter on our build server every week...

    Also, Windows is notoriously slow with file operations. That's not directly related to NTFS, but more to an extremely inefficient VFS stack.

    "* Can you explain what you mean by "it's done above the VFS layer"? Surely you're not trying to argue symlinks and shortcuts are the same thing?"

    "* RAID is handled at the block device level, not the filesystem level (and many, many people believe putting RAID into the "filesystem" is an architecturally bad thing, so that's hardly something it can be plainly criticised for)."

    However, filesystem-level RAID has a lot more functionality than block-level RAID. Look at ZFS or btrfs.

    "* Do you have a source for up-to-date benchmarks ?"

    I have my own set of benchmarks. NTFS on Windows is almost always slower (and quite often 100 _times_ slower) than Linux filesystems. [link: the benchmark source] [link: a post with benchmark results (in Russian, sorry - I can translate if you have any questions)] [link: a post containing this benchmark, slightly adapted]

    I regularly re-run these tests. So far, Windows is only getting slower compared to Linux.

    I've recently created a multithreaded version of this test. Let's just say that NTFS performs so badly that it's hard to understand how MS managed to achieve this.
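    The flavor of test being described can be sketched in a few lines: create and delete lots of small files and time the run. This is an illustrative stand-in, not the original benchmark; file counts and sizes are made up:

    ```python
    import os
    import tempfile
    import time

    def small_file_benchmark(n=500, size=1024):
        """Create and then delete n small files, returning elapsed seconds.

        A rough sketch of the metadata-heavy workload where, per the
        posts above, NTFS tends to lag Linux filesystems.
        """
        payload = b"x" * size
        with tempfile.TemporaryDirectory() as d:
            start = time.perf_counter()
            for i in range(n):
                with open(os.path.join(d, "f%d" % i), "wb") as f:
                    f.write(payload)
            for i in range(n):
                os.remove(os.path.join(d, "f%d" % i))
            return time.perf_counter() - start

    if __name__ == "__main__":
        print("%.3f s for 500 create+delete pairs" % small_file_benchmark())
    ```

    Run it on the same hardware under both operating systems and compare; absolute numbers matter less than the ratio.
    
    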

  • Re:wtf? (Score:3, Interesting)

    by Renegade88 ( 874837 ) on Saturday August 28, 2010 @01:58AM (#33401250)
    Obviously other people don't share your high regard for the current state of btrfs relative to ZFS. I wonder why that is?
  • by Anonymous Coward on Saturday August 28, 2010 @07:57AM (#33402270)

    Does anybody know how former Sun strategists think of their licensing game today?

