Linux Business

Does Linux "Fail To Think Across Layers?" 521

John Siracusa writes a brief article at Ars Technica pointing out an exchange between Andrew Morton, a lead developer of the Linux kernel, and a ZFS developer. Morton accused ZFS of being a "rampant layering violation." Siracusa states that this attitude of refusing to think holistically ("across layers") is responsible for all of the current failings of Linux — desktop adoption, user-friendliness, consumer software, and gaming. ZFS is effective because it crosses the lines set by conventional wisdom. Siracusa ultimately believes that the ability to achieve such a break is more likely to emerge within an authoritative, top-down organization than from a grass-roots, fractious community such as Linux.
  • Merit (Score:2, Informative)

    by ez76 ( 322080 ) <slashdot@@@e76...us> on Saturday May 05, 2007 @04:52PM (#19004463) Homepage
    There is some merit to what Siracusa is saying, at least on the gaming and multimedia fronts.

    Windows was a hamstrung performer for graphics until NT 4.0 underwent a rearchitecture [microsoft.com] that moved key portions of the OS (including third-party graphics drivers) to a much lower level.
  • ZFS definition (Score:3, Informative)

    by icepick72 ( 834363 ) on Saturday May 05, 2007 @05:03PM (#19004551)
  • Well, no. (Score:5, Informative)

    by c0l0 ( 826165 ) * on Saturday May 05, 2007 @05:13PM (#19004617) Homepage
    Alternative approaches to implementing subsystems of the Linux kernel are often developed concurrently, and a process you could compare to Darwinian evolution decides (in most cases) which one of a given set of workalikes makes it into the mainline tree in the end. That's why the Linux kernel itself incorporates, or tries to adhere to, a UNIX-like philosophy - make a large system consist of small interchangeable parts that work well together and each do one task as close to perfectly as possible.
    That's why there are so many generic solutions to crucial things - like "md", a subsystem providing RAID levels for any given block device, or lvm, providing volume management for any given block device. Once those parts are in place, you can easily combine their functions - md works very nicely on top of lvm, and vice versa, since any block device you "treat" with one of lvm's or md's functions/features is, again, a block device. You can format one of these block devices with a filesystem of your choice (even ZFS would be perfectly possible, I suppose), and then incorporate that filesystem by mounting it wherever you happen to feel like it.
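
    To illustrate the kind of stacking I mean, here's a rough sketch (device names, sizes, and the ext3 choice are made up for the example):

        # build a RAID1 array from two partitions with md
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        # the result is just another block device, so lvm can sit on top of it
        pvcreate /dev/md0
        vgcreate vg0 /dev/md0
        lvcreate --name data --size 50G vg0
        # and the logical volume is, again, just a block device
        mkfs.ext3 /dev/vg0/data
        mount /dev/vg0/data /mnt/data

    Every step produces a plain block device, which is exactly why the layers compose so freely.
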
    There are other concepts deep down in the kernel's inner workings that closely resemble this pattern of adaptability - for example, the vfs layer, which defines a set of requirements every filesystem has to comply with. This ensures a minimal set of viable functionality for any given filesystem, makes sure those crucial parts of the code are well-tested and optimized (since everyone _has_ to use them), and also makes it easier to implement new ideas (or filesystems, in this specific case).

    Now, zfs provides at least two of those already existing and proven facilities, namely md and lvm, completely on its own. That's what's called "code duplication" (or rather "feature duplication" - I suppose that's more appropriate here), and it's generally known as a bad thing.
    I'll grant that zfs happens to be very well-engineered, but this somewhat monolithic architecture still carries a risk of failure: suppose a crucial flaw is found somewhere deep down in the complex system zfs inevitably is - chances are you'll have to massively overhaul all of its interconnecting parts.

    Suppose there's a filesystem developed in the future that's even better than zfs, or at least better suited to given tasks or workloads - wouldn't it be a shame if it had to implement mirroring, striping and volume-management again on its own?

    Take an approach like md and lvm, and that's not even worth wasting a single thought on. The systems are already there, and they're working fantastically (I've been an avid user of md and lvm for years now, and I frankly cannot imagine anything doing these jobs noticeably better). I'd say that this system of interchangeable functional equivalents, and the philosophy of "one tool doing one job", is absolutely ideal for a distributed development model like Linux's.

    It has been working since the early nineties. There must be something right about it, I suppose.
  • Re:What's ZFS? (Score:2, Informative)

    by Anonymous Coward on Saturday May 05, 2007 @05:15PM (#19004635)

    It has some really nice features that are either not in Linux filesystems or not well implemented in Linux filesystems. It's supported by Solaris, FreeBSD, OSX, and possibly some other operating systems, so it'd be handy if it also worked natively in Linux. It could be like FAT32 for people who need to share data between OSes and don't need Windows. Except unlike FAT, ZFS is actually well designed and has "modern" features.

  • Re:What's ZFS? (Score:5, Informative)

    by pedantic bore ( 740196 ) on Saturday May 05, 2007 @05:24PM (#19004705)
    I'll elaborate (slightly) on ZFS if someone else will tell me who John Siracusa is and why I should care what he writes... I couldn't figure that out from TFA.

    ZFS is a file system developed by Sun over the past several years. The important thing, in this context, is that the ZFS design philosophy (never mind the actual design, which isn't what this discussion is about) differs from that of ordinary file system design. Most file systems make strong assumptions about the reliability of the underlying block storage facility: there's some gizmo down there, whether it be a disk (for itsy-bitsy systems), a RAID set (for not so bitsy systems), or a SAN, that reliably stores and retrieves blocks with reasonable performance. ZFS doesn't do this. It manages many details of the storage layers itself -- it does RAID its own way (to get around problems that conventional RAID doesn't solve), and does volume management itself as well.

    From the point of view of a UNIX/Linux file system person, this seems very weird. However, these ideas are not really new or revolutionary (there are new things in ZFS, but this philosophy isn't one of them). It pretty much describes how network storage vendors (NetApp, EMC, etc) have been building things all along.
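
    As a rough sketch of what that integration looks like in practice (pool and disk names are made up, in the Solaris naming style):

        # redundancy and volume management are handled inside ZFS itself
        zpool create tank mirror c0t0d0 c0t1d0
        # filesystems are carved out of the pool; no partitioning or mkfs step
        zfs create tank/home

    Two commands stand in for what would otherwise be a RAID layer, a volume manager, and a filesystem format.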

  • by Anonymous Coward on Saturday May 05, 2007 @05:28PM (#19004747)
    From Sun [sun.com]:

    "If you're willing to take on the entire software stack, there's a lot of innovation possible."

    Jeff Bonwick
    Distinguished Engineer
    Chief Architect of ZFS
    Sun Microsystems, Inc.
  • by diegocgteleline.es ( 653730 ) on Saturday May 05, 2007 @05:36PM (#19004851)
    you can't even run FC3 binaries on FC4

    You can run RHEL3 binaries on RHEL4, however. And you can happily run Linux 1.0 binaries on the latest Linux development snapshot. That's because Linux DOES have a stable ABI: the syscall interface. That's the REAL ABI the Linux kernel has to support, and it's the one that really is guaranteed to be stable. What you think of as an ABI isn't an "abi"; it's an INTERNAL ABI. Drivers are not "software built on top of the kernel"; they're plugins. And Linux developers do not care about it because Linux is open source: in the open source world you can change source easily, and it usually gets merged into the kernel. Basically, the Linux kernel gets more benefit from an unstable internal ABI that gets changed whenever needed and that improves all the Linux drivers than it would from a stable internal ABI that only benefits a couple of external OSS drivers and another couple of proprietary, illegal drivers.

    Linux has no direction, no goals

    That's what happens when you give everybody the freedom to modify your code: everybody extends Linux in unexpected directions, which happen to be the directions people (the professional world) desire, because it's those people (the professional world) who actually develop the features. For example, some people have made Linux scale on machines with way more CPUs [lkml.org] than your beloved Solaris has ever run on, and now other people are adding hard realtime support to the core Linux kernel, which happens to make Linux beat latency records [internetnews.com] on Wall Street servers. It was all unexpected; IT, however, seems to like it.
  • Re:Linux discipline (Score:3, Informative)

    by Elektroschock ( 659467 ) on Saturday May 05, 2007 @05:42PM (#19004907)
    "Pawel Jakub Dawidek has ported and committed ZFS to FreeBSD for inclusion in FreeBSD 7.0, due to be released in 2007" (wikipedia)
  • Re:Total bullshit (Score:5, Informative)

    by Jeff DeMaagd ( 2015 ) on Saturday May 05, 2007 @05:57PM (#19005063) Homepage Journal
    Do you have a copy of StarOffice from the mid-to-late 90's? Try running that in Linux now. Do you have a copy of MetroX from say, 1998? Try running that in Linux now. Are you still using the original Linux binaries for any games released in the late 90's?

    I'm still using a copy of AutoCAD released in 1995 for the Windows 3.1 Win32s API, and it works fine in Windows 2000 and Windows XP except that it's got the old 8.3 filename limitation. I am still using WordPerfect Suite 8; the current version is 13, I think. I know someone who is still using Corel Draw 7; the current version is 13. All these programs still work fine in XP/2000, and I think that is a splendid record for binaries that went unpatched across Windows updates.

    The DirectX architecture has changed between the 9x and NT lines, but otherwise the legacy APIs are generally well preserved and allow very complex software to work without a patch.
  • Re:Hey! (Score:2, Informative)

    by Goaway ( 82658 ) on Saturday May 05, 2007 @06:07PM (#19005175) Homepage
    You have no clue at all what this article is about.
  • Re:Hard to dis (Score:5, Informative)

    by init100 ( 915886 ) on Saturday May 05, 2007 @06:16PM (#19005291)

    It's a nice free tech toy, sure, but when it comes to being an accepted and realistic product, there are a great many reasons to look elsewhere.

    You're right, that's why nobody is using Linux for real systems [top500.org].</sarcasm>

  • by lokedhs ( 672255 ) on Saturday May 05, 2007 @06:18PM (#19005307)

    ZFS seems to want to take all over the disk subsystem. Why? Is there a reason why it needs its own snapshot capabilities, instead of just using LVM?
    Because there are many things your storage system can do if it has knowledge of the entire stack.

    The problem with a "traditional" layered model is that the file system has to assume that the underlying storage device is a single consistent unit of storage, where a single write either succeeds or fails (in which case the data you wrote may or may not have been written). This all sounds very good, and file systems like ext2 are written based on this assumption.

    However, if the underlying storage system is RAID5 and there is a power loss during the write, the entire stripe can become corrupt (read the Wikipedia article [wikipedia.org] on the subject for more information). The file system can't solve this problem because it has no knowledge of the underlying storage structure.

    ZFS solves this problem in two ways, both of which require the storage model to be part of the filesystem:

    1. Each physical write never overwrites "live" data on the disk. It writes the stripe to a new location, and once it's been completely committed to disk the old data is marked as free.
    2. ZFS uses variable stripe width, so that it does not have to write larger stripes than necessary. In other words, a large write can be directly translated to a write to a large stripe on the storage system, and a smaller write can use a smaller stripe width. This can improve performance since it can reduce the amount of data written.
    There are plenty of other areas where this integration is needed, including snapshotting, but I hope the above shows that the layered model is not always good.
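
    As a hedged illustration of point 1 (disk names made up): a RAID-Z pool lets the filesystem own the stripe layout, so every write is a full, copy-on-write stripe write, and there is no read-modify-write window for a power loss to hit.

        # redundancy lives inside the filesystem, not below it
        zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
        zpool status tank    # shows the integrated redundancy and checksum state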
  • Re:Hey! (Score:5, Informative)

    by bertok ( 226922 ) on Saturday May 05, 2007 @08:29PM (#19006445)

    I think you'll find that it is you that doesn't understand what a snapshot could be. Take a look at ZFS, try it, and see if you think of snapshots the same way again. In ZFS, a snapshot can be promoted to a clone, which is a writeable copy of the original filesystem, sharing unmodified blocks using a copy-on-write algorithm.

    This is incredibly powerful and useful. For example, a single master 'image' volume can have customizations added for specific purposes. This is useful in desktop deployment, iSCSI or NFS network boot, etc...

    Would you expect a 'first class' writeable clone to have a name like '/dev/mapper/snapshotted-hda' or '/dev/hda.1'? Which one makes more sense? Why would the original have a special name, when the clone is identical?

    It's this kind of narrow 'snapshots are throwaway' thinking that causes artificial limitations in APIs and operating system design that serve no real purpose.
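
    For the record, the ZFS workflow looks roughly like this (dataset names are made up):

        # take a snapshot, make a writeable clone, then swap their roles
        zfs snapshot tank/image@golden
        zfs clone tank/image@golden tank/desktop42    # shares unmodified blocks
        zfs promote tank/desktop42                    # the clone becomes the origin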

  • Re:Merit (Score:2, Informative)

    by Thomas the Doubter ( 1016806 ) on Saturday May 05, 2007 @08:45PM (#19006537)
    Yeah, and the NT 4.0 rearchitecture very much compromised the integrity of the NT kernel. Something we all pay for every day.
  • Re:Hey! (Score:3, Informative)

    by DaleGlass ( 1068434 ) on Saturday May 05, 2007 @08:51PM (#19006567) Homepage

    In ZFS, a snapshot can be promoted to a clone, which is a writeable copy of the original filesystem, sharing unmodified blocks using a copy-on-write algorithm.

    LVM has this already. CONFIG_DM_SNAPSHOT in the kernel config.

    Would you expect a 'first class' writeable clone to have a name like '/dev/mapper/snapshotted-hda' or '/dev/hda.1'? Which one makes more sense?


    If you use LVM, then all devices you put a filesystem on are in /dev/mapper. My root is in /dev/data/root, /home is in /dev/data/home (or /dev/mapper/data-home, same thing), and a snapshot of that would also be in /dev/mapper, with whatever name I choose for it. If you use LVM, /dev/hda isn't directly usable, as it's an LVM physical volume. The writable device is in /dev/mapper.

    Why would the original have a special name, when the clone is identical?


    But they aren't identical. LVM works with block devices; it doesn't know about the filesystem. If you do a bit-by-bit comparison of the original device with its snapshot after the original has changed, there will be differences. The snapshot contains the data the device would have contained if you had unmounted the FS and made a copy of it.
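
    Concretely, with LVM2 it's something like this (volume names and sizes made up):

        # a writeable snapshot of an existing logical volume
        lvcreate --snapshot --size 1G --name homesnap /dev/data/home
        # origin and snapshot are both ordinary devices under /dev/mapper
        mount /dev/data/homesnap /mnt/snap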
  • by taxman2007 ( 1087327 ) on Saturday May 05, 2007 @08:54PM (#19006581)
    First, and most importantly, Siracusa never states or even suggests that "this attitude" is "responsible for all of the current failings of Linux".

    The direct quote is "I've long seen the Linux community's inability to design, plan, and act in a holistic manner as its greatest weakness."

    You can see the meaning has been completely changed in the summary from one of positive criticism to one of arrogant condemnation.

    Through this change, we can see the poster's true feelings, feelings that are shared by many in the Linux community: namely, to respond immaturely and get all bent out of shape if somebody builds anything that doesn't follow the "Linux philosophy".

    The truth: both Linux in general and ZFS are amazing, powerful tools. One of the best philosophies I've encountered is "use the right tool for the job".

    Nobody is forcing Linux devs to port ZFS, use it, or even think about it. The only reason this is an issue is that many in the Linux community realize how powerful ZFS is, and they're subconsciously pissed off that they can't have it. So they respond like a third-grade bully, attacking it in a self-defeating attempt to minimize its importance.

  • Re:Hey! (Score:4, Informative)

    by Daniel Phillips ( 238627 ) on Saturday May 05, 2007 @08:58PM (#19006607)
    You don't seem to understand snapshots

    If you say so :-)

    A snapshot works by creating a copy of the device, with the contents it had when the snapshot was created. If you make a snapshot of /dev/hda at 12:15, then you'll get /dev/mapper/snapshotted-hda as it was at 12:15, while /dev/hda will continue being possible to modify... Why would you change anything over?

    Because with the incumbent volume management strategy you may not continue to use /dev/hda directly when it is snapshotted. You must access /dev/hda through some other device, and that other device must be located in the /dev/mapper directory. No wonder you apparently mixed up what is a snapshot and what is being snapshotted - the way we currently do this in Linux is quite unnatural and a wide-open invitation to such confusion, not to mention a pointless makework project for system administrators.
  • Re:Hard to dis (Score:3, Informative)

    by TheNetAvenger ( 624455 ) on Saturday May 05, 2007 @09:19PM (#19006741)
    I would expect this shoddy driver support out of ATI, since they have always been pretty disappointing. But nVidia is a true disappointment, since their driver support had always been top-notch until now.


    As a PS...

    For Vista, NVidia and ATI had to write the entire driver from scratch. From GPU scheduling and RAM virtualization to tons of other Vista features of the WDDM, the leap is quite significant.

    However, the thing people don't seem to understand is that even if your video card only has a crap Vista driver available, you can just install the XP driver. You lose the WDDM and Aero concepts, but Vista works just like XP and will give you back the same quality and experience for video.

    So all the people whining about not moving to Vista because of the video driver problems are really not too bright. They can run the same XP driver on Vista that they are using now, and still have the other features of Vista. Then, when NVidia and ATI get all the bugs out of the Vista driver version, they can move up to the cooler new driver features Vista offers in the WDDM video subsystem.
  • by rapidweather ( 567364 ) on Saturday May 05, 2007 @09:52PM (#19006929) Homepage
    1. Fonts, they are simply not as good as Windows.

    Of course I don't agree.
    I'm doing a long-term comparison test between Fedora Core 6 and my Knoppix remaster [geocities.com], both installed on the same machine: an HP Pavilion 8250, maxed out on memory, with a dual hard drive setup - a 2 GB drive for MSDOS to run my loadlin menus and for GRUB in the MBR, and a 160 GB main drive for both Linux installations to use.
    My Knoppix remaster, Rapidweather Remaster of Knoppix Linux runs from a "tohd" partition, with a really big "persistent home" partition, and a common swap. So, even though I have a nice "logo16" splash screen with a bright yellow boot prompt, I don't get to see it on a daily basis with the "loadlin" setup, only if I decide to run off the CD for some special purpose.


    I have all of the fonts that I could possibly get from the Debian package servers, and I delight in showing off how well Firefox, for instance, displays web pages, compared to Windows XP (another box, with P4 HT and 128 MB ATI). The Fedora Core 6 installation does not quite measure up to either Rapidweather Remaster or Windows XP when it comes to the "font comparison".
    I realized early on that I would need the fonts; no one is going to "get used to" poor fonts once they've seen something better. The original Knoppix I started with, and the latest ones I have reviewed, have what I would call "minimal" fonts that I would not be satisfied with.

    Rapidweather

  • Re:Total bullshit (Score:2, Informative)

    by Shulai ( 34423 ) on Saturday May 05, 2007 @10:08PM (#19007015) Homepage
    Or just pick the required shared libraries and put them anywhere the linker can find them. That is what distros sometimes do. And this is what Windows does too, some weird compatibility hacks aside.
  • by udippel ( 562132 ) on Saturday May 05, 2007 @11:05PM (#19007283)

    <OT>
    As an older slashdotter, I am quite disappointed with the discussion so far. A few have suggested discussing the topic in question, namely ZFS. But, as so often, it's clear that people just speak blindly without having read either the original article or anything about ZFS.

    </OT>
    ZFS solves just about every problem we have had with filesystems since FAT, and this same community was pretty enthusiastic about it in http://developers.slashdot.org/article.pl?sid=05/11/16/2036242 [slashdot.org].

    Most of all, I am astonished that almost everyone talks 'virtualisation': VMs, QEMU, Xen.
    When it comes to filesystems, suddenly many seem to want to do everything on their own, on physical platters: partition, volumes/RAID, format. ZFS is a virtual filesystem, where none of that is physically needed. There is a nice demo on how to create 100 mirrored filesystems within 5 minutes: http://www.opensolaris.org/os/community/zfs/demos/basics/ [opensolaris.org]
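
    In the spirit of that demo, a sketch (pool and disk names made up): one mirrored pool, and then filesystems are nearly free to create.

        zpool create tank mirror c1t0d0 c1t1d0
        for i in $(seq 1 100); do zfs create tank/fs$i; done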

    Of course, a filesystem should be a black box, an object, instead of the user having to do low-level work. ZFS provides this, and more to the point: it necessarily has to cross layers to do so.

    Snapshots ought to be available easily, at any moment in time, without taking much space. ZFS does this by storing only the changes and sharing the unmodified data. If you want to do that, you need an abstraction of the hardware. That is, crossing layers. Not to mention writeable snapshots.

    Adding new drives without partitioning, slicing, or formatting - just adding to the existing pool, with striping being adapted automagically. This needs a cross-layer interface, right?

    The transactional filesystem guarantees uncorrupted data at power failures and OS crashes. If you do this across a pool of physical platters, you need operations across layers.

    There is an interesting blog post on the usage of ZFS for home users. It contains some good arguments for why ZFS would be useful for Linux's push onto the desktop. You can find it here: http://uadmin.blogspot.com/2006/05/why-zfs-for-home.html [blogspot.com]

    Last but not least, the online checking of all your data ('scrubbing' and 'resilvering') is a valuable feature for Linux (and the home user) as well.
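
    Both pool growth and the online check are one-liners (names made up):

        zpool add tank mirror c1t2d0 c1t3d0   # new space; striping adapts
        zpool scrub tank                      # online check of all data
        zpool status tank                     # shows scrub/resilver progress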

    To me it looks like, as of today, just about everyone liked the features of ZFS. Now that it requires breaking some old habits, suddenly we resist change and would rather stick to older concepts.
    As if GPLv2 vs GPLv3 were not enough of a threat to Linux, do we now unashamedly permit a built-from-the-ground-up filesystem to overtake us as well?

  • Re:Merit (Score:3, Informative)

    by Daniel Phillips ( 238627 ) on Sunday May 06, 2007 @12:19AM (#19007659)
    just in terms of video based on where the video drivers run, performance will always be a contention when compared to OSes that structured so that the main video drivers are kernel or user/kernel mode hybrids as in Vista

    Nonsense. Modern video hardware is predominantly driven by DMA, which requires an insignificant number of kernel calls after initial setup. The rest of your points are just as empty and/or misinformed as your first, not worth a response.
  • Re:Merit (Score:4, Informative)

    by einhverfr ( 238914 ) <chris...travers@@@gmail...com> on Sunday May 06, 2007 @12:42AM (#19007793) Homepage Journal
    Technically, the subsystems in NT are user-mode processes, though they are (to my knowledge) the only user-mode processes that cause blue screens when they crash. To my knowledge, the only layers in the NT kernel are between the executive and the drivers.

    Think of subsystems as being like shells with system-specific behavior. For example, filenames are case-sensitive in the POSIX subsystem but not in the Win32 subsystem.

    Honestly, I think that Windows has the *wrong* layers. The subsystem layer was intended to allow for compatibility with software written for other operating systems, but to my knowledge only the Win32 subsystem has ever been consistently maintained (the POSIX subsystem is maintained at the moment, but only *after* Microsoft bought OpenNT). Windows doesn't need this functionality, but it really needs nice VFS and inode layers in its filesystem.

    Finally, the grandparent's post about NT4 being a credible gaming platform is just laughable. I don't even know where to start. It seems to me that the change is more likely to have been made to get additional performance out of CAD/CAM applications, which also use 3D acceleration. So you are right about the GP poster not knowing what he writes about.
  • Re:spit and polish (Score:3, Informative)

    by drsmithy ( 35869 ) <drsmithy@nOSPAm.gmail.com> on Sunday May 06, 2007 @03:37AM (#19008495)

    Most recently, it has been the package management. I have been all but forced to use the "commercial" RedHat up at work, and I still cannot believe that Redhat uses a lame package manager that requires you to "solve your own" dependencies.

    They don't. Up2date resolves dependencies.

    Redhat is another problem. rpm doesn't have the smarts to do anything for you. If you want any kind of 'immediate' commands, you have to 'yum' them. This isn't acceptable in a corporate environment.

    Well, sure - but that's because the whole "dependency hell" thing Linux has developed isn't really acceptable in a corporate environment, not because of anything specific to yum. It's not like Debian is meaningfully different in that regard.

    yum is a bastard that is excluded from RedHat so they can maximize acceptable up2date profits.

    Yum isn't "excluded" from Red Hat (indeed, RHEL5 has replaced up2date with yum).
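
    For illustration (the package name is made up; any package with unmet dependencies behaves the same):

        # rpm alone only reports what's missing:
        rpm -i postgresql-server.rpm     # -> "error: Failed dependencies: ..."
        # yum resolves and fetches dependencies automatically:
        yum install postgresql-server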

    I could really care less if RedHat goes out business or not.

    Considering how much kernel development they fund and how important their product is to adoption of Linux in the enterprise, you probably should.

    Debian is at least 1 full generation ahead of RedHat. Redhat Enterprise is still redhat 9 with updates.

    Just like the current version of Debian is the previous version with updates, you mean?

  • Re:Merit (Score:3, Informative)

    by Znork ( 31774 ) on Sunday May 06, 2007 @05:21AM (#19008867)
    Apparently, you haven't even looked at any performance comparisons. Linux easily keeps up with, and sometimes even exceeds, Windows performance, even under API-replication solutions such as Cedega or Wine.

    Google for linux windows games performance.
  • by IpSo_ ( 21711 ) on Sunday May 06, 2007 @01:15PM (#19011349) Homepage Journal
    It's not dead at all. I'm not even sure how much of the coding of Reiser4 Hans did himself, but Reiser4 is still actively developed by pretty much the same people as before Hans was arrested.

    There was talk just last week about taking another crack at getting it included in the kernel.
