
Does Linux "Fail To Think Across Layers?" 521

Posted by kdawson
from the one-advantage-of-the-cathedral dept.
John Siracusa writes a brief article at Ars Technica pointing out an exchange between Andrew Morton, a lead developer of the Linux kernel, and a ZFS developer. Morton accused ZFS of being a "rampant layering violation." Siracusa states that this attitude of refusing to think holistically ("across layers") is responsible for all of the current failings of Linux — desktop adoption, user-friendliness, consumer software, and gaming. ZFS is effective because it crosses the lines set by conventional wisdom. Siracusa ultimately believes that the ability to achieve such a break is more likely to emerge within an authoritative, top-down organization than from a grass-roots, fractious community such as Linux.

  • Merit (Score:2, Informative)

    by ez76 (322080)
    There is some merit to what Siracusa is saying, at least on the gaming and multimedia fronts.

    Windows was a hamstrung performer for graphics until NT 4.0's rearchitecture [], which placed key portions of the OS (including third-party graphics drivers) at a much lower level.
    • Re: (Score:3, Insightful)

      by pionzypher (886253)
      Agreed, but is that not also its Achilles heel? Kernel-space drivers can take down the whole system, where userland drivers cannot.
      • Re:Merit (Score:5, Insightful)

        by FooAtWFU (699187) on Saturday May 05, 2007 @05:33PM (#19004801) Homepage
        Indeed. Whatever downsides layers have, they keep things sane. If you're going to make a mess of things, at least with layers you have an organized mess. There's a reason that Linux is more secure than Windows.
    • by 26199 (577806) *

      ...and then Vista moved them back?

  • by catbutt (469582) on Saturday May 05, 2007 @04:52PM (#19004465)
    This is like comparing a monarchy with anarchy, without acknowledging that there are in-between solutions that have advantages of their own. Democracy (and representative democracy) being one example.

    Not saying the linux development community should be a democracy with everything voted on or whatnot, just saying that there may be creative approaches that have yet to be explored. You'd think smart people with a penchant for game theory would be working on it.

    Food for thought.
    • Democracy Sucks. (Score:3, Interesting)

      by LibertineR (591918)
      Which is why America is a Representative Republic and NOT a Democracy.

      With Democracies, you end up with the tyranny of the majority, regardless of whether the minority opinion is the correct one. Under a Republic form, a large enough minority can plug up the works and force negotiation with the majority before a final solution is agreed upon.

      The Linux development community needs representative decision making: there are too many voters, and hence almost no direction or real progress toward a cohesive goal.

      • The Linux Development community needs representative decision making, there are too many voters, hence, almost no direction or real progress towards a cohesive goal.

        Your argument assumes that the "Linux Development community" (whatever that is) has, needs, or wants a common goal. There has been project after project that was supposed to unify the "linux community" or "open source community," but historically every single one has fallen apart when it became obvious that the majority of people that the

      • Re: (Score:2, Interesting)

        by poopdeville (841677)
        Which is why America is a Representative Republic and NOT a Democracy.
        With Democracies, you end up with the tyranny of the majority, regardless of whether the minority opinion is the correct one.

        Yes, a tyranny of the minority is clearly better.

        Hint: The only correct opinion regarding the state is the will of its subjects.
        • by catbutt (469582)

          The only correct opinion regarding the state is the will of its subjects.

          But measuring that will is harder than you may think. For instance, when there are more than 3 options, plurality voting (i.e. select the one that gets the most votes) is completely broken, as it unfairly rewards the choice that is the most different from other choices (that is, it is subject to vote splitting).

          And as for what the previous poster called "tyranny of the majority": typical voting will weigh the votes of everyone equally, which doesn't work well for things where, for instance, a slight minority
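The vote-splitting failure of plurality voting described in this thread can be shown with a toy tally (a minimal Python sketch; the candidate labels and ballot counts are invented for illustration):

```python
from collections import Counter

# Hypothetical electorate: 100 voters rank three options.
# A and B are similar platforms; C is the "most different" option.
ballots = (
    [("A", "B", "C")] * 35 +   # 35 voters: A > B > C
    [("B", "A", "C")] * 25 +   # 25 voters: B > A > C
    [("C", "A", "B")] * 40     # 40 voters: C > A > B
)

# Plurality: count only first choices. A and B split their 60 voters,
# so C wins the plurality count with just 40.
plurality = Counter(first for first, *_ in ballots)
print(plurality.most_common(1)[0][0])  # C

# Head-to-head: A beats C 60-40, because every A and B voter ranks A above C.
a_vs_c = sum(1 for b in ballots if b.index("A") < b.index("C"))
print(a_vs_c)  # 60
```

Here the two similar options split 60% of the electorate, so the most-different option wins the plurality count even though it would lose every pairwise contest; this is exactly the "unfair reward for being different" effect the poster mentions.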

      • by Cyberax (705495) on Saturday May 05, 2007 @05:35PM (#19004829)
        Yes, and the dollar is not a currency, it's a banknote.

        Representative republic is JUST A FORM OF DEMOCRATIC GOVERNMENT.
        • Representative republic is JUST A FORM OF DEMOCRATIC GOVERNMENT.

          There's a lot of overlap there, but a republic can include a number of checks against the will of the people, while a true democracy, pretty much by definition, doesn't.

        • by umeboshi (196301)

          Representative republic is JUST A FORM OF DEMOCRATIC GOVERNMENT.
          Not really. The representation could be divvied up among the 100 most influential families. The progeny of these families would inherit the Senate, each family representing the geographic region it controls.
          Republics have their origins in fascism, and served as a tool to help unify local rulers into a larger, more cohesive nation.
      • by cyber-vandal (148830) on Saturday May 05, 2007 @05:50PM (#19004999) Homepage
        Socialism worked pretty well in the democratic western countries; that's why people aren't dying of cholera/typhoid/starvation in slums anymore.
      • by catbutt (469582)
        America's form of democracy is certainly representative, which is probably the only practical solution when you've got more than a dozen or so people. I'd argue it is still a democracy, if an imperfect one.

        If (in my little fantasy world) the constitution had been written with input from modern-day game theorists and election theorists, I think it could be massively improved. For example, our destructive two-party system is a simple (and unnecessary) by-product of plurality voting. (example: http://k []
      • Re: (Score:3, Insightful)

        by Kjella (173770)
        Which is why America is a Representative Republic and NOT a Democracy.
        With Democracies, you end up with the tyranny of the majority, regardless of whether the minority opinion is the correct one. Under a Republic form, a large enough minority can plug up the works and force negotiation with the majority before a final solution is agreed upon.

        Says the only two-party state I know of. Whichever party has 52% this term screws over the other 48% without flinching. If you wanted negotiation, you should look to Eu
    • by What Is Dot (792062) on Saturday May 05, 2007 @05:18PM (#19004655)
      I totally agree. I think the main problem with Linux based systems (Fedora, Ubuntu, etc.) is that there are so many of them. Diversity is wonderful for free speech, but in the open source community, we have 100 solutions for every 1 problem.
      The best solution would be for the Linux Kernel project to say, "Open source developers can do as they please, but we here at the Kernel project encourage developers to contribute to THESE specific projects: Gnome, Open Office, etc."
      The open source community is massive, but development will take an eternity until a majority of the community starts to support ONE software solution over its alternatives.
      • The best solution would be for the Linux Kernel project to say, "Open source developers can do as they please, but we here at the Kernel project encourage developers to contribute to THESE specific projects: Gnome, Open Office, etc...

        That is not going to happen, but if it did it would not include Gnome [].
      • Re: (Score:3, Insightful)

        by flyingfsck (986395)
        Many people complain about there being 'too many Linux distributions'. The fact is that there is only one GNU/Linux. The distributions merely provide a tool kit that goes along with it; the underlying systems are all the same.
      • Re: (Score:3, Insightful)

        by SQLz (564901)
        That thing that just flew over your head? That was the entire point of open source and the GPL; you just missed it. We want 100 different solutions to every problem, not forever mind you, but all the code is GPL and belongs to the community. Say out of those 100 different solutions, 5 projects have awesome code and the rest are not great, incomplete, etc. It's a win-win either way. First, the other 95 people learned something, and at some point the things they write won't be crap; plus they gave back to t
          Sorry, but I can't agree with your reasoning. To explain, let me set out a few realities of software development, as I've personally come to see them after some years as a developer:

          1. Code reuse is mostly a myth. One significant exception is libraries/components — code specifically designed to be modular, generic, self-contained and open to reuse, and typically on a relatively large scale. Apart from that, it is much more common to reuse design concepts, interface standards and the like, rather than
      • Re: (Score:3, Insightful)

        by Phleg (523632)
        What the fuck? Why should the kernel developers be responsible for picking which desktop environment, mail client, and office suite I use? You might as well have architects tell you what kind of company you can put in their building.
  • Linux discipline (Score:5, Interesting)

    by stevelinton (4044) <> on Saturday May 05, 2007 @04:58PM (#19004527) Homepage
    Personally, I think the Linux kernel manages these issues quite well, if (by conventional standards) rather inefficiently.

    The practice, as I see it, is: "The current rules (layering, etc.) are enforced rigorously (at least in Linus' tree), but radical rewrites of the rules take place relatively often."

    So if ZFS really does achieve wonderful things by violating the current layering it WON'T be accepted for Linux's kernel, but, if Linus can be convinced (via an appropriate chain of lieutenants, usually) that the layering is really an obstacle to achieving these things, we might see a completely new layering appear in 2.6.25 or somewhere, into which ZFS can fit. The inefficiency
    comes from the number of substantial pieces of work that get dropped because they don't fit in, or were misconceived. A more economically rational system would try to kill them sooner. Also, inefficiency arises from the fact that changing the filesystem layering would require every existing filesystem to be rewritten. Linux is notoriously unfazed by this, but in a commercial world, I suspect this would be too hard to swallow and you'd end up with all your filesystems fitting into the model except one, from whence come bugs and code cruft.
    • Re: (Score:3, Informative)

      by Elektroschock (659467)
      "Pawel Jakub Dawidek has ported and committed ZFS to FreeBSD for inclusion in FreeBSD 7.0, due to be released in 2007" (wikipedia)
    • by EmbeddedJanitor (597831) on Saturday May 05, 2007 @08:32PM (#19006465)
      The VFS interface is probably the most complex part of Linux and is not stable (that is, it changes from release to release).

      I maintain a Linux file system which is typically used across various kernel versions, including 2.4.x. Yes, folks, 2.4.x is still being used to ship new products. The changing interface makes for fun and games.

      The VFS-to-filesystem interface is not particularly clean, as you need to do pretty ugly things like incrementing page counts within the file system. Much of this is done to enhance performance, but it could probably have been done better (i.e., preserving a clean interface without real performance compromises).
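The general shape of such an interface contract (a fixed set of operations every filesystem must supply to the layer above it) can be sketched in illustrative Python; the method names below are loosely inspired by VFS concepts and are not the real kernel interface:

```python
from abc import ABC, abstractmethod

class Filesystem(ABC):
    """The 'VFS side' of the contract: any filesystem plugged in
    underneath must provide exactly these operations."""
    @abstractmethod
    def lookup(self, path):
        """Return True if the path exists."""
    @abstractmethod
    def read(self, path, offset, length):
        """Return up to `length` bytes starting at `offset`."""

class RamFS(Filesystem):
    """A concrete filesystem. The layer above never needs to know
    how it stores data, only that the contract is met."""
    def __init__(self):
        self.files = {"/hello.txt": b"hello, vfs"}
    def lookup(self, path):
        return path in self.files
    def read(self, path, offset, length):
        return self.files[path][offset:offset + length]

fs = RamFS()                        # instantiation fails if an op is missing
print(fs.lookup("/hello.txt"))      # True
print(fs.read("/hello.txt", 0, 5))  # b'hello'
```

The trade-off the poster describes follows directly: a strict contract keeps every filesystem swappable, but any performance hack that needs state outside the contract (like the page-count bookkeeping mentioned above) has to leak through it.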

    • by try_anything (880404) on Saturday May 05, 2007 @09:14PM (#19006699)
      I like your title, and I like Linux's approach. The usual approach, especially in commercial software, is to give the users everything they want, no matter how many technical assets need to be sacrificed. Throw away good design, security, and reliability just to get the latest bullet point. That isn't what the Linux community wants. They want things done right, even if it takes a few years to figure out how. Windows is a good example of saying "yes" by default to every demand and ending up with a system so complex and fragile that even gurus don't understand it. (I'm thinking of the recent Raymond Chen blog entry that hit Slashdot, in which he admitted to releasing a bogus Windows patch because he didn't understand how Windows processes shut down.)
  • Total bullshit (Score:5, Interesting)

    by Werrismys (764601) on Saturday May 05, 2007 @04:59PM (#19004531)
    Linux will "support gaming" once games are supported for Linux. Linux has OpenGL and OpenAL; all the illusory walls are market-made. Linux is a platform to build on without the fear of being obsolete in 2 years. DOS games nowadays run in DosBox, as do early Windows games. Even XP needs tweaks to run Win9x games. How is targeting a moving, sucky platform preferable to one that is open? Easy. Games sell for 6 months, tops. You get the initial sales, you get the money. After that it's tough shit if it won't work after the next Windows Update(tm). I have used Linux since 1994, but I work in the IT industry. I am constantly amazed by the amount of BULLSHIT the windows folks put up with. For weird quirks, "shit happens" is the most common reply.
    • by Toby_Tyke (797359)
      How is targeting a moving, sucky platform preferable to one that is open?

      The moving sucky one has ninety plus percent of the home desktop market. Linux has less than one percent, and I've never seen any credible figures suggesting otherwise. Why target a tiny niche market when you can target a huge one?

      And bear in mind, the proportion of linux users who are serious about gaming and do not have access to a windows machine is probably one percent of Linux users. So even if you target windows, ninety ni
    • Re: (Score:2, Interesting)

      by etymxris (121288)
      Tribes 2 didn't fare well through the changes to threading in libc. Exporting the kernel version as 2.4 seemed to work at one point, IIRC, but last I tried I couldn't get it working at all. It's not true to say that a binary blob (which most games are) will work perfectly through changes to the underlying OS.
    • by sloanster (213766)
      The OP is spot on. I've been playing 3D FPS games for years as a linux user, and in my experience linux handles gaming nicely. The good performance of the native linux games is ample evidence of that.

      The Quake 3 Arena I bought in 1999 still runs like a champ on my current linux desktop running SuSE 10.2. Other native linux games that run nicely are Doom 3, Quake 4, UT2004, RtCW, ET, etc.

      The "barriers" to linux gaming are not technical at all; they are political, if they exist at all.
      • Re: (Score:3, Insightful)

        by Sj0 (472011)
        One thing I noticed from your post is that the Windows versions of all those games still run too. I wonder how much of the problem is changing versions of Windows, and how much is just hackish code some developers write?
    • Re:Total bullshit (Score:5, Informative)

      by Jeff DeMaagd (2015) on Saturday May 05, 2007 @05:57PM (#19005063) Homepage Journal
      Do you have a copy of StarOffice from the mid-to-late 90's? Try running that in Linux now. Do you have a copy of MetroX from say, 1998? Try running that in Linux now. Are you still using the original Linux binaries for any games released in the late 90's?

      I'm still using a copy of AutoCAD released in 1995 for the Windows 3.1 Win32s API, and it works fine in Windows 2000 and Windows XP except that it's got the old 8.3 filename limitation. I am still using WordPerfect Suite 8; the current version is 13, I think. I know someone who is still using Corel Draw 7; the current version is 13. All these programs still work fine in XP/2000, and I think that is a splendid record for binaries that were unpatched between Windows updates.

      The DirectX architecture has changed between the 9x and NT lines, but otherwise the legacy APIs are generally well preserved and allow very complex software to work without a patch.
    • Re: (Score:3, Insightful)

      by Kjella (173770)
      Linux will "support gaming" once games are supported for Linux. Linux has OpenGL, OpenAL, all the illusionary walls are market-made.

      Looking at the games I play in Windows, almost every one of them is using DirectX. Now, I am not qualified to know why, but that is a fact. That means that to use OpenGL/OpenAL under Linux, you have to:
      a) Develop a Linux-only game
      b) Develop using your second choice on your primary platform
      c) Develop two code paths

      The first one is just not doable if say the Linux market is 10% of t
      • Re:Total bullshit (Score:5, Insightful)

        by strider44 (650833) on Saturday May 05, 2007 @08:46PM (#19006543)
        Why do you need to develop a linux-only game if you use an opengl/openal code path? OpenGL and OpenAL run fine on Windows, and if you use something like SDL or GLUT then you can even compile the exact same 3D graphics code on Windows and Linux.
    • by jd (1658) <.imipak. .at.> on Saturday May 05, 2007 @10:59PM (#19007253) Homepage Journal
      Ok, ok! So Linux supports OpenGL, OpenAL, OpenEXR, JPEG-2000, Open Inventor, the Renderman scene language and shaders, DirectX under WINE, Constructive Solid Geometry, Sound Fonts, 5.1 audio, audio raytracing, speech synthesis, efficient use of multi-core CPUs, real-time process scheduling and asynchronous I/O, but... What have the Romans ever done for us?
  • by Anonymous Coward
    Linux and other open source projects are getting a harsh lesson in what it is like to ship consumer-grade software products. No more RTFM! No more 'did you submit a bug report???' No more 'this bug/problem is not our fault since we don't control such-and-such library we use.'

    Project vs Product

    Everyone is impressed with how far you've progressed when you are working on a project.

    Everyone is pissed off with how much you've left undone when you are working on a product.

    Welcome to reality open source developers.
    • by howlingmadhowie (943150) on Saturday May 05, 2007 @05:42PM (#19004897)
      Linux and other open source projects are getting a harsh lesson in what it is like to ship consumer grade software products.

      um, you do know that linux has been the operating system of choice for supercomputers, webservers, special effects production, scientific computing etc. for a number of years now, don't you? because you seem to think that linux, freebsd, openbsd or whatever just suddenly turned up yesterday or something. are you also aware of the fact that a lot of people who write free and open-source software get paid good money to do so?
      • by Stevecrox (962208)
        You're missing his point; he's talking about the home desktop consumer market, and most of the applications you're describing are in the high-technology market. Those markets expect very different things, and that's where Linux fails. Your animator or scientist is going to write a program in a set language; they don't care how it looks or how it works, just that it does work. I remember a statement from a physicist along the lines of "tell me which language to learn and I'll program my simulations in it, just stop changing the languages!
  • ZFS definition (Score:3, Informative)

    by icepick72 (834363) on Saturday May 05, 2007 @05:03PM (#19004551)
  • Well, no. (Score:5, Informative)

    by c0l0 (826165) * on Saturday May 05, 2007 @05:13PM (#19004617) Homepage
    Alternative approaches to implementing subsystems of the Linux kernel are often developed concurrently, in parallel, and there's a system you can compare to Darwinian evolution that decides (in most cases) which one of a given set of workalikes makes it into the mainline tree in the end. That's why the Linux kernel itself incorporates, or tries to adhere to, a UNIX-like philosophy: make a large system consist of small interchangeable parts that work well together and do one task as close to perfect as possible.
    That's why there are so many generic solutions to crucial things, like "md", a subsystem providing RAID levels for any given block device, or lvm, providing volume management for any given block device. Once those parts are in place, you can easily mingle their functions together: md works very nicely on top of lvm, and vice versa, since all block devices you "treat" with one of lvm's or md's functions/features, again, result in a block device. You can format one of these block devices with a filesystem of your choice (even ZFS would be perfectly possible, I suppose), and then incorporate this filesystem by mounting it wherever you happen to feel like.
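The composition property described above (every layer consumes a block device and yields a block device) can be sketched in a few lines of illustrative Python; the class names and byte-level behaviour are invented for the sketch and are not Linux's actual md/lvm code:

```python
class BlockDevice:
    """A trivial in-memory 'disk': a flat array of bytes."""
    def __init__(self, size):
        self.data = bytearray(size)
    def write(self, offset, payload):
        self.data[offset:offset + len(payload)] = payload
    def read(self, offset, length):
        return bytes(self.data[offset:offset + length])

class Mirror(BlockDevice):
    """md-style RAID-1: writes go to every member, reads come from the
    first. Crucially, a Mirror *is* a block device itself, so it can be
    stacked under or over any other layer that speaks the same interface."""
    def __init__(self, members):
        self.members = members
    def write(self, offset, payload):
        for m in self.members:
            m.write(offset, payload)
    def read(self, offset, length):
        return self.members[0].read(offset, length)

# lvm-on-md or md-on-lvm: either order works, because each layer
# presents the same block-device interface it consumes.
disk_a, disk_b = BlockDevice(1024), BlockDevice(1024)
raid1 = Mirror([disk_a, disk_b])
raid1.write(0, b"hello")
print(raid1.read(0, 5))   # b'hello'
print(disk_b.read(0, 5))  # b'hello' -- the write reached both members
```

The design point is that the stacking is free: nothing in `Mirror` knows whether its members are raw disks, logical volumes, or further mirrors.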
    There are other concepts deep down in the kernel's inner workings that closely resemble this pattern of adaptability, like, for example, the VFS layer, which defines a set of requirements every filesystem has to adhere and comply to. This ensures a minimal set of viable functionality for any given filesystem, makes sure those crucial parts of the code are well-tested and optimized (since everyone _has_ to use them), and also makes it easier to implement new ideas (or filesystems, in this specific case).

    Now, ZFS provides at least two of those already existing and very well working facilities, namely md and lvm, completely on its own. That's what's called "code duplication" (or rather "feature duplication", which I suppose is more appropriate here), and it's generally known as a bad thing.
    I do notice that ZFS happens to be very well-engineered, but this somewhat monolithic architecture still bears the probability of failure: suppose a crucial flaw is found somewhere deep down in the complex system ZFS inevitably is; chances are you've got to massively overhaul all of its interconnecting parts.

    Suppose there's a filesystem developed in the future that's even better than zfs, or at least better suited to given tasks or workloads - wouldn't it be a shame if it had to implement mirroring, striping and volume-management again on its own?

    Take an approach like md and lvm, and that's not even worth wasting a single thought on. The systems are already there, and they're working fantastically (I've been an avid user of md and lvm for years now, and I frankly cannot imagine anything doing these jobs noticeably better). I'd say that this system of interchangeable functional equivalents, and the philosophy of "one tool doing one job", is absolutely ideal for a distributed development model like Linux's.

    It seems to be working since the early nineties. There must be something right about it, I suppose.
    • by Cyberax (705495)
      But there's a problem - sometimes you need to do something across the layers...
      • Re:Well, no. (Score:5, Insightful)

        by KagatoLNX (141673) <> on Saturday May 05, 2007 @06:49PM (#19005625) Homepage
        The word "refactoring" applies here.

        When the layers don't meet your needs, you have two options.

        You can either violate the layering or you can get the layers refactored.

        In Linux, we do not accept the first. Why? Because it generates bad software...period.

        Writing drivers for MacOSX is a pain...because of the mingling between Mach, BSD, and everything else they did to make it work.

        Drivers for Windows have always been a source of instability because there isn't good layering there either. Try to write database code on Windows: the lack of coherent design presents dozens of incompatible interfaces with different features.

        You can do what these people do. You can make a "product" that "works" without regard to design. Eventually, you end up doing a complete rewrite. The fact of the matter is that Linus puts design before function, and maintainability before progress. As such, we move slowly and we refactor. However, progress is steady and things do, generally, get better. Of course, there are always people who want it to be everything.
  • Well (Score:3, Insightful)

    by (653730) on Saturday May 05, 2007 @05:15PM (#19004633)
    It's not just Andrew Morton, it's basically every core linux kernel hacker that has spoken on the issue.

    It's pretty obvious; I don't think that even the ZFS developers will deny it. They'll just say "it's a layering that was worth breaking".
  • by IpSo_ (21711) on Saturday May 05, 2007 @05:16PM (#19004643) Homepage Journal
    Reiser4 introduced us to all sorts of interesting capabilities never before seen in a file system (at the time), but I believe this same "layering violation" attitude pretty much put a stop to any of it getting into the kernel. The Reiser guys were forced to pretty much cripple their file system, feature-wise, if they were to have any hope of getting it included in the kernel.

    See Reiser4 Pseudo Files [] as one example.

    I can understand that in certain cases "layering violations" are bad, but Linux kernel developers don't even seem to be willing to experiment or think outside the box at all.

    Both sides have valid arguments... I don't think there is any easy solution, but it would be nice to see more forward thinking in the community.

    • Re: (Score:3, Insightful)

      by l3v1 (787564)
      I can understand that in certain cases "layering violations" are bad, but Linux kernel developers don't even seem to be willing to experiment or think outside the box at all

      Such comments are just plain weird. You people seem to think everybody is a genius except the linux kernel devs. They are the ones who can't think otherwise, they are at fault for following rigid rules, they are to be blamed that wonderful innovations don't follow the rules, they should think outside of the box and the rest don't even b
  • Linux as a CLI is robust and mature. I work every day via ssh or a terminal, and I manage a number of servers this way; it's a pleasure.

    But when I look at Linux as a viable desktop alternative for the non-compsci crowd, I tend to cringe. The patchwork that makes Linux so flexible, that *really* puts *you* in charge, is the exact thing that makes Linux so unfriendly. Most people don't want tonnes of choice, not because they're stupid, but because they don't want to spend a lot of time fussing with their computer.
    • by Peaker (72084)
      Try Ubuntu/Kubuntu.

      They don't strip away your choices, but they do hide them behind sane defaults, so unless you really want to, you are not aware of having to make any decisions.

      Linux is ready for the desktop, and it is already on many desktops.
      Many people don't use it, not because it's not ready, but because they don't know how to burn a CD, how to boot from it, or what "installing an OS" means, and because they are afraid. Not because Linux "is not ready" or any of that nonsense.
  • I don't know anything about ZFS, but I think his general point may have merit. Consider the problem of speeding up the boot process. This would require interaction between desktop hackers, init hackers, filesystem hackers, etc. etc. Many possible speedups might require layering violations (desktop application making requests about desired file layout on the disk etc.) Due to the technical, social, political structure of Linux this is just unlikely to happen (unless a single distro has enough resources to th
    • by try_anything (880404) on Saturday May 05, 2007 @09:30PM (#19006797)
      When your design constrains you, you don't throw it away and proceed into anarchy. You create a new design. If the layering scheme kills performance, the fix is to create a new layering scheme that accommodates the performance improvements you want to make.

      Having a coherent design is what allows people to reason about the system as a whole. Breach the design, and suddenly nobody can say anything about anything without tracking down and understanding all of the code involved. Commercial companies do this all the time when playing catch-up with rivals, because they have to retain their customers at all costs, but they suffer terribly for it in maintenance costs and stability. There's no reason in this case for Linux to take the fast, self-destructive route. Linux can wait for a coherent solution, even if it is years coming.

  • by krbvroc1 (725200) on Saturday May 05, 2007 @05:30PM (#19004759)
    Layers are easier to code, to understand, and to test. Layers and boundaries between pieces of software are your friend. To some degree, that is why the Internet, based upon a layered network model (TCP on top of IP on top of Ethernet), is so diverse.

    Layering is what keeps things manageable. Once you start getting your software's tentacles into several layers, you make a mess of things for both yourself and others. It's a trade-off: complexity/speed vs. simplicity/maintainability/interoperability.
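The layered model mentioned above can be sketched as nested encapsulation, where each layer adds and strips only its own header (a toy Python illustration; the header strings are invented and bear no resemblance to real Ethernet/IP/TCP framing):

```python
# Each layer wraps the payload from the layer above with its own header,
# and on receive strips exactly that header. It never inspects the rest,
# which is why any layer can be swapped out independently.
def tcp_send(data):     return b"TCP|" + data
def ip_send(segment):   return b"IP|" + segment
def eth_send(packet):   return b"ETH|" + packet

def eth_recv(frame):    return frame.removeprefix(b"ETH|")
def ip_recv(packet):    return packet.removeprefix(b"IP|")
def tcp_recv(segment):  return segment.removeprefix(b"TCP|")

wire = eth_send(ip_send(tcp_send(b"hello")))
print(wire)                               # b'ETH|IP|TCP|hello'
print(tcp_recv(ip_recv(eth_recv(wire))))  # b'hello'
```

The diversity argument follows from the same picture: because `ip_send` never looks inside its payload, the layers above and below it can evolve independently.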
    • That's fine (Score:5, Insightful)

      by Sycraft-fu (314770) on Saturday May 05, 2007 @05:57PM (#19005071)
      But the OSI layers are guidelines that help design things, not rigid levels that must be maintained. They are mixed up ALL the time. As a simple example, see Layer-3 switches. These are devices that simultaneously work at Layer 2 and 3 when it comes to dealing with traffic. They break down the traditional view of a separate router and switch, and they are good at what they do. There's plenty of stuff at the high end that's similar. Many things that are part of the presentation layer are actually done by the application (like encryption in SSH) and so on.

      There's nothing wrong with having a layered design philosophy as it can help people decide what their product needs to do, and what it needs to talk to. For example if I am designing an application that works over TCP/IP, I really don't need to worry about anything under layer 4 or 5. However it shouldn't be this rigid thing that each layer must remain separate, and anything that combines them is bad. I don't need to, and shouldn't, take the idea that my app can't do anything that would technically be Layer 6 itself. Likewise in other situations I might find that TCP just doesn't work and I need to use UDP instead, but still have a session which I handle in my own code (games often do this).

      Had we stuck to the OSI model as a strict maximum, rather than a guiding principle, the Internet probably wouldn't have scaled to the point it has now.
      • Re:That's fine (Score:4, Insightful)

        by einhverfr (238914) <chris.travers@gm ... om minus painter> on Sunday May 06, 2007 @02:26AM (#19008255) Homepage Journal
        Evidently you have never really followed the development of OSI. The only reason they are taught as "guidelines" today is that the OSI stacks that intended to implement it fell flat because they were horribly bloated monstrosities which solved all the wrong problems in all the wrong ways and took too long to get to market. On the other hand, the TCP/IP 4-layer model is actually pretty strict (as the OSI model was designed to be).

        Just to provide some context, the OSI initiative was an attempt by the UN ITU and other bodies to create an ultimate convergence network capable of adequately handling data and voice across the same physical links. Many of the layers in the OSI protocol diagram (such as the data link layer) are designed to merge circuit-switched and packet-switched paradigms. The idea was that if you could provide the flexibility to create virtual circuits for voice traffic and still handle packets with the remaining bandwidth, you would not need separate network access points for your internet and voice traffic. Many of the OSI protocols (such as H.323) assume that such virtual circuits are available, which is why they are so cumbersome over TCP/IP.

        I personally think that the OSI board designed the wrong kind of network for the wrong kind of problems. It is better to have a TCP/IP model, perhaps multiplexed with voice over ATM, than to have intimate integration between such fundamentally different services. I also think that if people are going to teach the OSI model, they need to also teach the OSI design goals and those protocols which are still based on it: X.400, X.500 (and LDAP, which is basically X.500 over TCP/IP), X.509 (and hence SSL), H.323, T.120, and ASN.1.

        Most of the time, when people start getting experience with these protocols they run screaming from anything OSI ;-)
  • by Mad Quacker (3327) on Saturday May 05, 2007 @05:34PM (#19004815) Homepage
    Open source software gets better because new people want new features, which they then contribute. You can't blame Andrew Morton for disliking what ZFS is going to do; this is just how people work. This is why they say you can't teach an old dog new tricks.

    That said, ZFS is one of the coolest things to happen to your files in a long time. The current disk block device model is basically unchanged from the beginning of computing; it is ancient and actually quite stupid. Over the decades, layers have kept getting added to make it more robust, but really it's a monstrosity. Partitions are dumb, LVM is dumb, disk block RAID is dumb, monolithic filesystems are dumb. All the current Linux filesystems should be thrown out.

    I don't want to care how big my partitions are, what level parity protection my disks have, or any of that junk. I want to add or remove storage hardware whenever I want, and I want my files bit-exact, and I want to choose at will for each file what the speed vs protection from hardware failure is. Why shouldn't one file be mirrored, the next be striped, and the next have parity or double-parity protection? Why can't very, very important files have two or three mirrors?

    From the current status of ZFS, however, I think this could be quickly built under the GPL 2+ by one or two determined people, and it would involve gutting the Linux file systems.
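As a rough illustration of what per-file protection choices actually buy you, here is a toy Python sketch of single XOR parity (the RAID5-style scheme mentioned above) versus simply mirroring. All names are invented for illustration; real ZFS does this at the block-pointer level, not per Python object:

```python
from functools import reduce

# Single-parity protection: store one XOR block alongside N data blocks.
# Any ONE lost data block can be rebuilt from the survivors plus parity,
# at a storage cost of 1/N extra -- versus mirroring, which costs 100%
# extra but survives the loss without any reconstruction arithmetic.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

def parity(blocks):
    """RAID5-style parity: XOR of all data blocks."""
    return reduce(xor_blocks, blocks)

def rebuild(surviving_blocks, parity_block):
    """Recover the single missing block from survivors + parity."""
    return reduce(xor_blocks, surviving_blocks, parity_block)
```

The point of the comment is that a filesystem which knows about redundancy could pick between these strategies per file, instead of one fixed policy per block device.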
  • by DaleGlass (1068434) on Saturday May 05, 2007 @05:41PM (#19004887) Homepage
    Layers might not be ideal, but they're consistent. The filesystem does its part, RAID/LVM does its own, etc.

    ZFS seems to want to take over the entire disk subsystem. Why? Is there a reason why it needs its own snapshot capabilities, instead of just using LVM?

    These sorts of things always smell fishy to me, due to a feeling that once you start using it, it locks you in more and more until you're doing it all in this new wonderful way that's incompatible with everything else. Even though it's open source, it's still inconvenient.

    This approach reminds me a lot of DJB's software: If you try to get djbdns you'll be also strongly suggested to use daemontools as well. The resulting system is rather unlike anything else, and a reason why many people avoid DJB's software.
    • by lokedhs (672255) on Saturday May 05, 2007 @06:18PM (#19005307)

      ZFS seems to want to take over the entire disk subsystem. Why? Is there a reason why it needs its own snapshot capabilities, instead of just using LVM?
      Because there are many things your storage system can do if it has knowledge of the entire stack.

      The problem with a "traditional" layered model is that the file system has to assume that the underlying storage device is a single consistent unit of storage, where a single write either succeeds, or it fails (in which case the data you wrote may or may not have been written). This all sounds very good, and file systems like ext2 are written based on this assumption.

      However, if the underlying storage system is RAID5 and there is a power loss during the write, the entire stripe can become corrupt (read the Wikipedia article [] on the subject for more information). The file system can't solve this problem because it has no knowledge of the underlying storage structure.

      ZFS solves this problem in two ways, both of which require the storage model to be part of the filesystem:

      1. Each physical write never overwrites "live" data on the disk. It writes the stripe to a new location, and once it's been completely committed to disk the old data is marked as free.
      2. ZFS uses variable stripe width, so that it does not have to write larger stripes than necessary. In other words, a large write can be translated directly into a write to a large stripe on the storage system, and a smaller write can use a smaller stripe width. This can improve performance since it can reduce the amount of data written.
      There are plenty of other areas where this integration is needed, including snapshotting, but I hope the above explanation shows that the layered model is not always good.
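Point 1 above (never overwriting live data) can be sketched in a few lines of Python. This is an illustrative toy under invented names, not ZFS's actual implementation:

```python
# Copy-on-write sketch: a write never touches the "live" block. It goes to
# a freshly allocated location first, and only then is the pointer flipped.
# If we crash before the flip, the old data is still intact and consistent.

class CowStore:
    def __init__(self):
        self.blocks = {}      # block address -> data
        self.live = {}        # file name -> address of its live block
        self.next_addr = 0

    def write(self, name: str, data: bytes) -> None:
        addr = self.next_addr          # allocate a fresh block...
        self.next_addr += 1
        self.blocks[addr] = data       # ...write the new data out first...
        old = self.live.get(name)
        self.live[name] = addr         # ...then atomically repoint
        if old is not None:
            del self.blocks[old]       # old block is only freed afterwards

    def read(self, name: str) -> bytes:
        return self.blocks[self.live[name]]
```

The key property is the ordering: data lands on disk before the pointer moves, so there is never a window where the only copy of a file is half-written.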
  • holistic vs mess (Score:2, Interesting)

    by nanosquid (1074949)
    UNIX and Linux design is quite holistic: features are often added at various levels of the system in order to make a whole work out. For example, desktop search support had both user and kernel space components, Beryl/Compiz-style interfaces have triggered changes in Gnome, X11, and the kernel, etc.

    UNIX and Linux have been careful about avoiding simplistic designs. ZFS is a simple, obvious answer to a problem: just pack all the functionality into one big codeball and start hacking. Microsoft does a lot o
  • by Anonymous Coward on Saturday May 05, 2007 @06:07PM (#19005171)
    "I mean, although ZFS is a rampant layering violation and we can do a lot of
      the things in there (without doing it all in the fs!) I don't think we can
      do all of it." []

    It sounds like his main point was pointing out problems with the current file system, rather than saying ZFS is bad. I bet he simply thinks they should try to implement a much better file system than ext3 without breaking the current layering scheme. I don't see why this is so bad. Why not try it, and if it fails miserably, ZFS is already here.

    I think the author of the article took everything out of context and was just looking for some ammo against Linux. His blog post sucked. He just says the same crap that everyone always says. I'm not saying there are no problems, but I don't see how any of the problems relate to Andrew Morton saying the Linux file systems need to be upgraded/replaced.
  • by DragonWriter (970822) on Saturday May 05, 2007 @06:28PM (#19005407)

    Siracusa ultimately believes that the ability to achieve such a break is more likely to emerge within an authoritative, top-down organization than from a grass-roots, fractious community such as Linux.

    Nothing stops an "authoritative, top-down organization" from taking all the open-source work done on Linux, and applying its own methodology to driving it forward; if that's more effective than what everyone else in the Linux community is doing, users will be more interested in adopting what they do with it (and, heck, once the transition occurs, the less-centralized portions of the community will probably follow along and start working on the "Neo-Linux" thus produced.)

    It's true that revolutionary, rather than evolutionary, change is probably best driven by a narrow, committed group with a shared vision and the skills to realize it, rather than by a disorganized community. But there is no barrier to that within Linux; and between the occasional revolutionary changes, the evolutionary changes that the community is very good at will still remain important. With open source, you don't have to choose: you can have a top-down narrow group working on revolutionary changes (you can have many of them working on different competing visions of revolutionary changes, which, given the risk involved in revolutionary change, is a good thing), all while the community at large continues plugging away on evolutionary changes to the base system; and once one of the revolutionary variants attracts attention, it begins receiving evolutionary improvements, too.
  • by codemachine (245871) on Saturday May 05, 2007 @07:08PM (#19005825)
    I've heard from people looking to port ZFS to the BSDs that it is a very difficult port, due to how tangled up it is with the operating system (Solaris) itself, and how much ZFS does for itself (volume management, RAID, etc.).

    On one hand, this gives it some serious advantages when run on Solaris 10. But it also makes it difficult to port. I wonder if that is partially responsible for delaying OS X Leopard?
  • by taxman2007 (1087327) on Saturday May 05, 2007 @08:54PM (#19006581)
    First, and most importantly, Siracusa never states or even suggests that "this attitude" is "responsible for all of the current failings of Linux".

    The direct quote is "I've long seen the Linux community's inability to design, plan, and act in a holistic manner as its greatest weakness."

    You can see the meaning has been completely changed in the summary from one of positive criticism to one of arrogant condemnation.

    Through this change, we can see the poster's true feelings, feelings that are shared by many in the Linux community: responding immaturely and getting all bent out of shape if somebody builds anything that doesn't follow the "Linux philosophy".

    The truth: both Linux in general and ZFS are amazing, powerful tools. One of the best philosophies I've encountered is "use the right tool for the job".

    Nobody is forcing Linux devs to port ZFS, or even use it, or even think about it. The only reason this is an issue is that many in the Linux community realize how powerful ZFS is, and they're subconsciously pissed off that they can't have it. So they respond like a 3rd grade bully, attacking it in a self-defeating attempt to minimize its importance.

    • Re: (Score:3, Interesting)

      by anaesthetica (596507)
      Yes, when you cut off the next sentence from the "direct quote" it sure does sound much more positive.

      ...the Linux community's inability to design, plan, and act in a holistic manner (i.e., "across layers") as its greatest weakness. Linux on the desktop, user-friendly Linux, the consumer Linux software market, Linux games--all the historic struggles in all these areas can be adequately explained solely in terms of this one failing.
  • by geekyMD (812672) on Saturday May 05, 2007 @09:45PM (#19006887)
    I'd love to use Linux as a desktop. I just need a little help.

    I even enjoy spending time tweaking my desktop computer, from back in the days when memory came in 16k chips, IRQs had to be tediously managed, and squeezing every drop out of 640k was fun. But try as I might, I have yet to get a stable, visually appealing, or useful version of Linux on any of my previous 3 computers. Why? Because I can't even get a minimally functional system running, and I give up before I get to the tweaking stage.

    Major problems I encountered, each of which I spent more than half an hour working on: picking a distro, much harder than you think for the non-initiate. KDE vs Gnome? Utterly crappy (ie Mac 6) video support without special get-it-compile-it-yourself drivers. Can't install video drivers, I didn't install gcc (silly me). Can't install video drivers, I'm missing some dependencies. Can't install video drivers, I didn't install the source code for the kernel (silly me). Multiple conflicting versions of drivers and conflicting advice about which one to use. Multiple conflicting instructions on how to install said video drivers. The video driver installer has reams of text output, some of which are error messages. Based on more advice, apparently these error messages may or may not be normal and may or may not be why I never got good video output. My sound card stopped working. I still don't know why.

    Valuing my time at a paltry $50 an hour, I could have easily bought a newer better system with WinXP on it and then taken my wife out to dinner with the remainder.

    If anyone can recommend a distro that will run, out-of-box, on my Dell e1505 with an ATI x1400 graphics card and Creative Audigy soundcard, then I promise you I will excitedly hunt it down and install it. I really do want to switch to Linux; the visuals I've seen other users have are incredible.

    Unfortunately, the fact that I have to ask such a question really shows how Linux in general is completely unprepared for the desktop market. Prove me wrong and recommend a distro.

    PS - please, no berating, calling-of-noob'ing, or general fun making at my expense. I really honestly do want help, and Linux people have tried to help me in these ways before. (they haven't proven helpful yet)
  • by udippel (562132) on Saturday May 05, 2007 @11:05PM (#19007283)

    As an older slashdotter, I am quite disappointed with the discussion so far. A few have suggested discussing the topic in question, namely ZFS. But, as so often, we can make out that people just blindly speak without having read either the original article or anything about ZFS.

    ZFS solves about all and any problems we have had with filesystems since FAT, and this same community was pretty enthusiastic in 1/16/2036242 [].

    Most of all, to me, I am astonished that almost everyone talks 'virtualisation', VM, QEMU, Xen.
    When it comes to filesystems, suddenly many seem to want to do everything on their own, on physical platters: partition, volumes/RAID, format. ZFS is a virtual filesystem, where none of that is physically needed. There is a nice basics/ [] demo on how to create 100 mirrored filesystems within 5 minutes.

    Of course, a filesystem should be a black box, an object, instead of the user having to do low-level work. ZFS provides this, and more relevantly: of course it needs to be cross-layered to do so.

    Snapshots ought to be available easily, at any moment in time, without taking much space. ZFS does so, by only storing the changes and sharing the unmodified data. If you want to do so, you need an abstraction of the hardware. That is, crossing layers. Not to mention writeable snapshots.
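A toy Python sketch of that snapshot idea (all names invented for illustration; real ZFS pins block trees on disk, not Python dicts):

```python
# Cheap snapshots via structural sharing: a snapshot just freezes the
# current map of file -> data block. Unmodified blocks are shared between
# the live tree and every snapshot; copy-on-write means a later write
# creates a new block rather than disturbing the shared old one.

class SnapshotFS:
    def __init__(self):
        self.live = {}         # file name -> immutable data block
        self.snapshots = {}    # snapshot name -> frozen block map

    def write(self, name: str, data: bytes) -> None:
        self.live[name] = data     # new block; snapshots keep the old one

    def snapshot(self, snap_name: str) -> None:
        # A shallow copy shares every data block with the live tree,
        # so taking a snapshot costs almost nothing in space or time.
        self.snapshots[snap_name] = dict(self.live)

    def read(self, name: str, snap_name: str = None) -> bytes:
        tree = self.snapshots[snap_name] if snap_name else self.live
        return tree[name]
```

Only the blocks that change after the snapshot consume new space, which is the "only storing the changes" property described above.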

    Adding new drives without partitioning, slicing, or formatting: just add to the existing pool, with striping adapted automagically. This needs a cross-layer interface, right?

    The transactional filesystem guarantees uncorrupted data at power failures and OS crashes. If you do this across a pool of physical platters, you need operations across layers.

    There is an interesting blog on the usage of ZFS for home users. It contains some good arguments, why ZFS is useful for Linux' Desktop Stride. You find it here: e.html []

    Last but not least, the online checking of all your data ('scrubbing' and 'resilvering') is a valuable feature for Linux (and the home user) as well.

    To me it looks like, as of today, just about everyone likes the features of ZFS. But now that it requires breaking some old habits, suddenly we resist change and would rather stick to older concepts.
    As if GPLv2 vs GPLv3 were not enough of a threat to Linux, are we now unashamedly going to let a new-from-the-bottom-up filesystem overtake us as well?
