Does Linux "Fail To Think Across Layers"?
John Siracusa writes a brief article at Ars Technica pointing out an exchange between Andrew Morton, a lead developer of the Linux kernel, and a ZFS developer. Morton accused ZFS of being a "rampant layering violation." Siracusa states that this attitude of refusing to think holistically ("across layers") is responsible for all of the current failings of Linux — desktop adoption, user-friendliness, consumer software, and gaming. ZFS is effective because it crosses the lines set by conventional wisdom. Siracusa ultimately believes that the ability to achieve such a break is more likely to emerge within an authoritative, top-down organization than from a grass-roots, fractious community such as Linux.
Merit (Score:2, Informative)
Windows was a hamstrung performer for graphics until NT 4.0 saw a rearchitecture [microsoft.com] which placed key portions of the OS (including 3rd-party graphics drivers) at a much lower level.
Re: (Score:3, Insightful)
Re:Merit (Score:5, Insightful)
Re: (Score:3, Interesting)
Yes, yes, yes. Given NT's connection with VMS I would expect the architecture to be sound and well thought out. Furthermore, I don't think anyone (in this thread at least) has said anything that sounded like "Windows is totally devoid of all worth".
Re:Merit (Score:4, Informative)
Think of subsystems as being like shells with system-specific behavior. For example, filenames are case-sensitive in the POSIX subsystem but not in the Win32 subsystem.
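As a rough sketch of the "shells with system-specific behavior" idea (purely illustrative, not NT internals; the dictionary and lookup functions below are invented for the example):

```python
# Toy model: the same underlying namespace viewed through two "subsystems"
# that apply different name-matching policy, like POSIX vs. Win32 above.
files = {"Readme.TXT": b"hello"}

def posix_lookup(name):
    # POSIX-style subsystem: names are case-sensitive
    return files.get(name)

def win32_lookup(name):
    # Win32-style subsystem: names are case-insensitive (but case-preserving)
    for stored, data in files.items():
        if stored.lower() == name.lower():
            return data
    return None

assert posix_lookup("readme.txt") is None       # exact case required
assert win32_lookup("readme.txt") == b"hello"   # any case matches
```

The point of the sketch is that the stored data is identical; only the policy layer on top differs.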
Honestly, I think that Windows has the *wrong* layers. The subsystem layer was intended to allow for compatibility with software written for other operating systems, but to my knowledge only the Win32 subsystem has ever been consistently maintained (the POSIX subsystem is maintained at the moment, but only *after* Microsoft bought OpenNT). Windows doesn't need this functionality, but it really needs nice VFS and inode layers in its filesystem.
Finally, the grandparent's post about NT4 being a credible gaming platform is just laughable. I don't even know where to start. It seems to me that the change is more likely to have been made to get additional performance out of CAD/CAM applications, which also use 3D acceleration. So you are right about the GP poster not knowing what he writes about.
Re: (Score:2)
...and then Vista moved them back?
Re:Hard to dis (Score:5, Informative)
You're right, that's why nobody is using Linux for real systems [top500.org].</sarcasm>
Re: (Score:3, Interesting)
Re: (Score:3, Informative)
As a PS...
For Vista, NVidia and ATI had to write the entire driver from scratch. From GPU scheduling and RAM virtualization to tons of other Vista features of the WDDM, the leap is quite significant.
However, the thing people don't seem to understand is that even if you have a video card that has a crap driver availabl
Re: (Score:3, Informative)
Nonsense. Modern video hardware is predominantly driven by DMA, which requires an insignificant number of kernel calls after initial setup. The rest of your points are just as empty and/or misinformed as your first, not worth a response.
Re: (Score:3, Informative)
Google for linux windows games performance.
authoritative, top-down organization (Score:3, Interesting)
Not saying the linux development community should be a democracy with everything voted on or whatnot, just saying that there may be creative approaches that have yet to be explored. You'd think smart people with a penchant for game theory would be working on it.
Food for thought.
Democracy Sucks. (Score:3, Interesting)
With Democracies, you end up with the tyranny of the majority, regardless of whether the minority opinion is the correct one. Under a Republic form, a large enough minority can plug up the works and force negotiation with the majority before a final solution is agreed upon.
The Linux development community needs representative decision making; there are too many voters, hence almost no direction or real progress towards a cohesive goal.
Re: (Score:2)
Your argument assumes that the "Linux Development community" (whatever that is) has, needs, or wants a common goal. There has been project after project which were supposedly to unify the "linux community" or "open source community," but historically every single one has fallen apart when it became obvious that the majority of people that the
Re: (Score:2, Interesting)
With Democracies, you end up with the tyranny of the majority, regardless of whether the minority opinion is the correct one.
Yes, a tyranny of the minority is clearly better.
Hint: The only correct opinion regarding the state is the will of its subjects.
Re: (Score:2)
The only correct opinion regarding the state is the will of its subjects.
But measuring that will is harder than you may think. For instance, when there are more than 3 options, plurality voting (i.e. select the one that gets the most votes) is completely broken, as it unfairly rewards the choice that is the most different from other choices (that is, it is subject to vote splitting).
And as what the previous poster called "tyranny of the majority", typical voting will weigh the votes of everyone equally, which doesn't work well for things where, for instance, a slight minority
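The vote-splitting point above can be made concrete with a tiny simulation (the candidate names and vote counts are invented for illustration):

```python
# Plurality voting rewards the candidate most unlike the others:
# two similar candidates split their shared supporters' votes.
from collections import Counter

# Hypothetical electorate: 60% prefer some "layered" option, but that
# camp's votes are split across two similar candidates.
ballots = ["layered-A"] * 35 + ["layered-B"] * 25 + ["monolithic"] * 40

tally = Counter(ballots)
winner = tally.most_common(1)[0][0]
# 'monolithic' wins under plurality, despite 60% preferring a layered option
```

Ranked or approval-style ballots are the usual fixes election theorists propose for exactly this failure.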
Re:Democracy Sucks. (Score:4, Insightful)
Representative republic is JUST A FORM OF DEMOCRATIC GOVERNMENT.
Re: (Score:2)
There's a lot of overlap there, but a republic can include a number of checks against the will of the people, while a true democracy pretty much by definition doesn't.
Re: (Score:2)
Republics have their origins in fascism, and served as a tool to help unify local rulers into a larger, more cohesive nation.
Re:Democracy Sucks. (Score:5, Insightful)
Re: (Score:2)
If (in my little fantasy world) the constitution had been written with input from modern-day game theorists and election theorists, I think it could be massively improved. For example, our destructive two-party system is a simple (and unnecessary) byproduct of plurality voting. (example: http://k [karmatics.com]
Re: (Score:3, Insightful)
With Democracies, you end up with the tyranny of the majority, regardless of whether the minority opinion is the correct one. Under a Republic form, a large enough minority can plug up the works and force negotiation with the majority before a final solution is agreed upon.
Says the only two-party state I know of. Whichever party has 52% this term screws over the other 48% without flinching. If you wanted negotiation, you should look to Eu
Re:authoritative, top-down organization (Score:4, Insightful)
The best solution would be for the Linux kernel project to say, "Open source developers can do as they please, but we here at the kernel project encourage developers to contribute to THESE specific projects: Gnome, Open Office, etc."
The open source community is massive, but development will take an eternity until a majority of the community starts to support ONE software solution over its alternatives.
Re: (Score:2)
That is not going to happen, but if it did it would not include Gnome [gnome.org].
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Is that the point of OSS, or its biggest weakness? (Score:3, Insightful)
Sorry, but I can't agree with your reasoning. To explain, let me set out a few realities of software development, as I've personally come to see them after some years as a developer:
Re: (Score:3, Insightful)
Linux discipline (Score:5, Interesting)
The practice, as I see it, is: "The current rules (layering, etc.) are enforced rigorously (at least in Linus' tree) but radical rewrites of the rules take place relatively often."
So if ZFS really does achieve wonderful things by violating the current layering it WON'T be accepted for Linux's kernel, but, if Linus can be convinced (via an appropriate chain of lieutenants, usually) that the layering is really an obstacle to achieving these things, we might see a completely new layering appear in 2.6.25 or somewhere, into which ZFS can fit. The inefficiency comes from the number of substantial pieces of work that get dropped because they don't fit in, or were misconceived. A more economically rational system would try to kill them sooner. Also, inefficiency arises from the fact that changing the filesystem layering would require every existing filesystem to be rewritten. Linux is notoriously unfazed by this, but in a commercial world, I suspect this would be too hard to swallow and you'd end up with all your filesystems fitting into the model except one, whence come bugs and code cruft.
Re: (Score:3, Informative)
File systems are the most complex part of Linux (Score:4, Interesting)
I maintain a Linux file system which is typically used across various kernel versions, including 2.4.x. Yes, folks, 2.4.x is still used to ship new products. The changing interface makes for Fun-And-Games.
The VFS-to-filesystem interface is not particularly clean, as you need to do pretty ugly things like increment page counts etc. within the file system. Much of this is done to enhance performance, but could probably have been done better (i.e., preserving a clean interface without real performance compromises).
Re:Linux discipline (Score:4, Insightful)
Total bullshit (Score:5, Interesting)
I think I can explain (Score:3, Insightful)
The moving sucky one has ninety plus percent of the home desktop market. Linux has less than one percent, and I've never seen any credible figures suggesting otherwise. Why target a tiny niche market when you can target a huge one?
And bear in mind, the proportion of linux users who are serious about gaming and do not have access to a windows machine is probably one percent of Linux users. So even if you target windows, ninety ni
Re: (Score:2, Interesting)
Re: (Score:2)
The quake 3 arena I bought in 1999 still runs like a champ on my current linux desktop running SuSE 10.2. Other native linux games that run nicely are doom3, quake 4, ut2004, RtCW. ET, etc.
The "barriers" to linux gaming are not technical at all; they are political, if they exist at all.
Re: (Score:3, Insightful)
Re:Total bullshit (Score:5, Informative)
I'm still using a copy of AutoCAD released in 1995 for the Windows 3.1 Win32s API, and it works fine in Windows 2000 and Windows XP except that it has the old 8.3 filename limitation. I am still using WordPerfect Suite 8; the current version is 13, I think. I know someone that is still using Corel Draw 7; the current version is 13. All these programs still work fine in XP/2000, and I think that is a splendid record for binaries that were unpatched between Windows updates.
The DirectX architecture has changed between the 9x and the NT lines, but otherwise, the legacy APIs are generally well-preserved and allow very complex software to work without a patch.
Re: (Score:3, Insightful)
Looking at the games I play in Windows, almost every one of them is using DirectX. Now, I am not qualified to know why but that is a fact. That means that to use OpenGL/OpenAL under Linux you either:
a) Develop a Linux-only game
b) Develop using your second choice on your primary platform
c) Develop two code paths
The first one is just not doable if say the Linux market is 10% of t
Re:Total bullshit (Score:5, Insightful)
Re: (Score:3, Funny)
Re:Total bullshit (Score:5, Funny)
Welcome To Reality Open Source (Score:2, Insightful)
Project vs Product
Everyone is impressed with how far you've progressed when you are working on a project.
Everyone is pissed off with how much you've left undone when you are working on a product.
Welcome to reality open source developers.
Re:Welcome To Reality Open Source (Score:5, Insightful)
um, you do know that linux has been the operating system of choice for supercomputers, webservers, special effects production, scientific computing etc. for a number of years now, don't you? because you seem to think that linux, freebsd, openbsd or whatever just suddenly turned up yesterday or something. are you also aware of the fact that a lot of people who write free and open-source software get paid good money to do so?
Re: (Score:2)
As you said, its 2007 (Score:2)
You don't need to know anything to get those running.
Their installation is far easier than the Windows installation, and most of the common things people do "just work". Those rare occasions that don't "just work" have very simple step-by-step howtos all over the place.
Did you really install Windows? (Score:3, Insightful)
It actually offers more options with more terminology than the Ubuntu/Kubuntu installer.
The Ubuntu installer offers you install options:
"Simple - use free space"
"Simple - overwrite whole disk"
"Advanced - Setup your own partition table"
Of course most users can choose one of the simple options. The advanced one has a nice GUI to resize partitions and basically do everything from a GUI.
In Windows its a bi
Re: (Score:2)
ZFS definition (Score:3, Informative)
Well, no. (Score:5, Informative)
That's why there are so many generic solutions to crucial things - like "md", a subsystem providing RAID levels for any given block device, or lvm, providing volume management for any given block device. Once those parts are in place, you can easily mingle their functions together - md works very nicely on top of lvm, and vice versa, since all block devices you "treat" with one of lvm's or md's functions/features, again, result in a block device. You can format one of these block devices with a filesystem of choice (even ZFS would be perfectly possible, I suppose), and then incorporate this filesystem by mounting it wherever you happen to feel like it.
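The "block device in, block device out" composability can be sketched as a toy model (in-memory classes invented for illustration; real md and lvm obviously operate on kernel block devices, not Python objects):

```python
# Toy model of stackable block layers: a mirror (md RAID1 analogue) and a
# linear concatenation (lvm analogue) both expose the same read/write/size
# interface, so they can be stacked in either order.
class MemDev:
    def __init__(self, size):
        self.blocks = [b"\x00"] * size
    def read(self, i): return self.blocks[i]
    def write(self, i, data): self.blocks[i] = data
    def size(self): return len(self.blocks)

class Mirror:
    """md RAID1 analogue: every write goes to every leg."""
    def __init__(self, *legs): self.legs = legs
    def read(self, i): return self.legs[0].read(i)
    def write(self, i, data):
        for leg in self.legs:
            leg.write(i, data)
    def size(self): return min(leg.size() for leg in self.legs)

class Concat:
    """lvm linear-volume analogue: devices glued end to end."""
    def __init__(self, *devs): self.devs = devs
    def _locate(self, i):
        for d in self.devs:
            if i < d.size():
                return d, i
            i -= d.size()
        raise IndexError(i)
    def read(self, i): d, j = self._locate(i); return d.read(j)
    def write(self, i, data): d, j = self._locate(i); d.write(j, data)
    def size(self): return sum(d.size() for d in self.devs)

# A mirror on top of concatenated volumes, exactly the kind of stacking
# the comment describes; the result is, again, just a block device.
vol = Mirror(Concat(MemDev(4), MemDev(4)), Concat(MemDev(8)))
vol.write(5, b"x")
assert vol.read(5) == b"x"
```

Because every layer presents the same interface, a filesystem on top never needs to know how many layers sit beneath it.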
There are other concepts deep down in there in the kernel's inner workings that closely resemble this pattern of adaptability, like, for example, the vfs layer, which defines a set of requirements every file system has to adhere and comply to. This ensures a minimal set of viable functionality for any given filesystem, makes sure those crucial parts of the code are well-tested and optimized (since everyone _has_ to use them), and also makes it easier to implement new ideas (or filesystems, in this specific case).
Now, zfs provides at least two of those already existing and very well working facilities, namely md and lvm, completely on its own. That's what's called "code duplication" (or rather "feature duplication" - I suppose that's more appropriate here), and it's generally known as a bad thing.
I do notice that zfs happens to be very well-engineered, but this somewhat monolithic architecture still bears the probability of failure: suppose there's a crucial flaw found somewhere deep down in this complex system zfs inevitably is - chances are you've got to overhaul all of its interconnecting parts massively.
Suppose there's a filesystem developed in the future that's even better than zfs, or at least better suited to given tasks or workloads - wouldn't it be a shame if it had to implement mirroring, striping and volume-management again on its own?
Take an approach like md and lvm, and that's not even worth wasting a single thought on. The systems are already there, and they're working fantastically (I've been an avid user of md and lvm for years now, and I frankly cannot imagine anything doing these jobs noticeably better). I'd say that this system of interchangeable functional equivalents, and the philosophy of "one tool doing one job", is absolutely ideal for a distributed development model like Linux's.
It seems to be working since the early nineties. There must be something right about it, I suppose.
Re: (Score:2)
Re:Well, no. (Score:5, Insightful)
When the layers don't meet your needs, you have two options.
You can either violate the layering or you can get the layers refactored.
In Linux, we do not accept the first. Why? Because it generates bad software...period.
Writing drivers for MacOSX is a pain...because of the mingling between Mach, BSD, and everything else they did to make it work.
Drivers for Windows have always been a source of instability because there isn't good layering there either. Try to write database code on Windows: the lack of coherent design presents dozens of incompatible interfaces with different features.
You can do what these people do. You can make a "product" that "works" without regard to design. Eventually, you end up doing a complete rewrite. The fact of the matter is that Linus puts design before function, and maintainability before progress. As such, we move slow, we refactor, and we're generally slow. However, progress is steady and it does, generally, get better. Of course there are always people that want it to be everything.
Well (Score:3, Insightful)
It's pretty obvious; I don't think that even the ZFS developers will deny it. They'll just say "it's a layering that was worth breaking".
Re: (Score:2)
I think the same issue is hurting Reiser4... (Score:5, Interesting)
See Reiser4 Pseudo Files [namesys.com] as one example.
I can understand that in certain cases "layering violations" are bad, but Linux kernel developers don't even seem to be willing to experiment or think outside the box at all.
Both sides have valid arguments... I don't think there is any easy solution, but it would be nice to see more forward thinking in the community.
Re: (Score:3, Insightful)
Such comments are just plain weird. You people seem to think everybody is a genius but the linux kernel devs. They are the ones who can't think otherwise, they have the fault of following rigid rules, they are to be blamed that wonderful innovations don't follow the rules, they should think outside of the box and the rest don't even b
Thank God (Score:3, Funny)
Thank God for that. C++ is an abomination. It's not good at OO, it's not strictly procedural. Hell, it's not even clean.
They use an interface that literally emulates an ancient teletype.
Hey! Don't talk about GNOME like that!
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
There was talk just last week again about taking another crack at getting it included in the Kernel.
He's right... (Score:2)
But when I look at Linux as a viable desktop alternative for the non-compsci crowd I tend to cringe. The patchwork that can make Linux so flexible, that *really* puts *you* in charge, is the exact thing that makes Linux so unfriendly. Most people don't want tonnes of choice, not because they're stupid, but because they don't want to spend a lot of time fussing with their computer.
Blah (Score:2)
They don't strip away your choices, but they sure dumb them down so that, unless you really want to, you are not aware of having to make any decision.
Linux is ready for the desktop - and it is already in many desktops.
Many people don't use it not because it's not ready, but because they don't know how to burn a CD, how to boot from it, or what "installing an OS" means, and because they are afraid. Not because Linux "is not ready" and any of that nonsense.
An example: speeding up the boot process (Score:2)
Re:An example: speeding up the boot process (Score:4, Insightful)
Having a coherent design is what allows people to reason about the system as a whole. Breach the design, and suddenly nobody can say anything about anything without tracking down and understanding all of the code involved. Commercial companies do this all the time when playing catch-up with rivals, because they have to retain their customers at all costs, but they suffer terribly for it in maintenance costs and stability. There's no reason in this case for Linux to take the fast, self-destructive route. Linux can wait for a coherent solution, even if it is years coming.
It's easier to handle layers mentally (Score:5, Insightful)
Layering is what keeps things manageable. Once you start getting your software tentacles into several layers you make a mess of things for both yourself and others. It's a tradeoff - complexity/speed vs. simplicity/maintainability/interoperability.
That's fine (Score:5, Insightful)
There's nothing wrong with having a layered design philosophy as it can help people decide what their product needs to do, and what it needs to talk to. For example if I am designing an application that works over TCP/IP, I really don't need to worry about anything under layer 4 or 5. However it shouldn't be this rigid thing that each layer must remain separate, and anything that combines them is bad. I don't need to, and shouldn't, take the idea that my app can't do anything that would technically be Layer 6 itself. Likewise in other situations I might find that TCP just doesn't work and I need to use UDP instead, but still have a session which I handle in my own code (games often do this).
Had we stuck to the OSI model as a maximum, rather than a guiding principle, with the Internet, it probably wouldn't have scaled to the point we have now.
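The "handle the session yourself over UDP" pattern mentioned above can be sketched without real sockets (the lossy channel is simulated here; everything in this snippet is invented for illustration):

```python
# Game-style session handling over an unreliable transport: the sender
# tags each state update with a sequence number, and the receiver keeps
# only the newest update it has seen, ignoring loss and reordering.
import random

def unreliable_channel(packets, drop=0.3, seed=1):
    """Simulate a UDP-like channel: drops some packets, reorders the rest."""
    rng = random.Random(seed)
    survivors = [p for p in packets if rng.random() > drop]
    rng.shuffle(survivors)
    return survivors

def send_states(states):
    # Session-layer work done in application code: attach sequence numbers.
    return list(enumerate(states))

def receive_latest(packets):
    # Stale or duplicate packets are simply ignored.
    latest_seq, latest = -1, None
    for seq, state in packets:
        if seq > latest_seq:
            latest_seq, latest = seq, state
    return latest

sent = send_states(["tick0", "tick1", "tick2", "tick3"])
got = receive_latest(unreliable_channel(sent))
# 'got' is the newest game state that survived the lossy, reordered channel
```

This is the layering tradeoff in miniature: TCP would give ordering and reliability for free, but a game that only cares about the latest state is better served by doing this "layer 5" work itself.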
Re:That's fine (Score:4, Insightful)
Just to provide some context, the OSI initiative was an attempt by the UN ITU and other bodies to create an ultimate convergence network capable of adequately handling data and voice across the same physical links. Many of the layers in the OSI protocol diagram (such as the data link layer) are designed to merge circuit-switched and packet-switched paradigms. The idea was that if you can provide the flexibility to create virtual circuits for voice traffic and still handle packets with the remaining bandwidth, you would not need separate network access points for your internet and voice traffic. Many of the OSI protocols (such as H.323) assume that such virtual circuits are available, which is why they are so cumbersome over TCP/IP.
I personally think that the OSI board designed the wrong kind of network for the wrong kind of problems. It is better to have a TCP/IP model, perhaps multiplexed with voice over ATM than to have intimate integration between such fundamentally different services. I also think that if people are going to teach the OSI model, they need to also teach the OSI design goals and those protocols which are still based on it: X.400, X.500 (and LDAP, which is basically X.500 over TCP/IP), X.509 (and hence SSL), H.323, T.120, and ASN.1.
Most of the time, when people start getting experience with these protocols, they run screaming from anything OSI.
Linux does not think (Score:5, Insightful)
That said, ZFS is one of the coolest things to happen to your files in a long time. The current disk block device model has been basically the same since the beginning of computing; it is ancient and actually quite stupid. Over decades, layers keep getting added to it to make it more robust, but really it's a monstrosity. Partitions are dumb, LVM is dumb, disk block RAID is dumb, monolithic filesystems are dumb. All the current linux filesystems should be thrown out.
I don't want to care how big my partitions are, what level parity protection my disks have, or any of that junk. I want to add or remove storage hardware whenever I want, and I want my files bit-exact, and I want to choose at will for each file what the speed vs. protection from hardware failure is. Why shouldn't one file be mirrored, the next be striped, and the next have parity or double parity protection? Why can't very, very important files have two or three mirrors?
Judging from the current status of ZFS, however, I think this could be quickly built under GPL 2+ by one or two determined people, and it would involve gutting the linux file systems.
It sounds cool, but I think I like the layers more (Score:3, Interesting)
ZFS seems to want to take over the entire disk subsystem. Why? Is there a reason why it needs its own snapshot capabilities, instead of just using LVM?
These sorts of things always smell fishy to me, due to a feeling that once you start using it, it locks you in more and more until you're doing it all in this new wonderful way that's incompatible with everything else. Even though it's open source, it's still inconvenient.
This approach reminds me a lot of DJB's software: If you try to get djbdns you'll be also strongly suggested to use daemontools as well. The resulting system is rather unlike anything else, and a reason why many people avoid DJB's software.
Re:It sounds cool, but I think I like the layers m (Score:5, Informative)
The problem with a "traditional" layered model is that the file system has to assume that the underlying storage device is a single consistent unit of storage, where a single write either succeeds, or it fails (in which case the data you wrote may or may not have been written). This all sounds very good and file systems like ext2 are written based on this assumption.
However, if the underlying storage system is RAID5, and there is a power loss during the write, the entire stripe can become corrupt (read the Wikipedia article [wikipedia.org] on the subject for more information). The file system can't solve this problem because it has no knowledge about the underlying storage structure.
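The RAID5 "write hole" described above can be demonstrated with a toy stripe (two data blocks plus XOR parity; all values here are invented for illustration, and real RAID5 rotates parity across disks):

```python
# Toy RAID5 stripe: parity is the XOR of the data blocks. A "power loss"
# between updating a data block and updating its parity leaves the stripe
# inconsistent, and a layered filesystem above has no way to detect it.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

d0, d1 = b"\xaa" * 4, b"\x0f" * 4
parity = xor(d0, d1)
assert xor(d0, d1) == parity      # stripe starts out consistent

# Update d0, but "lose power" before the matching parity write lands:
d0 = b"\x55" * 4
assert xor(d0, d1) != parity      # write hole: parity is now stale

# If d1's disk later dies, reconstruction from d0 and the stale parity
# silently yields wrong data for d1:
reconstructed_d1 = xor(d0, parity)
assert reconstructed_d1 != b"\x0f" * 4
```

Because ZFS owns both the filesystem and the redundancy layer, it can make the data write and the parity/checksum update a single atomic transaction instead.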
ZFS solves this problem in two ways, both of which require the storage model to be part of the filesystem:
holistic vs mess (Score:2, Interesting)
UNIX and Linux have been careful about avoiding simplistic designs. ZFS is a simple, obvious answer to a problem: just pack all the functionality into one big codeball and start hacking. Microsoft does a lot o
Anyone read what Andrew Morton actually said? (Score:4, Insightful)
the things in there (without doing it all in the fs!) I don't think we can do all of it." http://lkml.org/lkml/2006/6/9/409 [lkml.org]
It sounds like his main point was pointing out problems with the current file system, rather than saying ZFS is bad. I bet he simply thinks they should try to implement a much better file system than ext3 without breaking the current layering scheme. I don't see why this is so bad. Why not try it, and if it fails miserably, ZFS is already here.
I think the author of the article took everything out of context and was just looking for some ammo against Linux. His blog post sucked. He just says the same crap that everyone always says. I'm not saying there are no problems, but I don't see how any of the problems relate to Andrew Morton saying the Linux file systems need to be upgraded/replaced.
Revolution and evolution (Score:3, Insightful)
Nothing stops an "authoritative, top-down organization" from taking all the open-source work done on Linux, and applying its own methodology to driving it forward; if that's more effective than what everyone else in the Linux community is doing, users will be more interested in adopting what they do with it (and, heck, once the transition occurs, the less-centralized portions of the community will probably follow along and start working on the "Neo-Linux" thus produced.)
It's true that revolutionary, rather than evolutionary, change is probably best driven by a narrow, committed group with a shared vision and the skills to realize it, rather than by a disorganized community. But there is no barrier to that within Linux; and between the occasional revolutionary changes, the evolutionary changes that the community is very good at will still remain important. With open source, you don't have to choose: you can have a top-down narrow group working on revolutionary changes (you can have many of them working on different competing visions of revolutionary changes, which, given the risk involved in revolutionary change, is a good thing), all while the community at large continues plugging away on evolutionary changes to the base system; and once one of the revolutionary variants attracts attention, the community begins working on evolutionary improvements to that, too.
ZFS definitely plays outside of normal layers (Score:4, Interesting)
On one hand, this gives it some serious advantages when run on Solaris 10. But it also makes it difficult to port. I wonder if that is partially responsible for delaying OS X Leopard?
Summary Blatant Lie, Encourages Flame War (Score:3, Informative)
The direct quote is "I've long seen the Linux community's inability to design, plan, and act in a holistic manner as its greatest weakness."
You can see the meaning has been completely changed in the summary from one of positive criticism to one of arrogant condemnation.
Through this change, we can see the poster's true feelings, feelings that are shared by many in the Linux community: to respond immaturely and get all bent out of shape if somebody builds anything that doesn't follow the "Linux philosophy".
The truth: both Linux in general and ZFS are amazing and powerful tools. One of the best philosophies I've encountered is "use the right tool for the job".
Nobody is forcing Linux devs to port ZFS, or even use, or even think about it. The only reason this is an issue, is because many in the Linux community realize how powerful ZFS is, and they're subconsciously pissed off that they can't have it. So they respond like a 3rd grade bully by attacking it in a self defeating attempt to minimize its importance.
Re: (Score:3, Interesting)
In need of an in-house Guru (Score:3, Insightful)
I even enjoy spending time tweaking my desktop computer, from back in the days when memory came in 16k chips, IRQs had to be tediously managed, and squeezing every drop out of 640k was fun. But try as I might I have yet to get a stable, visually appealing, or useful version of linux on any of my previous 3 computers. Why? Because I can't even get a minimally functional system running, and give up before I get to the tweaking stage.
Major problems I encountered, each of which I spent more than half an hour on: picking a distro, much harder than you think for the non-initiate. KDE vs Gnome? Utterly crappy (i.e. Mac 6) video support without special do-it-get-it-compile-it-yourself drivers. Can't install video drivers, I didn't install gcc (silly me). Can't install video drivers, I'm missing some dependencies. Can't install video drivers, I didn't install the source code for the kernel (silly me). Multiple conflicting versions of drivers and conflicting advice about which one to use. Multiple conflicting instructions on how to install said video drivers. Video driver installer has reams of text output, some of which are error messages. Based on more advice, apparently these error messages may or may not be normal and may or may not be why I never got good video output. My sound card stopped working. I still don't know why.
Valuing my time at a paltry $50 an hour, I could have easily bought a newer better system with WinXP on it and then taken my wife out to dinner with the remainder.
If anyone can recommend a distro that will run, out of the box, on my Dell e1505 with an ATI x1400 graphics card and Creative Audigy sound card, then I promise you I will excitedly hunt it down and install it. I really do want to switch to linux; the visuals I've seen other users have are incredible.
Unfortunately, the fact that I have to ask such a question really shows how linux in general is completely unprepared for the desktop market. Prove me wrong and recommend a distro.
PS - please, no berating, calling-of-noob'ing, or general fun making at my expense. I really honestly do want help, and Linux people have tried to help me in these ways before. (they haven't proven helpful yet)
Has someone actually read about or used it ??! (Score:5, Informative)
Most of all, I am astonished that almost everyone talks about 'virtualisation': VMs, QEMU, Xen.
When it comes to filesystems, though, suddenly many seem to want to do everything on their own, on physical platters: partition, volumes/RAID, format. ZFS is a virtual filesystem, where none of that is physically needed. There is a nice demo at http://www.opensolaris.org/os/community/zfs/demos
Of course, the filesystem should be a black box, an object, instead of the user having to do low-level work. ZFS provides this, and, more relevantly, it of course needs to cross layers to do so.
Snapshots ought to be available easily, at any moment in time, without taking much space. ZFS does so, by only storing the changes and sharing the unmodified data. If you want to do so, you need an abstraction of the hardware. That is, crossing layers. Not to mention writeable snapshots.
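The "only store the changes, share the unmodified data" behavior can be sketched with a toy copy-on-write block map (a deliberately simplified model; real ZFS snapshots work on trees of block pointers, not Python dicts):

```python
# Toy copy-on-write snapshots: a snapshot is just a copy of the block
# *map*. Unmodified blocks stay shared between snapshot and live view,
# and writes after the snapshot allocate fresh blocks instead of
# overwriting old ones.
blocks = {}        # block id -> data (the shared "pool")
next_id = 0

def alloc(data):
    global next_id
    blocks[next_id] = data
    next_id += 1
    return next_id - 1

live = {"a.txt": alloc(b"v1"), "b.txt": alloc(b"old")}
snap = dict(live)                # snapshot: copy the map, not the data

live["b.txt"] = alloc(b"new")    # CoW: the write goes to a fresh block

assert blocks[snap["b.txt"]] == b"old"   # snapshot still sees old data
assert blocks[live["b.txt"]] == b"new"   # live view sees the new write
assert snap["a.txt"] == live["a.txt"]    # unmodified block stays shared
```

Taking a snapshot is therefore nearly free in both time and space; cost only accrues as the live data diverges from it.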
Adding new drives without partitioning, slicing, or formatting: just add them to the existing pool, with striping adapted automagically. This needs a cross-layer interface, right?
The transactional filesystem guarantees uncorrupted data at power failures and OS crashes. If you do this across a pool of physical platters, you need operations across layers.
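To make the transactional guarantee concrete: this kind of filesystem typically never overwrites live data in place. Modified blocks go to fresh locations, and the commit is a single atomic update of a root pointer, so a crash at any point leaves the old, consistent state intact. Here is a toy Python sketch of that idea (names and structure are mine, nothing ZFS-specific):

```python
# Copy-on-write "transaction": modified data is written to new blocks;
# the commit is one atomic swap of the root pointer.  A crash before the
# swap leaves the old root, which still describes fully consistent data.

class CowStore:
    def __init__(self, data):
        self.blocks = {i: b for i, b in enumerate(data)}  # block id -> bytes
        self.root = list(self.blocks)                     # committed block ids
        self.next_id = len(self.blocks)

    def transact(self, index, new_bytes):
        self.blocks[self.next_id] = new_bytes  # write to a fresh block
        new_root = list(self.root)
        new_root[index] = self.next_id
        self.next_id += 1
        # ...a crash here would leave self.root (and its blocks) untouched...
        self.root = new_root                   # atomic commit

    def read(self):
        return [self.blocks[i] for i in self.root]

store = CowStore([b"aaa", b"bbb"])
store.transact(1, b"BBB")
print(store.read())  # [b'aaa', b'BBB']
```

Doing this across a whole pool of disks is exactly why the transaction machinery has to see past the traditional volume-manager boundary.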
There is an interesting blog post on the usage of ZFS for home users. It contains some good arguments for why ZFS is useful for Linux's desktop stride. You can find it here: http://uadmin.blogspot.com/2006/05/why-zfs-for-ho
Last but not least, the online checking of all your data ('scrubbing' and 'resilvering') is a valuable feature for Linux (and the home user) as well.
To me it looks like, as of today, just about everyone likes the features of ZFS. But now that it requires breaking some old habits, suddenly we resist change and would rather stick to older concepts.
As if GPLv2 vs GPLv3 was not enough of a threat to Linux, now we unashamedly permit a new-from-the-bottom-up filesystem to overtake us as well ?
Re: (Score:2)
Re:What's ZFS? (Score:5, Informative)
ZFS is a file system developed by Sun over the past several years. But the important thing is, in this context, that the ZFS design philosophy (never mind the actual design, which isn't what this discussion is about) differs from that of ordinary file system design. Most file systems make strong assumptions about reliability of the underlying block storage facility: there's some gizmo down there, whether it be a disk (for itsy-bitsy systems), a RAID set (for not so bitsy systems), or a SAN, that reliably stores and retrieves blocks with reasonable performance. ZFS doesn't do this. It manages many details of the storage layers -- it does RAID its own way (to get around problems that conventional RAID doesn't solve), and does volume management itself as well.
From the point of view of a UNIX/Linux file system person, this seems very weird. However, these ideas are not really new or revolutionary (there are new things in ZFS, but this philosophy isn't one of them). It pretty much describes how network storage vendors (NetApp, EMC, etc) have been building things all along.
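One concrete payoff of not trusting the block layer is self-healing reads: keep a checksum alongside every block pointer, and when a read fails verification, fetch another copy and repair the bad one. A toy Python sketch of the concept (my own hypothetical structure, not ZFS's actual on-disk format):

```python
import hashlib

def checksum(data):
    return hashlib.sha256(data).hexdigest()

class MirroredStore:
    """Two 'disks'; every read is verified against a stored checksum."""
    def __init__(self):
        self.disks = [{}, {}]
        self.sums = {}

    def write(self, key, data):
        for d in self.disks:
            d[key] = data
        self.sums[key] = checksum(data)

    def read(self, key):
        for d in self.disks:
            if checksum(d[key]) == self.sums[key]:
                for other in self.disks:
                    other[key] = d[key]   # heal any stale copy
                return d[key]
        raise IOError("all copies corrupt")

store = MirroredStore()
store.write("blk", b"hello")
store.disks[0]["blk"] = b"hellp"          # simulate silent bit rot on disk 0
assert store.read("blk") == b"hello"      # bad copy detected, good one used
assert store.disks[0]["blk"] == b"hello"  # and the bad copy repaired
```

A conventional RAID layer that only sees opaque blocks can't do this, because it has no idea which copy is the correct one; that's the layering argument in miniature.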
Re: (Score:2, Informative)
It has some really nice features that are either not in Linux filesystems or not well implemented in Linux filesystems. It's supported by Solaris, FreeBSD, OSX, and possibly some other operating systems, so it'd be handy if it also worked natively in Linux. It could be like FAT32 for people who need to share data between OSes and don't need Windows. Except unlike FAT, ZFS is actually well designed and has "modern" features.
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
You can run RHEL3 binaries on RHEL4, however. And you can happily run Linux 1.0 binaries on the latest Linux development snapshot. That's because Linux DOES have a stable ABI: the syscall interface. That's the REAL ABI the Linux kernel has to support, and it's the one that is actually guaranteed to be stable. What you think of as an ABI is not an ABI; it's an INTERNAL ABI. Drivers are not "software built on top of the kernel", they're plugins. And Linux developers do not
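The point about the syscall interface being the real contract can be illustrated directly: once a syscall number is assigned for an architecture, it never changes, which is why old binaries keep working. A small sketch, assuming Linux on x86-64 or aarch64 (the numbers and the fallback are mine):

```python
import ctypes
import os
import platform

# getpid() invoked straight through the kernel's syscall interface,
# bypassing libc's wrapper.  Syscall numbers are per-architecture
# constants: 39 on x86-64, 172 on aarch64.
_SYS_GETPID = {"x86_64": 39, "aarch64": 172}

def raw_getpid():
    n = _SYS_GETPID.get(platform.machine())
    if platform.system() != "Linux" or n is None:
        return os.getpid()  # sketch only: fall back off-Linux
    return ctypes.CDLL(None, use_errno=True).syscall(n)

print(raw_getpid() == os.getpid())  # True: same answer via either route
```

Internal kernel interfaces (what driver authors see) carry no such guarantee, which is exactly the distinction the parent is drawing.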
Re: (Score:2)
fine piece of A/C trolling, but I'll shoot a few of your points down anyway
1/ You DON'T have to upgrade your kernel if you don't want to. For example, SUSE back-ports bug fixes into the kernel release that each version of SUSE came with; this keeps updates quite safe. You CAN download a later version if you want, or even build your own. Debian is even more conservative.
2/ linux has goals/direction, but linux is more than a sum of its parts, so you might have to narrow your focus on the aspects of it which ar
Re:Linux isn't successful on the desktop because (Score:5, Insightful)
I can sit down in front of any computer and begin to figure it out. I wasn't taught Windows; I learned about Windows from Windows. I learned about OS X from OS X. And I figured out how to make a custom KDE setup from KDE.
You want to know what shortcomings I find in them all? They are tied to one group, one development process. I want an OS that has the ease of use of OS X, the multi-platform binaries of Java, and the remote windowing of X. I want to carry my home directory on an encrypted thumb drive and load up my files whether the OS is OS X, Linux, Windows, Solaris, Plan 9, or whatever else the future may bring.
we have the knowledge and technology to do that today.
Re: (Score:2)
Re:Linux isn't successful on the desktop because (Score:5, Insightful)
Just because your grandma is a little slow (okay, A LOT slow) does not mean all of them are.
My grandmother WAS sat in front of an Ubuntu box for the first time, and after 5 minutes, she asked me why her windows PC did not have Desktop switching, as it only makes sense, rather than constantly minimizing countless windows. Since she already has Firefox on her PC, there was no great hunt for the Big Blue "E" aka "the internet", and after a short explanation about how she, as a user, has her own little piece of the computer called a HOME FOLDER, and can save all her stuff there, she was set.
I am so tired of this myth that only people with a Mensa I.Q. are capable of understanding how to use a non-Windows-based system. Granted, she won't be editing config files or writing code, but how many people outside the IT industry do that on a regular basis?
Mod me insightful (or fraking obvious, take your pick)
Re:Linux isn't successful on the desktop because (Score:4, Insightful)
To address the parent:
1. Fonts are not something I even notice a difference in. I can't imagine anyone making a decision on this basis.
2. Linux is now just as easy to use as Windows for the average user. Many devices will be supported without installation of special drivers, and in many respects this experience is easier than windows. For example, my GPS device plugs straight in and works. To use it under Windows I have to keep installing a driver. Not just once but every time I use it. I don't know why. I don't know how to fix it on Windows.
3. Graphics issues - Desktops like Suse and Ubuntu are well integrated with consistent styles. While there is a broader range of layouts than with Windows, this is not a barrier to adoption.
4. Lack of help. I don't know of any software which has effective help, be it Windows or Linux. Linux has man pages, of course, but that's too technical. I agree that documentation could be better, but popular applications are generally easy to use without detailed help. The lack of local help is not a big factor, and is mitigated by good online resources such as FAQs and mailing lists.
5. This last one is odd. You want a "bundle of software that fits my needs". Linux may have been inspired by a philosophy, but there is no suggestion that users must share it. The fact is that under Linux you have access to a huge number of applications out of the box. Under Windows you will need to purchase software one piece at a time. I would rather just be able to download a program automatically.
None of these is the real reason why Linux is not popular on the desktop. One real reason is gaming support, one of the primary reasons many of my associates say they still keep Windows partitions. If only I could play CS on Linux....
Re: (Score:2)
2. Sit someone that's never used Windows in front of a Windows box. Same deal. Just because Linux has a lot of features that you don't immediately know how to use does not mean those features are a liability.
3. Flickery redraw of desktops? Have you even seen Linux? I honestly have no idea what you're referring to. My desktop is of the 3D
Re: (Score:3, Informative)
Of course I don't agree.
I'm doing a long-term comparison test between Fedora Core 6 and my Knoppix remaster, [geocities.com], both installed on the same machine, an HP Pavilion 8250, maxed out on memory, with a dual hard drive setup: one 2 GB drive for MSDOS to run my loadlin menus and for GRUB in the MBR, and the main hard drive, a 160 GB, for both Linux installations to use.
My Knoppix remaster, Rapidweather Remaster of Knoppix Linux runs from a "tohd" partition, with a real
Re: (Score:3, Interesting)
Maybe. But just now I'm writing this on a Linux desktop with a Windows XP laptop right next to me. I really can't see any significant difference. Maybe I lack good taste or something, but I can say that nobody who saw my Linux desktop (neither this one at home nor the one at work, which is basically the same) has said "oh, those fonts are so ugly!"
"2. Ease of use. Nobody has sat first time users in front of a linux desktop and watch them puzzle over what tho
Re:Hey! (Score:5, Interesting)
But the thing to do is fix our broken implementation of layering and not be fooled into thinking that layers are bad. What is bad is exactly as the author here claims: it is bad to have no powerful capability to cross layer boundaries so that applications see a simple, powerful model instead of the current situation, where one's face is constantly rubbed in the minutiae of layering administrivia. ZFS actually has layering; it just bypasses some traditional Unix subsystems and takes care of the functionality itself. But it is wrong to conclude that this must therefore be the optimal approach just because it improves on the mess that preceded it. If ZFS's internal interfaces are worth using, then they are worth using as core interfaces, not ZFS-only interfaces. Translated into Linux terms, the implication is that it is high time to get busy and rectify some of the serious deficiencies in our storage model. Not by mashing all the layers together, but by teaching them how to get along more efficiently and powerfully, and not be so layered that important things don't even work.
Note: perhaps the biggest design distinction between Linux and other Unixen is that, internally Linux is all just one big flat function space where anything can call anything else and share any data. This is said to be a reason why Linux is more efficient than, say, the Mach kernel with its microkernel layering. If being all one big hairball of functions is good for memory management, vfs, scheduling and so on, then why is it not also good for volume management? I don't know the answer to this, but I do know that we have plenty of bogus layering in our storage stack that has really slowed progress in recent years and needs a good dunging out. Any nonbogus layering can stay.
Re:Hey! (Score:5, Insightful)
You don't seem to understand snapshots.
A snapshot works by creating a copy of the device, with the contents it had when the snapshot was created. If you make a snapshot of
Why would you change anything over? Snapshots are temporary. You snapshot your drive, use the snapshot to create a consistent backup (or whatever), then destroy it.
Normally you won't keep a snapshot around for long, as they're maintained by keeping copies of modified blocks, and that takes space. Unless you have enough space for fully duplicating the device you made a snapshot of, you won't be able to keep it around forever.
Re:Hey! (Score:5, Informative)
I think you'll find that it is you that doesn't understand what a snapshot could be. Take a look at ZFS, try it, and see if you think of snapshots the same way again. In ZFS, a snapshot can be promoted to a clone, which is a writeable copy of the original filesystem, sharing unmodified blocks using a copy-on-write algorithm.
This is incredibly powerful and useful. For example, a single master 'image' volume can have customizations added for specific purposes. This is useful in desktop deployment, iSCSI or NFS network boot, etc...
Would you expect a 'first class' writeable clone to have a name like '/dev/mapper/snapshotted-hda' or '/dev/hda.1'? Which one makes more sense? Why would the original have a special name when the clone is identical?
It's this kind of narrow 'snapshots are throwaway' thinking that causes artificial limitations in APIs and operating system design that serve no real purpose.
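The distinction can be made concrete: in a copy-on-write design, a snapshot is essentially a preserved root pointer, and a clone is the same thing made writable; neither copies any data up front, and unmodified content stays shared. A toy Python sketch of that model (my own hypothetical structure, not ZFS's actual implementation):

```python
class CowFS:
    """Toy copy-on-write store: snapshots and clones share unmodified data."""
    def __init__(self):
        self.files = {}                 # live tree: name -> contents

    def write(self, name, data):
        self.files[name] = data         # replaces the pointer, not the snapshot's copy

    def snapshot(self):
        return dict(self.files)         # cheap: shares the same contents objects

    def clone(self, snap):
        fs = CowFS()                    # a snapshot promoted to a writable tree
        fs.files = dict(snap)
        return fs

fs = CowFS()
fs.write("etc/motd", "base image")
golden = fs.snapshot()                  # frozen view of the master image
desktop = fs.clone(golden)              # writable clone for one deployment
desktop.write("etc/motd", "desktop build")

print(fs.files["etc/motd"])             # base image -- original untouched
print(desktop.files["etc/motd"])        # desktop build -- clone diverged
```

Under this model there is no reason the clone should get a second-class device name: it is a peer of the original, which is the parent's point.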
Re: (Score:3, Informative)
LVM has this already. CONFIG_DM_SNAPSHOT in the kernel config.
If you use LVM, then all devices you put a filesystem on are in /dev/mapper. My root is in /dev/data/root, /home is in /dev/data/home (or /dev/mapper/
Re:Hey! (Score:4, Informative)
If you say so
A snapshot works by creating a copy of the device, with the contents it had when the snapshot was created. If you make a snapshot of
Because with the incumbent volume management strategy you may not continue to use
Real complaints (Score:3, Interesting)
One legitimate complaint is the poor state of integrated RAID support in the linux LVM. Yes, th
Re: (Score:3, Insightful)
Re: (Score:2, Informative)
Re: (Score:2)
Re: (Score:3, Informative)
Most recently, it has been the package management. I have been all but forced to use the "commercial" RedHat up at work, and I still cannot believe that Redhat uses a lame package manager that requires you to "solve your own" dependencies.
They don't. Up2date resolves dependencies.
Redhat is another problem. rpm doesn't have the smarts to do anything for you. If you want any kind of 'immediate' commands, you have to 'yum' them. This isn't acceptable in a corporate environment.
Well, sure - but that's bec