Linux Business

Does Linux "Fail To Think Across Layers?" 521

John Siracusa writes a brief article at Ars Technica pointing out an exchange between Andrew Morton, a lead developer of the Linux kernel, and a ZFS developer. Morton accused ZFS of being a "rampant layering violation." Siracusa states that this attitude of refusing to think holistically ("across layers") is responsible for all of the current failings of Linux — desktop adoption, user-friendliness, consumer software, and gaming. ZFS is effective because it crosses the lines set by conventional wisdom. Siracusa ultimately believes that the ability to achieve such a break is more likely to emerge within an authoritative, top-down organization than from a grass-roots, fractious community such as Linux.

Comments:

  • by Anonymous Coward on Saturday May 05, 2007 @05:00PM (#19004535)
    Linux and other open source projects are getting a harsh lesson in what it is like to ship consumer grade software products. No more RTFM! No more 'did you submit a bug report???' No more 'this bug/problem is not our fault since we don't control such-and-such library we use.'

    Project vs Product

    Everyone is impressed with how far you've progressed when you are working on a project.

    Everyone is pissed off with how much you've left undone when you are working on a product.

    Welcome to reality, open source developers. Before long you will all be saying, "Damn, if I have to work this hard to make a consumer grade software product, I might as well be getting paid to do so."

  • Re:Merit (Score:3, Insightful)

    by pionzypher ( 886253 ) on Saturday May 05, 2007 @05:08PM (#19004581)
    Agreed, but is that not also its Achilles heel? Kernel-space drivers can take down the whole system, where userland drivers cannot.
  • Well (Score:3, Insightful)

    by diegocgteleline.es ( 653730 ) on Saturday May 05, 2007 @05:15PM (#19004633)
    It's not just Andrew Morton; it's basically every core Linux kernel hacker who has spoken on the issue.

    It's pretty obvious; I don't think that even the ZFS developers will deny it. They'll just say "it's a layering rule that was worth breaking."
  • by What Is Dot ( 792062 ) on Saturday May 05, 2007 @05:18PM (#19004655)
    I totally agree. I think the main problem with Linux based systems (Fedora, Ubuntu, etc.) is that there are so many of them. Diversity is wonderful for free speech, but in the open source community, we have 100 solutions for every 1 problem.
    The best solution would be for the Linux kernel project to say, "Open source developers can do as they please, but we here at the kernel project encourage developers to contribute to THESE specific projects: GNOME, OpenOffice, etc."
    The open source community is massive, but development will take an eternity until a majority of the community starts to support ONE software solution over its alternatives.
  • by Toby_Tyke ( 797359 ) on Saturday May 05, 2007 @05:26PM (#19004723) Journal
    How is targeting a moving sucky platform preferable to one that is open?

    The moving sucky one has ninety plus percent of the home desktop market. Linux has less than one percent, and I've never seen any credible figures suggesting otherwise. Why target a tiny niche market when you can target a huge one?

    And bear in mind, the proportion of Linux users who are serious about gaming and do not have access to a Windows machine is probably one percent. So even if you target Windows, ninety-nine percent of Linux gamers can play your games anyway.
  • Re:What's ZFS? (Score:3, Insightful)

    by jonnythan ( 79727 ) on Saturday May 05, 2007 @05:26PM (#19004727)
    I think you need to read the article you linked to, because ZFS is very very different from ReiserFS and ext3.
  • by krbvroc1 ( 725200 ) on Saturday May 05, 2007 @05:30PM (#19004759)
    Layers are easier to code, to understand, and to test. Layers/boundaries between software are your friend. To some degree that is why the Internet, based upon a layered network model (TCP on top of IP on top of Ethernet), is so diverse.

    Layering is what keeps things manageable. Once you start getting your software tentacles into several layers, you make a mess of things for both yourself and others. It's a trade-off: complexity/speed vs. simplicity/maintainability/interoperability.
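    For concreteness, a minimal sketch of what that separation buys a programmer (assuming a plain POSIX sockets environment; the hard-coded address is just an example host): the application talks to the transport layer through the socket API and never touches IP routing or Ethernet framing, which the layers underneath handle on its behalf.

        /* A TCP client written purely against the transport-layer API.
         * Everything below TCP (IP routing, ARP, Ethernet framing) is
         * invisible to this code. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <sys/socket.h>

        int main(void)
        {
            int fd = socket(AF_INET, SOCK_STREAM, 0);      /* ask for a TCP endpoint */
            if (fd < 0) { perror("socket"); return 1; }

            struct sockaddr_in addr;
            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_port   = htons(80);                   /* transport-layer detail */
            inet_pton(AF_INET, "93.184.216.34", &addr.sin_addr);  /* example address */

            if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
                perror("connect");
                close(fd);
                return 1;
            }

            const char req[] = "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n";
            write(fd, req, sizeof req - 1);                /* application-layer payload */

            char buf[512];
            ssize_t n = read(fd, buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("%s", buf); }

            close(fd);
            return 0;
        }

    Swapping the physical network or the routing setup underneath requires no change to this code, which is the manageability the parent describes.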
  • Hard to dis (Score:0, Insightful)

    by Anonymous Coward on Saturday May 05, 2007 @05:31PM (#19004773)
    It's really hard to argue with the claim that Linux is a fundamentally flawed failure of an operating system. It's a nice free tech toy, sure, but when it comes to being an accepted and realistic product, there are a great many reasons to look elsewhere.

    Efforts like Ubuntu, while admirable, are really just polishing a turd. No matter how much paint you slap on it, its basic nature will not change.

    Someone has to come up with an alternative to Linus's "work on the kernel and let everything else go to hell" development strategy. Red Hat or SUSE could have done so much, but ultimately failed once they started pandering to "the community".
  • Re:Merit (Score:5, Insightful)

    by FooAtWFU ( 699187 ) on Saturday May 05, 2007 @05:33PM (#19004801) Homepage
    Indeed. Whatever downsides layers have, they keep things sane. If you're going to make a mess of things, at least with layers you have an organized mess. There's a reason that Linux is more secure than Windows.
  • by Mad Quacker ( 3327 ) on Saturday May 05, 2007 @05:34PM (#19004815) Homepage
    Open source software gets better because new people want new features, which they then contribute. You can't blame Andrew Morton for disliking what ZFS is going to do; this is just how people work. This is why they say you can't teach an old dog new tricks.

    That said, ZFS is one of the coolest things to happen to your files in a long time. The way we use disk block devices has been basically the same since the beginning of computing; it is ancient and actually quite stupid. Over the decades, layers have kept getting added to make it more robust, but really it's a monstrosity. Partitions are dumb, LVM is dumb, disk block RAID is dumb, monolithic filesystems are dumb. All the current Linux filesystems should be thrown out.

    I don't want to care how big my partitions are, what level of parity protection my disks have, or any of that junk. I want to add or remove storage hardware whenever I want, I want my files bit-exact, and I want to choose at will, for each file, the trade-off between speed and protection from hardware failure. Why shouldn't one file be mirrored, the next be striped, and the next have parity or double-parity protection? Why can't very, very important files have two or three mirrors?

    Given the current status of ZFS, however, I think something like this could be built quickly under GPL 2+ by one or two determined people, and it would involve gutting the Linux filesystems.
  • by Cyberax ( 705495 ) on Saturday May 05, 2007 @05:35PM (#19004829)
    Yes, and the dollar is not a currency, it's a banknote.

    A representative republic is JUST A FORM OF DEMOCRATIC GOVERNMENT.
  • by peragrin ( 659227 ) on Saturday May 05, 2007 @05:37PM (#19004863)
    I have spent the last three days teaching someone how to use Windows XP when all they had used before was Windows 98. Every interface is different. Stop teaching interfaces and start teaching ideas. Stop teaching Microsoft Word, start teaching word processing. Teach spreadsheets, not Excel.

    I can sit down in front of any computer and begin to figure it out. I wasn't taught Windows; I learned about Windows from Windows. I learned about OS X from OS X. And I figured out how to make a custom KDE setup from KDE.

    You want to know what shortcomings I find in them all? They are tied to one group, one development process. I want an OS that has the ease of use of OS X, with the multi-platform binaries of Java, and the remote windowing of X. I want to carry my home directory files on an encrypted thumb drive and load up my files whether the OS is OS X, Linux, Windows, Solaris, Plan 9, or whatever else the future may bring.

    We have the knowledge and technology to do that today.
  • by howlingmadhowie ( 943150 ) on Saturday May 05, 2007 @05:42PM (#19004897)
    Linux and other open source projects are getting a harsh lesson in what it is like to ship consumer grade software products.

    um, you do know that linux has been the operating system of choice for supercomputers, webservers, special effects production, scientific computing etc. for a number of years now, don't you? because you seem to think that linux, freebsd, openbsd or whatever just suddenly turned up yesterday or something. are you also aware of the fact that a lot of people who write free and open-source software get paid good money to do so?
  • Re:Total bullshit (Score:3, Insightful)

    by Sj0 ( 472011 ) on Saturday May 05, 2007 @05:50PM (#19004997) Journal
    One thing I noticed from your post is that the Windows versions of all those games still run too. I wonder how much of the problem is changing versions of Windows, and how much is just hackish code some developers write?
  • by cyber-vandal ( 148830 ) on Saturday May 05, 2007 @05:50PM (#19004999) Homepage
    Socialism worked pretty well in the democratic western countries, that's why people aren't dying of cholera/typhoid/starvation in slums anymore.
  • by Anonymous Coward on Saturday May 05, 2007 @05:57PM (#19005067)
    "Ease of use. Nobody has sat first time users in front of a linux desktop and watch them puzzle over what those multiple desktops do, or how to switch between them.......If there is one thing I would suggest, get Ubuntu played with by ordinary grandma's so you can see how they get confused."

    Just because your grandma is a little slow (okay, A LOT slow) does not mean all of them are.

    My grandmother WAS sat in front of an Ubuntu box for the first time, and after 5 minutes she asked me why her Windows PC did not have desktop switching, as it only makes sense, rather than constantly minimizing countless windows. Since she already has Firefox on her PC, there was no great hunt for the Big Blue "E" aka "the internet", and after a short explanation about how she, as a user, has her own little piece of the computer called a HOME FOLDER and can save all her stuff there, she was set.

    I am so tired of this myth that only people with a Mensa IQ are capable of understanding how to use a non-Windows-based system. Granted, she won't be editing config files or writing code, but how many people outside the IT industry do that on a regular basis?

    Mod me insightful (or fraking obvious, take your pick)
  • That's fine (Score:5, Insightful)

    by Sycraft-fu ( 314770 ) on Saturday May 05, 2007 @05:57PM (#19005071)
    But the OSI layers are guidelines that help design things, not rigid levels that must be maintained. They are mixed up ALL the time. As a simple example, see Layer-3 switches. These are devices that simultaneously work at Layer 2 and 3 when it comes to dealing with traffic. They break down the traditional view of a separate router and switch, and they are good at what they do. There's plenty of stuff at the high end that's similar. Many things that are part of the presentation layer are actually done by the application (like encryption in SSH) and so on.

    There's nothing wrong with having a layered design philosophy, as it can help people decide what their product needs to do and what it needs to talk to. For example, if I am designing an application that works over TCP/IP, I really don't need to worry about anything under layer 4 or 5. However, it shouldn't be a rigid rule that each layer must remain separate and that anything which combines them is bad. I don't need to, and shouldn't, take the idea that my app can't do anything that would technically be layer 6 itself. Likewise, in other situations I might find that TCP just doesn't work and I need to use UDP instead, but still have a session, which I handle in my own code (games often do this).

    Had we stuck to the OSI model as a hard rule for the Internet, rather than a guiding principle, it probably wouldn't have scaled to the point it has now.
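    As a hedged illustration of that last point (a sketch only, assuming plain POSIX sockets; the port and session ID are made up): a game-style client can use UDP for transport while keeping its own tiny "session" header in application code, exactly the kind of deliberate layer-mixing described above.

        /* "Session over UDP": transport is layer 4 UDP, but the session
         * bookkeeping that would nominally sit in a higher layer is a small
         * application-defined header. */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <sys/socket.h>

        struct pkt_hdr {
            uint32_t session_id;   /* identifies the logical session */
            uint32_t seq;          /* lets the receiver drop stale or duplicated packets */
        };

        int main(void)
        {
            int fd = socket(AF_INET, SOCK_DGRAM, 0);
            if (fd < 0) { perror("socket"); return 1; }

            struct sockaddr_in peer;
            memset(&peer, 0, sizeof peer);
            peer.sin_family = AF_INET;
            peer.sin_port   = htons(9000);                 /* hypothetical game port */
            inet_pton(AF_INET, "127.0.0.1", &peer.sin_addr);

            char buf[sizeof(struct pkt_hdr) + 32];
            struct pkt_hdr hdr;
            hdr.session_id = htonl(0xCAFE);                /* made-up session ID */

            for (uint32_t i = 0; i < 3; i++) {             /* a few state updates */
                hdr.seq = htonl(i);
                memcpy(buf, &hdr, sizeof hdr);
                int len = snprintf(buf + sizeof hdr, 32, "player update %u", (unsigned)i);
                sendto(fd, buf, sizeof hdr + (size_t)len, 0,
                       (struct sockaddr *)&peer, sizeof peer);
            }

            close(fd);
            return 0;
        }

    Reliability, ordering, and reconnection policy stay under the game's control instead of TCP's, which is why the strict layer boundaries are treated as guidance rather than law.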
  • by Kjella ( 173770 ) on Saturday May 05, 2007 @06:02PM (#19005117) Homepage
    Which is why America is a Representative Republic and NOT a Democracy.
    With Democracies, you end up with the tyranny of the majority, regardless of whether the minority opinion is the correct one. Under a Republic form, a large enough minority can plug up the works and force negotiation with the majority before a final solution is agreed upon.


    Says the only two-party state I know of. Whichever party has 52% this term screws over the other 48% without flinching. If you wanted negotiation, you should look to Europe where we have many smaller parties, shifting coalitions trying to match the will of the people on a case by case basis, not just heads or tails every five years or so.

    The Linux development community needs representative decision making; there are too many voters and hence almost no direction or real progress towards a cohesive goal. Nothing will change without true leadership and, sadly, accountability.

    You can't measure progress without accountability for failure.


    You can't rule volunteers by force or majority. 99% of the Linux developers may agree, but I can refuse to go in that direction, or even work to pull it in a different direction. You can't be held accountable for people unless you can control what they're doing.

    Socialism has not worked in ANY form, and it won't work for Linux either.

    Socialism hasn't worked for any finite resource. If in a socialist country everybody refused to grow food, people would starve. If everyone refused to develop Linux, it would simply come to a halt in its current condition, but everyone would still have "as much" Linux as they want. That was the downfall of communism: it had to force people to keep the wheels turning, but Linux doesn't.

    If by socialism you mean that "everyone contributes what they want", then it seems to me it's working quite nicely already. Depending on the metrics you use, maybe not in the lead, but certainly better than many other OSs from the 70s, 80s and 90s. So if you say it can't work and I see that it does work, I tend to go with reality.
  • by Anonymous Coward on Saturday May 05, 2007 @06:07PM (#19005171)
    "I mean, although ZFS is a rampant layering violation and we can do a lot of
      the things in there (without doing it all in the fs!) I don't think we can
      do all of it." http://lkml.org/lkml/2006/6/9/409 [lkml.org]

    It sounds like his main point was pointing out problems with the current file system, rather than saying ZFS is bad. I bet he simply thinks they should try to implement a much better file system than ext3 without breaking the current layering scheme. I don't see why this is so bad. Why not try it, and if it fails miserably, ZFS is already here.

    I think the author of the article took everything out of context and was just looking for some ammo against Linux. His blog post sucked. He just says the same crap that everyone always says. I'm not saying there are no problems, but I don't see how any of the problems relate to Andrew Morton saying the Linux file systems need to be upgraded/replaced.
  • by Hairy1 ( 180056 ) on Saturday May 05, 2007 @06:08PM (#19005195) Homepage
    The real reason Linux isn't popular has more to do with marketing than technical issues. People have always pointed to one feature or another where Linux is weak and said that it won't be viable until that feature is there. The simple fact is that Linux is now ready for the desktop technically. It is the marketing which Linux, and more generally open source, needs to perfect.

    To address the parent:

    1. Fonts are not something I even notice a difference in. I can't imagine anyone making a decision on this basis.

    2. Linux is now just as easy to use as Windows for the average user. Many devices will be supported without installing special drivers, and in many respects this experience is easier than Windows. For example, my GPS device plugs straight in and works. To use it under Windows I have to keep installing a driver: not just once, but every time I use it. I don't know why, and I don't know how to fix it on Windows.

    3. Graphics issues. Desktops like SUSE and Ubuntu are well integrated, with consistent styles. While there is a broader range of layouts than with Windows, this is not a barrier to adoption.

    4. Lack of help. I don't know of any software which has effective help, be that Windows or Linux. Linux has man pages of course, but that's too technical. I agree that documentation could be better, but popular applications are generally easy to use without detailed help. The lack of local help is not a big factor, and it is mitigated by good online resources such as FAQs and mailing lists.

    5. This last one is odd. You want a "bundle of software that fits my needs". Linux may have been inspired by a philosophy, but there is no suggestion that users must share it. The fact is that under Linux you have access to a huge number of applications out of the box. Under Windows you will need to purchase software one piece at a time. I would rather just be able to download a program automatically.

    None of these is the real reason why Linux is not popular on the desktop. One real reason is gaming support - one of the primary reasons many of my associates say they still have Windows partitions. If only I could play CS on Linux....
  • by DragonWriter ( 970822 ) on Saturday May 05, 2007 @06:28PM (#19005407)

    Siracusa ultimately believes that the ability to achieve such a break is more likely to emerge within an authoritative, top-down organization than from a grass-roots, fractious community such as Linux.


    Nothing stops an "authoritative, top-down organization" from taking all the open-source work done on Linux, and applying its own methodology to driving it forward; if that's more effective than what everyone else in the Linux community is doing, users will be more interested in adopting what they do with it (and, heck, once the transition occurs, the less-centralized portions of the community will probably follow along and start working on the "Neo-Linux" thus produced.)

    It's true that revolutionary, rather than evolutionary, change is probably best driven by a narrow, committed group with a shared vision and the skills to realize it, rather than by a disorganized community. But there is no barrier to that within Linux; and between the occasional revolutionary changes, the evolutionary changes that the community is very good at will still remain important. With open source, you don't have to choose: you can have a top-down narrow group working on revolutionary changes (you can have many of them working on different competing visions of revolutionary changes, which, given the risk involved in revolutionary change, is a good thing), all while the community at large continues plugging away on evolutionary changes to the base system—and once one of the revolutionary variants attracts attention, it can begin working on evolutionary improvements to that, too.
  • Re:Well, no. (Score:5, Insightful)

    by KagatoLNX ( 141673 ) <kagato@@@souja...net> on Saturday May 05, 2007 @06:49PM (#19005625) Homepage
    The word "refactoring" applies here.

    When the layers don't meet your needs, you have two options.

    You can either violate the layering or you can get the layers refactored.

    In Linux, we do not accept the first. Why? Because it generates bad software...period.

    Writing drivers for Mac OS X is a pain... because of the mingling between Mach, BSD, and everything else they did to make it work.

    Drivers for Windows have always been a source of instability because there isn't good layering there either. Try to write database code on Windows: the lack of coherent design presents dozens of incompatible interfaces with different features.

    You can do what these people do. You can make a "product" that "works" without regard to design. Eventually, you end up doing a complete rewrite. The fact of the matter is that Linus puts design before function, and maintainability before progress. As such, we move slowly and we refactor. However, progress is steady and things do, generally, get better. Of course there are always people who want it to be everything.
  • Re:Hey! (Score:5, Insightful)

    by DaleGlass ( 1068434 ) on Saturday May 05, 2007 @07:17PM (#19005891) Homepage

    Why can't I say "/dev/hda, you are now snapshotted, shazam"? No, instead I have to change my system over to use /dev/mapper/snapshotted-hda


    You don't seem to understand snapshots.

    A snapshot works by creating a copy of the device with the contents it had when the snapshot was created. If you make a snapshot of /dev/hda at 12:15, then you'll get /dev/mapper/snapshotted-hda as it was at 12:15, while /dev/hda itself can still be modified.

    Why would you change anything over? Snapshots are temporary. You snapshot your drive, use the snapshot to create a consistent backup (or whatever), then destroy it.

    Normally you won't keep a snapshot around for long, as they're maintained by keeping copies of modified blocks, and that takes space. Unless you have enough space for fully duplicating the device you made a snapshot of, you won't be able to keep it around forever.
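    For anyone unfamiliar with the mechanism, here is a minimal sketch of the copy-on-write bookkeeping described above (a toy in-memory model for illustration, not LVM/device-mapper code): the snapshot holds only the original contents of blocks that have been overwritten since it was taken, so an unused snapshot costs almost nothing.

        /* Toy copy-on-write snapshot over an array of "blocks". */
        #include <stdio.h>
        #include <string.h>

        #define NBLOCKS    4
        #define BLOCK_SIZE 16

        static char live[NBLOCKS][BLOCK_SIZE];   /* the writable "device" */
        static char saved[NBLOCKS][BLOCK_SIZE];  /* old contents of modified blocks */
        static int  copied[NBLOCKS];             /* which blocks the snapshot holds */

        static void write_block(int i, const char *data)
        {
            if (!copied[i]) {                              /* first write since snapshot */
                memcpy(saved[i], live[i], BLOCK_SIZE);     /* preserve the old contents */
                copied[i] = 1;
            }
            strncpy(live[i], data, BLOCK_SIZE - 1);
        }

        static const char *read_snapshot(int i)
        {
            return copied[i] ? saved[i] : live[i];         /* untouched blocks read through */
        }

        int main(void)
        {
            strncpy(live[0], "orig-0", BLOCK_SIZE - 1);
            strncpy(live[1], "orig-1", BLOCK_SIZE - 1);
            /* the snapshot is "taken" here; nothing is copied yet */
            write_block(0, "new-0");                       /* triggers one copy */
            printf("live[0]=%s  snapshot[0]=%s\n", live[0], read_snapshot(0));
            printf("live[1]=%s  snapshot[1]=%s\n", live[1], read_snapshot(1));
            return 0;
        }

    The space cost grows with the number of blocks modified after the snapshot, which is exactly why, as the parent says, you do not keep one around forever unless you can afford a full duplicate.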
  • Maybe I'm wrong... (Score:2, Insightful)

    by FliesLikeABrick ( 943848 ) <ryan@u13.net> on Saturday May 05, 2007 @07:17PM (#19005893)
    Maybe my entire view of things is wrong, but isn't strict layering responsible for most of the things that set Linux apart from less efficient and less secure operating systems? Isn't layering what allows the exact same operating system to be a high-performance server or a normal workstation? I point out the latter because without layering, things like Windows can "disable" a lot of things that are all tied together, but never really remove many things completely since ... whatever.

    I guess what I'm saying/asking can be summarized: isn't layering one of those really important things to Linux that shouldn't be violated? Reading the summary, it sounds like "In order to make Linux more Windows-like, we need to violate the things that make Linux much more of a proper layered OS than Windows is."
  • by Peaker ( 72084 ) <gnupeaker@nOSPAM.yahoo.com> on Saturday May 05, 2007 @07:35PM (#19006033) Homepage
    The Windows installer also talks about partitions. It asks you to [c]reate, [r]emove, or format a partition.

    It actually offers more options with more terminology than the Ubuntu/Kubuntu installer.

    The Ubuntu installer offers you install options:
    "Simple - use free space"
    "Simple - overwrite whole disk"
    "Advanced - Setup your own partition table"

    Of course most users can choose one of the simple options. The advanced one has a nice GUI to resize partitions and basically do everything graphically.
    In Windows it's a bit more complicated than that, as explained above.

    So nice try, but even partition-wise, Ubuntu/Kubuntu have easier installers.
  • Re:Total bullshit (Score:3, Insightful)

    by Kjella ( 173770 ) on Saturday May 05, 2007 @07:40PM (#19006059) Homepage
    Linux will "support gaming" once games are supported for Linux. Linux has OpenGL, OpenAL, all the illusionary walls are market-made.

    Looking at the games I play in Windows, almost every one of them is using DirectX. Now, I am not qualified to know why but that is a fact. That means that to use OpenGL/OpenAL under Linux you either:
    a) Develop a Linux-only game
    b) Develop using your second choice on your primary platform
    c) Develop two code paths

    The first one is just not doable: if, say, the Linux market is 10% of the Windows market, then you'd have to sell the game at 10x the price to make as much money, and nobody would pay $500 for a Linux game that costs $50 on Windows. The second should scream at you "make sure you accomplish your primaries": staffing will be harder, the tools are less used and so less tested, and there's the risk you'll never get around to testing the Linux platform, making it all for naught. Finally you have the last option, but then you're not only using the second-choice tool, it has secondary priority in your project as well, which is never good because you might end up with a crappy second-rate experience which arrives late, if it doesn't get cut in a budget/schedule crunch. Yes, these are all market-made walls, but they're by no means illusory.

    Even XP needs tweaks to run Win9X games. How is targeting a moving sucky platform preferable to one that is open?

    If by moving you mean that almost ten-year-old games play with tweaks, then your idea of moving must be anything outpacing a glacier. The market is that way because the market wants it that way. People want a game, they play it, they get tired of it and move on. They're not willing to pay for support of old games, and they're not willing to pay so their kids and grandkids can play the same games they once used to. Games have always been a moving target in terms of hardware; it's not as if a stable OS would help with constantly changing hardware acceleration. In other words, you've offered no reasons why Linux is better in any way for companies looking to make a Linux game.

    In fact, there are plenty of reasons why people wouldn't buy Linux games. Either they have a Windows machine, they're not willing to pay for it because it's not free as in beer, they're not willing to pay for it because it's not free as in speech, they don't want to pollute their kernel with closed-source drivers, or whatever. And if you're looking for showcases that Linux can have great games, well, I recently looked at most of those on the top 10 free linux games [techgage.com] from September last year and I was very underwhelmed.

    Let's start at:
    #10 KMahjongg: "The game is not pretty to look at... very circa 1997, but it has the core elements you've come to expect with the game of Mahjongg in general."
    #8 Armagetron Advanced: "Who doesn't love Tron? Admit it, you do. Tron is a prime example of killer gameplay that doesn't need superb graphics to be fun."
    #6 Pingus: "The game looks great, it really does. It has a fixed resolution of 800*600 though, which I found to be way to small for this game." ...and it stops after tutorial island even though you can download custom levels.
    #5 Neverball: "You are however, able to edit the configuration file to your liking. It would have been nice to have this accomplished through in-game options though."
    #2 Frozen Bubble: "I am impressed with the game as a whole, but wish that you could adjust your resolution. It's set to 640*480, so it doesn't look at that sharp."

    There's a few that are quite decent, but seriously.... most of them look like Windows games did in the 90s. If that's the effect of having a stable, open platform I'd rather take my Windows games any day.
  • by Alwin Henseler ( 640539 ) on Saturday May 05, 2007 @07:45PM (#19006103)

    You hit the nail on the head: 'something being worked on' (project) is NOT the same as 'something ready for use' (product). Note to mods: parent deserves credit for making this point.

    However, there is a large overlap between 'project' and 'product'. In-development systems can be quite useful, and when are products ever really 'done'? Look around in your average household store. Many simple items (like paperclips) may have evolved, but their basic form is the same as the first day they were conceived. Read: the very first incarnation of a paperclip may have been crude, but no less useful than the matured, modern-day version. For complex products, try to find one where every aspect of its use, from production, marketing, distribution, and everyday use to how it is discarded/recycled, is well thought through and 'just works'. Good luck.

    Basically: things that are both complex and well engineered (in every aspect) are rare. 'Project' and 'product' are not the same thing, but each is only a label; which one applies depends on your point of view.

    Take some projects, add packaging, marketing and support, ask money for the whole, and voila: a product. See IBM, Red Hat, or SUSE for example. Whether a Linux kernel or a Perl binary shipped by them is a project or a product depends only on how you use it. If you're a developer working on it, it's your project. When you're selling DVDs that include these in bulk, the same thing becomes a product.

    I've used Linux for different purposes starting around '94/'95 (normal desktop use these days; Windows is history for me), and I can assure you: it far exceeds what one might call a 'consumer grade product'. That Linux has maybe around 1% market share among ordinary PC users has nothing to do with quality or technical limitations, but everything to do with marketing, industry inertia and historical reasons.

  • Re:Total bullshit (Score:5, Insightful)

    by strider44 ( 650833 ) on Saturday May 05, 2007 @08:46PM (#19006543)
    Why do you need to develop a Linux-only game if you use an OpenGL/OpenAL code path? OpenGL and OpenAL run fine on Windows, and if you use something like SDL or GLUT then you can even compile the exact same 3D graphics code on Windows and Linux.
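    To back that up with something concrete, a minimal sketch of a single shared code path (assuming the SDL 1.2 API current at the time; the compile flags are indicative only: "sdl-config --cflags --libs" plus -lGL on Linux, SDL plus opengl32 under MinGW on Windows):

        /* The same file builds on Windows and Linux: SDL handles the window
         * and GL context, and the rendering calls are plain OpenGL. */
        #include "SDL.h"
        #include "SDL_opengl.h"

        int main(int argc, char *argv[])
        {
            (void)argc; (void)argv;

            if (SDL_Init(SDL_INIT_VIDEO) != 0)
                return 1;

            SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
            if (SDL_SetVideoMode(640, 480, 0, SDL_OPENGL) == NULL) {  /* window + GL context */
                SDL_Quit();
                return 1;
            }
            SDL_WM_SetCaption("portable OpenGL window", NULL);

            glClearColor(0.1f, 0.2f, 0.3f, 1.0f);    /* plain OpenGL from here on */
            glClear(GL_COLOR_BUFFER_BIT);
            SDL_GL_SwapBuffers();

            SDL_Delay(2000);                          /* keep the window up briefly */
            SDL_Quit();
            return 0;
        }

    Whether a studio can afford to make OpenGL its primary path is the market question raised above, but the technical wall is low.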
  • by try_anything ( 880404 ) on Saturday May 05, 2007 @09:14PM (#19006699)
    I like your title, and I like Linux's approach. The usual approach, especially in commercial software, is to give the users everything they want, no matter how many technical assets need to be sacrificed. Throw away good design, security, and reliability just to get the latest bullet point. That isn't what the Linux community wants. They want things done right, even if it takes a few years to figure out how. Windows is a good example of saying "yes" by default to every demand and ending up with a system so complex and fragile that even gurus don't understand it. (I'm thinking of the recent Raymond Chen blog entry that hit Slashdot, in which he admitted to releasing a bogus Windows patch because he didn't understand how Windows processes shut down.)
  • by try_anything ( 880404 ) on Saturday May 05, 2007 @09:30PM (#19006797)
    When your design constrains you, you don't throw it away and proceed into anarchy. You create a new design. If the layering scheme kills performance, the fix is to create a new layering scheme that accommodates the performance improvements you want to make.

    Having a coherent design is what allows people to reason about the system as a whole. Breach the design, and suddenly nobody can say anything about anything without tracking down and understanding all of the code involved. Commercial companies do this all the time when playing catch-up with rivals, because they have to retain their customers at all costs, but they suffer terribly for it in maintenance costs and stability. There's no reason in this case for Linux to take the fast, self-destructive route. Linux can wait for a coherent solution, even if it is years coming.

  • by geekyMD ( 812672 ) on Saturday May 05, 2007 @09:45PM (#19006887)
    I'd love to use Linux as a desktop. I just need a little help.

    I even enjoy spending time tweaking my desktop computer, from back in the days when memory came in 16k chips, IRQs had to be tediously managed, and squeezing every drop out of 640k was fun. But try as I might, I have yet to get a stable, visually appealing, or useful version of Linux on any of my previous three computers. Why? Because I can't even get a minimally functional system running, and I give up before I get to the tweaking stage.

    Major problems I encountered, each of which I spent more than half an hour working on: picking a distro (much harder than you think for the non-initiate). KDE vs. Gnome? Utterly crappy (i.e. Mac 6-era) video support without special go-get-it-and-compile-it-yourself drivers. Can't install video drivers: I didn't install gcc (silly me). Can't install video drivers: I'm missing some dependencies. Can't install video drivers: I didn't install the source code for the kernel (silly me). Multiple conflicting versions of drivers and conflicting advice about which one to use. Multiple conflicting instructions on how to install said video drivers. The video driver installer has reams of text output, some of which are error messages. Based on more advice, apparently these error messages may or may not be normal and may or may not be why I never got good video output. My sound card stopped working. I still don't know why.

    Valuing my time at a paltry $50 an hour, I could have easily bought a newer better system with WinXP on it and then taken my wife out to dinner with the remainder.

    If anyone can recommend a distro that will run, out of the box, on my Dell e1505 with an ATI X1400 graphics card and a Creative Audigy sound card, then I promise you I will excitedly hunt it down and install it. I really do want to switch to Linux; the visuals I've seen other users have are incredible.

    Unfortunately, the fact that I have to ask such a question really shows how Linux in general is completely unprepared for the desktop market. Prove me wrong and recommend a distro.

    PS - please, no berating, calling-of-noob'ing, or general fun-making at my expense. I really, honestly do want help, and Linux people have tried to help me in these ways before. (They haven't proven helpful yet.)
  • by Rakishi ( 759894 ) on Sunday May 06, 2007 @12:23AM (#19007677)

    They use an interface that literally emulates an ancient teletype.
    Thank God for that; doing my job would be a real pain in the ass otherwise. I mean, Christ, if it weren't for piping alone I'd probably spend 10 times as much time (if not more) coding some of the things I do.
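    A small illustration of the point about piping (a sketch, using standard POSIX popen(); the pipeline itself is just an example): three existing tools glued together do the filtering and counting that would otherwise have to be hand-written.

        /* Count how many accounts use each login shell by handing a whole
         * shell pipeline to popen() and reading its output. */
        #include <stdio.h>

        int main(void)
        {
            FILE *p = popen("cut -d: -f7 /etc/passwd | sort | uniq -c", "r");
            if (p == NULL) {
                perror("popen");
                return 1;
            }

            char line[256];
            while (fgets(line, sizeof line, p) != NULL)
                fputs(line, stdout);           /* e.g. "  12 /bin/bash" */

            pclose(p);
            return 0;
        }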
  • Re:Hey! (Score:3, Insightful)

    by einhverfr ( 238914 ) <chris...travers@@@gmail...com> on Sunday May 06, 2007 @12:34AM (#19007755) Homepage Journal
    No, it is not stupid. You seem to be suggesting that snapshotting should be something that can be done on an arbitrary block device. To do this, you would have to have it supported in the block device handling, and this could be *very dangerous* in some environments (GFS, for example, or other DLM-based clustered filesystems) and misleadingly useless in others (snapshotting one volume in a RAID 5 array). To make this work, I think it is best to handle snapshotting on the bottom half of the filesystem (the inode layer -> block device interface). However, there is some use in having it handled by the LVM (in that it allows you to snapshot logical block devices rather than filesystems).

    I am not saying that the current system is perfect (no system is, and one should always strive for improvements), but I don't think doing snapshots of arbitrary block devices is a good way to handle it.

    My list of things to improve include:
    1) Change the /dev/ location of LVM volumes to /dev/lvm/... Makes for less typing and more transparency. Makes the entire picture both cleaner and clearer.
    2) Make sure that *all* the newbie Linux documentation covers LVM.
  • by flyingfsck ( 986395 ) on Sunday May 06, 2007 @02:12AM (#19008201)
    Many people complain about there being 'too many Linux distributions'. The fact is that there is only one GNU/Linux. The distributions merely provide a toolkit that goes along with it; the underlying systems are all the same.
  • Re:That's fine (Score:4, Insightful)

    by einhverfr ( 238914 ) <chris...travers@@@gmail...com> on Sunday May 06, 2007 @02:26AM (#19008255) Homepage Journal
    Evidently you have never really followed the development of OSI. The only reason the layers are taught as "guidelines" today is that the OSI stacks intended to implement them fell flat: they were horribly bloated monstrosities which solved all the wrong problems in all the wrong ways and took too long to get to market. On the other hand, the TCP/IP 4-layer model is actually pretty strict (as the OSI model was designed to be).

    Just to provide some context, the OSI initiative was an attempt by the UN ITU and other bodies to create an ultimate convergence network capable of adequately handling data and voice across the same physical links. Many of the layers in the OSI protocol diagram (such as the data link layer) are designed to merge circuit-switched and packet-switched paradigms. The idea was that if you could provide the flexibility to create virtual circuits for voice traffic and still handle packets with the remaining bandwidth, you would not need separate network access points for your Internet and voice traffic. Many of the OSI protocols (such as H.323) assume that such virtual circuits are available, which is why they are so cumbersome over TCP/IP.

    I personally think that the OSI board designed the wrong kind of network for the wrong kind of problems. It is better to have a TCP/IP model, perhaps multiplexed with voice over ATM than to have intimate integration between such fundamentally different services. I also think that if people are going to teach the OSI model, they need to also teach the OSI design goals and those protocols which are still based on it: X.400, X.500 (and LDAP, which is basically X.500 over TCP/IP), X.509 (and hence SSL), H.323, T.120, and ASN.1.

    Most of the time, when people start getting experience with these protocols they run screaming from anything OSI ;-)
  • by SQLz ( 564901 ) on Sunday May 06, 2007 @02:50AM (#19008315) Homepage Journal
    That thing that just flew over your head? That was the entire point of open source and the GPL; you just missed it. We want 100 different solutions to every problem, not forever mind you, but all the code is GPL and belongs to the community. Say out of those 100 different solutions, 5 projects have awesome code and the rest are not great, incomplete, etc. It's a win-win either way. First, the other 95 people learned something, and at some point the things they write won't be crap; plus they gave back to the community. They might have some areas of their code that can be assimilated into other projects later on. Out of the 5 good code bases, only 1 or 2 will generally prevail. This is why MS can't stop open source: they can't pay 100 people to solve the same problem and then let thousands of beta testers decide which is best.
  • by l3v1 ( 787564 ) on Sunday May 06, 2007 @03:43AM (#19008521)
    I can understand that in certain cases "layering violations" are bad, but Linux kernel developers don't even seem to be willing to experiment or think outside the box at all

    Such comments are just plain weird. You people seem to think everybody is a genius except the Linux kernel devs. They are the ones who can't think otherwise, they have the fault of following rigid rules, they are to be blamed that wonderful innovations don't follow the rules, they should think outside of the box while the rest don't even bother to try thinking inside the box. Damn, how could that happen, only the dumb devs became kernel devs? Geez, get a grip.

  • by Phleg ( 523632 ) <stephen AT touset DOT org> on Sunday May 06, 2007 @07:18AM (#19009233)
    What the fuck? Why should the kernel developers be responsible for picking which desktop environment, mail client, and office suite I use? You might as well have architects tell you what kind of company you can put in their building.
  • by Anonymous Brave Guy ( 457657 ) on Sunday May 06, 2007 @02:47PM (#19012037)

    Sorry, but I can't agree with your reasoning. To explain, let me set out a few realities of software development, as I've personally come to see them after some years as a developer:

    1. Code reuse is mostly a myth. One significant exception is libraries/components — code specifically designed to be modular, generic, self-contained and open to reuse, and typically on a relatively large scale. Apart from that, it is much more common to reuse design concepts, interface standards and the like, rather than code itself. Reuse of isolated code fragments from one project in another is very rare.
    2. Choice is not always a good thing. The simplest reason for this is that if you only need one product to do a job, then at the time you're doing that particular job, having one good product available is worth more than a choice among multiple inferior products. And of course, if you have finite development resources, they can get more done in combination than in competition, other things being equal.
    3. The first attempt at a complex project never gets everything right. It might be a good design, but it won't be the best. It won't anticipate future changes to the ideal degree for optimal development over the lifetime of the project. In general, the decisions won't be as good as they would with hindsight. Prototyping work often gives a very high return on investment. However, development processes that emphasise working only to current requirements with limited or no future planning do not scale to larger projects and tend to collapse under their own weight after a while.

    Now, these are obviously blanket statements, and no doubt there are valid exceptions to each rule. Indeed, the rules themselves seem almost contradictory, though I think that is just an illusion: what they really tell us is that balance is required, both in the degree of concentration of effort vs. parallel development, and in the degree of future planning and generality vs. the efficiency of developing what will actually be used in the end.

    Now, take a look again at the argument you gave in the parent post.

    If you have 100 different ideas for ways to solve a problem, then go ahead and develop 100 different solutions to see which work best. But in reality, do you really have 100 different ideas, or are there only a relatively small number of solutions, with many implementations of each and few real differences?

    OK, let's assume there really were 100 unique approaches, and that in practice 5 of them turn out to work best. Did the other 95 learn something developing their failed alternatives? Perhaps, but they might have learned something helping to develop the successful alternatives as well. After all, if so few of the ideas really worked out, would it not be better to study those and the ideas and techniques of the people behind them?

    You also suggested that the 95 would have given something back to the community. But would what they gave back have much value? Again, it seems unlikely that if 5 different solutions were all viable, the other 95 would really offer many further ideas in terms of high level design or general approach. And again, if they did, perhaps those same good ideas could have made the better solutions even better still if the development teams had compared notes earlier, giving a best-of-both result. The 95 might also have some neat implementation tricks, but as I suggested above, in reality it seems very rare for one project to borrow code from another in this way unless the code in question was specifically developed as a self-contained library, in which case it's not really specific to one of the 95 failed attempts anyway.

    It's not hard to see that a small number of ideas, perhaps 2, perhaps 5, might offer genuine pros and cons, and with a concentration of development effort they might be able to learn from each other's experience and systematically share code for common functionality so that all benefit. But really, I don't see this happening at a micro level. It has to be macro.

"Ninety percent of baseball is half mental." -- Yogi Berra

Working...