Kernel Changes Draw Concern

Saeed al-Sahaf writes "Is the Linux kernel becoming fat and unstable? Computer Associates seems to think so. Sam Greenblatt, a senior vice president at Computer Associates, said the kernel is 'getting fatter. We are not interested in the game drivers and music drivers that are being added to the kernel. We are interested in a more stable kernel.' There continues to be a huge debate over what technology to fold into the Linux kernel, and Andrew Morton, the current maintainer of the Linux 2.6 kernel, expands on these subjects in this article at eWeek."
This discussion has been archived. No new comments can be posted.

  • Just my $0.02 (Score:5, Interesting)

    by maotx ( 765127 ) <maotx@yCOWahoo.com minus herbivore> on Tuesday April 19, 2005 @07:15PM (#12287713)
    Members of the open-source community are expressing concern over rapid feature changes in the Linux 2.6 kernel, which they say are too focused on the desktop and could make the kernel too large.
    "We are not interested in the game drivers and music drivers that are being added to the kernel. We are interested in a more stable kernel."


    If you don't want it, don't compile it in. That's the best part about having the kernel open and so easy to manipulate. With the GUI available for modifying the kernel, as well as a detailed set of instructions built right in, anyone can sit there and remove support for the latest gaming joystick if they so choose. No one is making you keep it. If the kernel didn't have the option of supporting that hardware, or if such support were discontinued, then Linux would never be ready for the desktop. Just because Morton or Linus decides to add or accept support for the desktop community doesn't mean the kernel will be any less stable. Who is to say that adding gaming support took time away from stabilizing the kernel? If I'm strictly a game hardware designer and send in a contribution to support the latest device, that doesn't mean I could have spent that time improving the kernel instead; I may not be comfortable doing that. In other words, maybe I can't stabilize the kernel, but I can write new drivers for it. And if I spend my time doing that, it doesn't take time away from those improving and stabilizing the kernel.
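    The "configure it out" step can be sketched like this. The tiny .config written here is a stand-in invented for illustration, though CONFIG_INPUT_JOYDEV is the real 2.6-era option for the joystick device interface:

```shell
# Sketch: turning a driver off by editing .config the same way the config
# tools ("make menuconfig", "make gconfig") would record it. The .config
# below is a two-line stand-in, not a real kernel configuration.
printf 'CONFIG_INPUT_JOYDEV=y\nCONFIG_EXT3_FS=y\n' > .config

# Disable the joystick interface driver:
sed -i 's/^CONFIG_INPUT_JOYDEV=y$/# CONFIG_INPUT_JOYDEV is not set/' .config

grep JOYDEV .config   # → "# CONFIG_INPUT_JOYDEV is not set"
```

    In a real tree you would follow this with the usual make && make modules_install && make install.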

    The part that really caught me off guard is the inclusion of the Xen virtualization technology. [crn.com] Big changes are coming to the kernel that are really going to improve Linux and its functionality in the business and home world.
    • Re:Just my $0.02 (Score:5, Insightful)

      by rgmoore ( 133276 ) * <glandauer@charter.net> on Tuesday April 19, 2005 @07:23PM (#12287779) Homepage
      If you don't want it, don't compile it in.

      Which is exactly what Andrew Morton said. I think that the underlying issue is a human resources one. CA wants Linus and Andrew to spend all of their time working on "Enterprise" features and none of it on things like improving Linux's real-time performance and integrating drivers for non-server hardware. I think that they're being selfish and unreasonable, but that seems to be par for the course for CA.

      • Re:Just my $0.02 (Score:5, Insightful)

        by bogado ( 25959 ) <bogado&bogado,net> on Tuesday April 19, 2005 @07:57PM (#12288044) Homepage Journal
        They should then hire them and pay their salaries. It is as simple as that: if you expect to be able to demand how they spend their time, hire them. But good luck expecting that they will stay on as maintainers if they suddenly forget about a whole part of the community who expect to run linux on their desktop.
        • by grishnav ( 522003 ) <grishnav@@@egosurf...net> on Wednesday April 20, 2005 @03:59AM (#12290575) Homepage
          quote osdl.org (emphasis mine):

          OSDL - home to Linus Torvalds, the creator of Linux - is dedicated to accelerating the growth and adoption of Linux in the enterprise. Founded in 2000 and supported by a global consortium of IT industry leaders, OSDL is a non-profit organization that provides state-of-the-art computing and test facilities in the United States and Japan available to developers around the world. OSDL's founding members are IBM, HP, CA, Intel, and NEC. A complete list of OSDL member organizations is provided on the member page at OSDL Members.
      • by gfody ( 514448 ) on Tuesday April 19, 2005 @08:37PM (#12288376)
        what's wrong with a modular micro kernel design? why must these things be compiled in? what about late binding? I've never been a fan of the macro kernel design, for exactly these reasons, and it seems like obviously the wrong design for a kernel anyway.
        • by jc42 ( 318812 ) on Tuesday April 19, 2005 @10:16PM (#12289155) Homepage Journal
          what's wrong with a modular micro kernel design? why must these things be compiled in? ...

          Because that's not how Microsoft does it. And the business world will never accept linux until it's changed to mimic MS Windows' design. Haven't you been listening to what people have been saying here for the past N years? It's routine to point out a good design feature of linux and claim that that's why linux Isn't Ready For The Desktop, and won't be until that design is changed. This is mentioned more often than the impending death of *BSD.

          (Lessee, do I need a ;-) here? Nah ...

      • Re:Just my $0.02 (Score:5, Insightful)

        by orthogonal ( 588627 ) on Tuesday April 19, 2005 @11:16PM (#12289504) Journal
        "CA wants Linus and Andrew to spend all of their time working on "Enterprise" features and none of it on things like improving Linux's real-time performance and integrating drivers for non-server hardware."

        And this points to the real evolution in linux that has Microsoft sweating: what CA wants is a kernel that works better for businesses. Why? Because businesses have come to rely on linux.

        Business (in general, I'm not talking about CA specifically but about all the businesses that now use linux in their operations or, even more, in their firmware) to linux: "Linus, we didn't pay you to write the kernel, we didn't give you much help in writing it, we've often appropriated it and ignored our legal responsibility under the GPL while at the same time keeping our own drivers closed-source and binary only. But, ah, now that we use -- for free -- what started out as your hobby project, we expect you to give up your hobbyist ways and toe the line, because it's now our bottom line."

        This really isn't all that much different from the RIAA's "buggy-whip manufacturers'" outlook on file-sharing: "we've always made buggy-whips, and we loved it when Linus and the rest of the OS community were producing free leather for us to make buggy-whips, but now that you're producing those infernal auto-mobiles, well, you'd better stop before you threaten our profits."

        The one thing I've never liked about the GPL was that it gave the same rights to a for-profit business as to a fellow hobbyist. I'm more than glad to share my code with a fellow who, like me, is coding for the love of it. I'm a bit less happy to share with someone who just sees my uncompensated work as a way to parasite off it.

        Linus should tell CA that businesses have gotten far far more -- just in dollars, I'm not talking intangibles -- from Linus than they can ever repay, and that he's going to go on doing what makes Linus happy. After all, that attitude worked out pretty well for the parasites last time around.

        As for the rest of us, maybe those of us who can and do code should ask ourselves why we're so happy to give our work away for free to businesses that do their level best, day in and day out, not to give away anything for free.

        Is the GPL really our best answer?
        • Re:Just my $0.02 (Score:5, Insightful)

          by rgmoore ( 133276 ) * <glandauer@charter.net> on Tuesday April 19, 2005 @11:59PM (#12289730) Homepage
          The one thing I've never liked about the GPL was that it gave the same rights to a for-profit business as to a fellow hobbyist. I'm more than glad to share my code with a fellow, who like me, is coding for the love of it. I'm a bit less happy to share with someone who just sees my uncompensated work as a way for him to parasite off it.

          But how about sharing it with a multinational company that is able to throw massive resources into helping you to develop your program? If you shut out all companies you shut out the freeloaders, but you also shut out companies that would otherwise be helping your project. The Linux kernel isn't mostly the work of hobbyists, and it hasn't been for a long time. For many years Linus worked for Transmeta, who hired him in part because they wanted to use Linux with their chips, and now he works for OSDL, which is funded by big corporate Linux users. Alan Cox works for Red Hat. Marcelo Tosatti works for Conectiva (now Mandriva or whatever they're calling it). The list goes on and on.

          And then there are the direct corporate code contributions. SGI has contributed XFS and a lot of work on NUMA. IBM has contributed a boatload of code including JFS, NUMA, and RCU, and they've tried to contribute more things that were eventually passed up because others came up with better solutions. Namesys developed ReiserFS. Many vendors have contributed drivers for their hardware. The Linux kernel wouldn't be nearly what it is today if those companies hadn't been contributing.

          The key thing to understand is that freeloaders don't actually cost anything, except for the bandwidth they use for downloads, but contributors help to build the software. It's smart to let anyone use the software because then anyone can be a contributor. Help from the IBMs and Red Hats of the corporate world more than pays for all the freeloaders.

        • Re:Just my $0.02 (Score:4, Interesting)

          by mcrbids ( 148650 ) on Wednesday April 20, 2005 @01:50AM (#12290170) Journal
          Is the GPL really our best answer?

          Apparently not for you. The neat part about licenses, is that they're so damn easy to cook up. For example:
          "This code is licensed under GPL 2, except that any changes must be posted to website http://foobar.com/projectname as long as such site is available".
          Simple, no? You could say that:
          "This code is open and free for any private, human entity. It may not be used, owned, or applied by any non-human entity, including Corporations, Trusts, or other fictitious legal entity in any form."
          Hard, wasn't it?

          See, as the licensor, you can put pretty much any term you want. There are *SOME* limitations, but they aren't what you might think.

          Ever READ the GPL? It's written in plain English, not Lawyerspeak. (Oh, and IANAL, all that jazz.) When dealing with anything legal, lawyerspeak is to English as code is to specifications. It's intentionally a little halting because it's precise.

          If you figure your licensed product is worth millions, get an attorney. Otherwise, specify the terms you like, and enjoy!
      • Re:Just my $0.02 (Score:4, Insightful)

        by quarkscat ( 697644 ) on Wednesday April 20, 2005 @12:01AM (#12289746)
        Andrew and Linus have been doing a fantastic job on the linux kernel. CA apparently has their knickers in a knot because they expect someone else to build the enterprise kernel that they need/want. F/OSS is great in this regard, especially the kernel. Build it in, or leave it out -- how hard is that?

        And major F/OSS projects like linux aren't artificially hampered by the commercial OS vendors that want to sell a "desktop" version and a "server" version, or worse yet charge per-client licenses (WTF!). Linux is eminently tweakable, and runs on everything from embedded ARM7 to supercomputer cluster IA64. Stable linux distributions like Slackware offer far more compatibility from desktop to server than RedHat's offerings (okay, FC4 is a "committee" project, not unlike the proverbial horse that became a camel).

        Perhaps CA just needs to hire some F/OSS consultants -- they could get on the cluetrain just by lurking on forums like slashdot. So to CA, I say "Quit your mewling!".
    • Re:Just my $0.02 (Score:5, Insightful)

      by spencerogden ( 49254 ) <spencer@spencerogden.com> on Tuesday April 19, 2005 @07:27PM (#12287808) Homepage
      To further expand on this: if CA thinks the kernel is unstable because developers are working on game drivers instead of stability, then they should hire some developers themselves. Part of your contract with open source is that you can't tell a guy working for free that he is working on the wrong thing. If you want a certain feature, there is always a price. There are plenty of examples of open source developers being hired by employers to work on features the employer is specifically interested in.
      • by Sycraft-fu ( 314770 ) on Tuesday April 19, 2005 @07:57PM (#12288042)
        Because it may encourage people to just go to a commercial alternative. If you tell a company "We don't care about feature X; if you want feature X, hire a dev and code it yourself," they may do an analysis and determine that, you know what, it would cost us $50,000 to have a contractor develop this, whereas we could buy a commercial solution that does what we want for $10,000.

        This is especially true for companies whose core business isn't IT or engineering or the like. If a company just uses computers as a means to an end and doesn't really have a tech staff, it can be expensive, difficult and risky to contract someone to do the development they need. Better to just get a commercial solution.

        I'm not saying this means OSS devs need to jump up and meet every request from every person that whines; that's clearly impossible. However, I find that the OSS community in general is way too fast to say "It's open; if you want the feature, write it yourself!" Rather, the merit of the request should be weighed; it may be worth your while to work on. If it's not, then you should give reasoning as to why not, and not just say "Do it yourself."
        • For one, this is CA, their job is writing software. But I think you realize that.

          As for other situations. If you are going to get a certain level of support for a product (new features, custom installations), that is going to cost you a certain number of dollars, whether it be licensing costs (you need to be a large enough customer to have that level of influence with a vendor), or it be in hiring developer time to work on an OS project.

          I would love to see some sort of feature wishlist where smaller companies could vote with their dollars on certain bugs or features. I've heard of bounty systems like this being tried, and I would love to hear more about why they haven't really worked yet.

          You are right about the OS community being quick to jump on the "code it yourself" excuse. But that is reality of dealing with volunteers. Some are motivated by competing with commercial products, and will work on features to make that happen. Others are totally unconcerned with what corporations think about their work. At the end of the day, many developers are scratching their own itch and shouldn't be expected to care about what other people want their software to do.

          At the same time as some people are quick to jump on this excuse, others are quick to assume that the goal of OS should be to beat proprietary software. This is simply not many people's goal.
    • Re:Just my $0.02 (Score:4, Insightful)

      by El Cubano ( 631386 ) on Tuesday April 19, 2005 @07:28PM (#12287818)

      If you don't want it, don't compile it in.

      It gets better. If someone says "but I use a stock kernel," remind them that they don't have to load every module under the sun.

      This guy would be better off telling hardware manufacturers to quit making new hardware. Yeah, right! Also, why does he not complain about bloat in the Windows kernel? IIRC, there is a much larger segment of hardware supported in Windows than in Linux. Methinks his statement should be modded -1 Flamebait.

    • Re:Just my $0.02 (Score:5, Interesting)

      by dindi ( 78034 ) on Tuesday April 19, 2005 @07:50PM (#12287993)
      well while I agree with that, I would be happy to have a modular download option for the kernel...

      grr, let me rephrase: an option to download only the stuff I need, e.g. I could get only the sources that I actually select in the kernel config (gconfig, menuconfig, config, whatever feels good)...

      on the other hand, if you cannot download 40 megs, buy a distribution on cd/dvd or use windoze

      I am happy with the kernel, and whoever is monkey enough to compile everything IN and then complain about it being big, well uhm... maybe just use a precompiled modular kernel with autoload modules

      i love the kernel supporting more and more of the junk i can stuff into my machine to enhance my gamin...video... I mean work and productivity ...
    • by jc42 ( 318812 ) on Tuesday April 19, 2005 @10:10PM (#12289111) Homepage Journal
      Big changes are coming to the kernel that are really going to improve Linux and its functionality in the business and home world.

      Yeah, and we know that Linux Will Never Be Ready For The Desktop until firefox and thunderbird are integrated into the kernel.

  • No problem (Score:5, Funny)

    by Anonymous Coward on Tuesday April 19, 2005 @07:15PM (#12287716)
    A real step towards the desktop is for the average user to be able to build a sleek customized kernel, right?
  • Isn't that why, (Score:3, Interesting)

    by Grand Facade ( 35180 ) on Tuesday April 19, 2005 @07:15PM (#12287719)
    Isn't that why you compile your own kernel?

    FP?
    • Re:Isn't that why, (Score:4, Informative)

      by Stevyn ( 691306 ) on Tuesday April 19, 2005 @07:21PM (#12287763)
      Even if you don't compile it and rely on your distro for it, don't they usually make everything that's not required for booting as a module? So if you don't have the hardware and you don't need the driver, the module is never loaded and you don't waste the memory.
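      A quick way to see that unused drivers cost nothing at runtime is lsmod, which lists only the modules actually resident in memory. A sketch, filtering a captured listing for modules with a zero use count (the module names and sizes here are made up):

```shell
# lsmod output has columns: Module, Size, Used by. Modules with a use count
# of 0 are loaded but idle -- candidates for "modprobe -r". The listing is
# a fabricated sample, not captured from a real machine.
cat > lsmod.txt <<'EOF'
Module                  Size  Used by
snd_intel8x0           31232  1
joydev                  9920  0
bluetooth              49252  0
EOF
awk 'NR > 1 && $3 == 0 { print $1 }' lsmod.txt   # → joydev, bluetooth
```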

  • "fatter" (Score:3, Interesting)

    by Wienaren ( 714122 ) on Tuesday April 19, 2005 @07:17PM (#12287731) Homepage
    Bullcrap. Who likes installing zillions of extra drivers when updating the kernel?!

    And about "fatter": I don't get it. You will probably use ONE sound driver, ONE (or perhaps two) network drivers, etc. Just the fact that the *number* of drivers is getting bigger does not mean the kernel "is getting fatter".
    • Re:"fatter" (Score:5, Insightful)

      by garcia ( 6573 ) * on Tuesday April 19, 2005 @07:36PM (#12287875)
      Because there aren't default kernel options for various tasks, and because it's not exactly user friendly to configure and compile your own kernel, people end up compiling in crap that they don't need.

      The kernel is fine; it's the setup that sucks.
      • Re:"fatter" (Score:4, Insightful)

        by Wdomburg ( 141264 ) on Tuesday April 19, 2005 @07:47PM (#12287971)
        This is why there are kernel modules. As much as linux ricers like to argue otherwise, there's virtually no reason a normal end user should ever build their own kernel. Nor should there be. The idea that compiling a kernel should ever be optimized for average joe end users is stupid.
        • Re:"fatter" (Score:4, Insightful)

          by garcia ( 6573 ) * on Tuesday April 19, 2005 @07:50PM (#12287991)
          As much as linux ricers like to argue otherwise, there's virtually no reason a normal end user should ever build their own kernel.

          So that loading the kernel on 100s of machines is as easy as distributing a single file rather than a distribution of files.

          Personally? I never used modules when I could just compile it all in. It's easier to transport that way.
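          The single-file deployment this describes can be sketched as follows; the hostnames are hypothetical, and scp is echoed as a dry run so the loop is visible without real hosts:

```shell
# Sketch: a monolithic kernel (everything compiled in, no modules) is one
# file, so rolling it out to a fleet is one copy per host. Hostnames are
# invented; the scp command is only echoed here, not executed.
KERNEL=arch/i386/boot/bzImage
for h in node01 node02 node03; do
  echo scp "$KERNEL" "root@$h:/boot/vmlinuz"
done
```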
    • Re:"fatter" (Score:5, Interesting)

      by DavidTC ( 10147 ) <slas45dxsvadiv D ... neverbox DOT com> on Tuesday April 19, 2005 @07:40PM (#12287911) Homepage
      Yeah.

      I already have a third part driver, from linux-wlan-ng, and every time I upgrade the kernel I have to remember to recompile it again.

      The kernel should have everything. Obvious, for licensing reasons, only GPL stuff, but everything that's GPL, and is a kernel driver, and is up to minimal code standards.

      In fact, having to track down third party drivers has been a much more valid complaint than 'Too many drivers', which is just idiotic.

    • Re:"fatter" (Score:3, Interesting)

      But before one can compile the kernel, one has to download and un-gzip/tar it, configure it then build it and then hope it works - assuming it builds, which is not always the case.

      How many people actually use I2O, HAM and all that exotic hardware Linux can support? Spinning off all the exotic sections into separate downloads would seriously reduce the average download size. For fairness to server people, I suppose even the sound system could be dumped into a separate archive.

      If "make *config" conveniently
      • Re:"fatter" (Score:3, Insightful)

        by nzkbuk ( 773506 )
        Going to the trouble of removing sections from the kernel would only bring us back to the days when it was quite difficult to get all your hardware working and you had to search for drivers.

      If you're going to run a typical "server" for a business, then a 20-50MB download isn't that much. Combine it with its source so you can build a different kernel for each server (if needed).

      Yes, there are large sections of the kernel I've never touched (and I doubt I ever would), but I for one still want to see it in th
  • Two Sides (Score:3, Insightful)

    by FyberOptic ( 813904 ) on Tuesday April 19, 2005 @07:17PM (#12287732)
    I can see why some people would have a problem with this, such as those that see Linux as a networking OS or as more of an embedded system. But if Linux folks ever want to see Linux as an OS for the masses, you have to cater to the average joe and offer all of these features for games and video and the like, if it's ever to compete with the media abilities of Windows and Mac.
  • by Vlad_Drak ( 20809 ) on Tuesday April 19, 2005 @07:17PM (#12287736)
    That lets you not have ISDN, USB Dildo, and/or Ham radio support.
  • Hypocritical (Score:5, Insightful)

    by sisko ( 114628 ) on Tuesday April 19, 2005 @07:17PM (#12287737)
    I think it's laughable that Computer Associates is talking about other people's bloated software.
  • by linuxhansl ( 764171 ) on Tuesday April 19, 2005 @07:18PM (#12287741)
    Surely nobody worries about the size of the source tarball.

    Drivers that are not compiled are not taking any additional space. Drivers that are not used all the time can be compiled as modules...
    So I guess I do not understand the criticism here.

    • by panic_paranoia ( 746533 ) on Tuesday April 19, 2005 @07:31PM (#12287836)
      I'm perfectly content with compiling a kernel to suit my own needs, however many distros aimed at newbies tend to go for the "support every device possible" approach for a default install. For example, I recently installed mandrake on a machine for a friend (simple default install) to find it loading support for pcmcia, bluetooth, and many other completely unnecessary modules and services. What newbie knows how to disable services or build a more customized kernel? To be fair, this is not a problem with the official kernel source but with the way many distros make use of its capabilities.
      • to find it loading support for pcmcia, bluetooth, and many other completely unnecessary modules

        Does it actually load them or does it just print a message which indicates it's going to do so, check for hardware, and exit when it can't find it? If it does actually load the modules, won't they be unloaded after a while if they aren't used at all?

        For example, if I try to load a pcmcia module on a destop machine from the command line it indicates it cannot find the hardware, and exits. I suspect the only diff


      • Have you ever installed a late version of Windows?

        Watch the installer load device drivers for every known weird form of RAID before it even begins to ask you how you want to install the OS?

        And then how long does it take to do "hardware detection" - versus Knoppix that does it all in the three minutes or so it takes to boot from CD?

        Yes, Windows is bloated - bloated with (so-called) "features", not drivers. If Linux makes THAT mistake, we can complain. Having a bunch of drivers and support for oddball subsystems loaded into the kernel is not serious and until somebody DEMONSTRATES a stability problem, it's bullshit.

        So far I've heard nobody say the 2.6 kernel is in FACT unstable because of x, y, z drivers or subsystems.

        • by Anonymous Coward on Tuesday April 19, 2005 @09:11PM (#12288649)
          So far I've heard nobody say the 2.6 kernel is in FACT unstable because of x, y, z drivers or subsystems.

          I'll say it: the 2.6 kernel is unstable on x86_64 platforms with USB 2.0 mass storage devices. There are bug reports everywhere. The response? "It's fixed." The reality? The system locks up like Fort Knox whenever it's booted with a USB 2.0 mass storage device attached.
      Ummm.. you don't need to recompile a module to turn off pcmcia or bluetooth.

      /sbin/chkconfig pcmcia off
      /sbin/chkconfig bluetooth off

        There's also a GUI tool for this. For that matter, you could not select those services to start in the first place. There's a dialog for it in the installer.
  • WTF? (Score:3, Insightful)

    by fatboy ( 6851 ) * on Tuesday April 19, 2005 @07:19PM (#12287745)
    On the enterprise front, Morton said he expects to merge code from Cambridge University's Computer Laboratories' Xen virtualization technology into the Linux kernel within the next few months. Xen "does the right thing technically," unlike other technologies, which are mainly workarounds for the fact that the operating system is not appropriately licensed, Morton said.

    Huh????
    • Re:WTF? (Score:3, Interesting)

      by fatboy ( 6851 ) *
      Please don't mod me as a troll. I just don't understand what Morton meant by that.

      Can someone explain it to me, or is this just a badly written article that is referring to the licenses of other virtualization technologies?
    • Re:WTF? (Score:5, Informative)

      by Lemming Mark ( 849014 ) on Tuesday April 19, 2005 @07:40PM (#12287910) Homepage
      [ disclaimer: I'm a Xen developer ]

      I'd say the parent is a fair question, not a troll.

      Morton's point appears to be this:
      * x86 is notoriously unco-operative to full virtualisation
      * trying to fully virtualise it (as VMWare and Virtual PC do) is a work around for the fact you can't modify the guest OS because it's closed source
      * fully virtualising x86 in software results in rather painful performance hits for many workloads and a very complex hypervisor
      * for open source OSs, it therefore makes sense to use paravirtualisation. This involves porting the OS to a special virtual machine-oriented "architecture", closely resembling the real hardware but without the costly-to-virtualise parts.
      * paravirtualisation can be argued to be better than full virtualisation because (esp. on x86) the performance hit is much lower.

      Porting of open source OSs is happening: Linux 2.4 and 2.6, NetBSD, FreeBSD 5.3 and Plan 9 can run on Xen (although currently only the Linuxes are supported as "host" or "Dom0" operating systems).
      • Re:WTF? (Score:4, Informative)

        by Anthony Liguori ( 820979 ) on Tuesday April 19, 2005 @08:38PM (#12288387) Homepage
        * fully virtualising x86 in software results in rather painful performance hits for many workloads and a very complex hypervisor

        Something I think Sam missed is that Xen also supports VT which provides full-virtualization on the x86 (which makes Xen undeniably a true-hypervisor).

        Compiler-driven para-virtualization is an interesting emerging area of research too that should make porting OSes to Xen much simpler.

        All we need now is a really cool hypervisor-aware file system.. like a XenFS ;-)
  • by ShaniaTwain ( 197446 ) on Tuesday April 19, 2005 @07:19PM (#12287753) Homepage
    "We are not interested in the game drivers and music drivers that are being added to the kernel."

    ..we want text, orange, perhaps green on a black background. We want large buzzing metal boxes that only we are allowed access to. We want to store our data on large spinning reels of magnetic tape, or better yet punch cards.

    also we want a sandwich.

    That is all.
  • by jm92956n ( 758515 ) on Tuesday April 19, 2005 @07:20PM (#12287759) Journal
    As a proud owner of a Celeron 500MHz machine, I must express my concern.

    The problem, I think, is that developers tend to be people who love computers. And people who love computers tend to have nice rigs, just as people who enjoy cars tend to spend a disproportionately large amount of their income on cars (ever see the parking lot at a LAN party -- complete with people pulling multi-thousand-dollar machines out of the hatch of a Hyundai?).

    Perhaps Linux needs more developers from third world nations; the kid from a rural village with intermittent electricity getting his hands on an old but useful machine and learning that he, too, can tell it to do all sorts of things!

    • by McDutchie ( 151611 ) on Tuesday April 19, 2005 @07:30PM (#12287829) Homepage
      As a proud owner of a Celeron 500MHz machine, I must express my concern.

      This proud owner of an AMD K6 300 MHz has compiled and runs Linux 2.6.11.7 without a hitch, and continues to not see the problem.

    • As a proud owner of a Celeron 500MHz machine, I must express my concern.
      I have several old Pentium II machines running at speeds ranging from 233 MHz to 400 MHz that are running the latest kernels just fine working as servers.

      My point with this is that it's not the kernel that's making GNU/Linux systems crawl on older hardware. It's the newer versions of GNOME and KDE. As long as you aren't running GNOME or KDE, older hardware works just fine. My servers chug along just fine, and my 233 MHz laptop with 64 MB of RAM running Sawfish also suffices for virtually all my common tasks (except running any Mozilla product :-P ).

      So, certainly, GNU/Linux may need more developers from third world nations, as you put it. Linux, however, does not.

    • by Master of Transhuman ( 597628 ) on Tuesday April 19, 2005 @08:06PM (#12288120) Homepage

      It's ridiculous to suggest that the kernel layout should be restricted to the level of a 486.

      First of all, you can already do that if you know what you're doing. People in the Third World either know what they're doing or get their machines from people who do - just like in the rest of the world.

      Secondly, there are tons of stripped down distros. Pick one.

      This is merely asking to have your cake and eat it, too -- you want the latest kernel and everything it can support to run on the oldest hardware.

      Try it with Windows 2003 Server.

      Then go back and read the specs for Longhorn: a GB of RAM, a terabyte of hard disk, and a minimum 3GHz CPU.

      The Linux kernel is intended to push the boundaries of OS technology - not run on every Third World machine in existence.

      Yet, at that, as I pointed out, Linux is incredibly flexible in what it will run on compared to virtually every other OS in existence.

      All of this is just utterly pointless criticism.

  • this is nothing new (Score:5, Interesting)

    by winkydink ( 650484 ) * <sv.dude@gmail.com> on Tuesday April 19, 2005 @07:20PM (#12287760) Homepage Journal
    I worked for a UNIX computer mfg in the late 80's. Even then there were arguments about kernel-bloat.

    What would be cool is if the linux distros had default kernel options, much the way some of the majors have Workstation, Server, etc... that would adjust the kernel based on how the machine was being used.

    Yes, I know one can reconfigure the kernel oneself, but then it requires personal care and feeding for patches, upgrades, etc... It becomes one more thing one has to do. Personally, unless I really need it, I'm not going to bother... too much of a PITA
    • What would be cool is if the linux distros had default kernel options, much the way some of the majors have Workstation, Server, etc... that would adjust the kernel based on how the machine was being used.

      Slackware has this (or something rather like this) -- it comes with a whole set of kernels compiled for different kinds of hardware.

  • BS (Score:5, Funny)

    by DrLZRDMN ( 728996 ) on Tuesday April 19, 2005 @07:21PM (#12287762)
    I have never heard of there being a problem with too many music drivers in the Linux kernel....Or any music drivers in the Linux kernel....In fact, I have never heard of music drivers at all
  • by amigabill ( 146897 ) on Tuesday April 19, 2005 @07:22PM (#12287767)
    We're all entitled to our opinions. While CA isn't interested in more drivers or game support, other users are. Conversely, things CA will want are less important to other users.

    I myself would like better multimedia drivers, good solid and easy to install and configure drivers for my PVR-250 and pcHDTV tuner cards in my MythTV box. CA may not give a darn about those at all, but this is my primary Linux goal and getting my particular MythTV rig running is the only application I myself presently give a darn about in all of Linux land.

    I myself do not give a darn about gaming support either right now. That may change in the future if I decide to expand on MythTV and turn the thing into a high-end game console as well. But for the moment I'm not interested, just as many gamers may not be particularly interested in TV tuner drivers.

    That said, keeping stability and efficiency as primary goals is agreeably a good idea. But I think high-quality (i.e. NOT alpha or beta) drivers for more hardware should also be important.
  • Probably true (Score:3, Interesting)

    by GomezAdams ( 679726 ) on Tuesday April 19, 2005 @07:24PM (#12287782)
    It's most likely that most corporate users are not interested in the entertainment aspect and would like to use only the core parts of the kernel, with those as stable as possible. I say, get a kernel hacker or two and cut your own kernel for corporate use. It's OPEN SOURCE for a reason. Although IBM uses a "standard issue" Red Hat or SuSE kernel for their enterprise systems and does quite well, thank you very much.

    On the other hand, I was screwed so badly by CA that my automatic reaction to anything they say or do is to discount it as coming from that Den of Thieves and Liars.

  • by iPaul ( 559200 ) on Tuesday April 19, 2005 @07:24PM (#12287783) Homepage
    What the heck is up with CA? At the same time they praise Linux, they seem to turn around and bash it. If they have an agenda, it's not clear. Criticism like this doesn't smell right, especially after the flak they gave Linux in Australia. Something's fishy, but I can't seem to see what it is.
    • by The Bungi ( 221687 ) <thebungi@gmail.com> on Tuesday April 19, 2005 @07:32PM (#12287850) Homepage
      Why must all criticism of anything open source be labeled "bashing"? Every single time someone dares utter a single word of "dissent" about Linux or anything else, they must be "shills" spreading "FUD". Every single time. Hell, even when ESR rants about how CUPS sucks, he's branded a retard. A lot of things in free software suck to high heaven. Just like a lot of commercial software does. But the FOSS unwashed masses really need to get a grip. Not everything is perfect just because it comes with source and a bill of rights.
      • by iPaul ( 559200 ) on Tuesday April 19, 2005 @07:41PM (#12287919) Homepage
        You're right, genuine criticism is not bashing. Bashing is what happens when people crap all over something with no intent other than to sling mud. For example, "Linux is currently not a suitable desktop operating system for most users" is a criticism. "It would be impossible to turn the bloated Linux kernel into a desktop operating system because of its rampant IP issues" is a bash. The statements CA made in Australia were a bash because they were made in conjunction with Sun to puff up Solaris and put down Linux, but done so under the guise of an educational rather than promotional event. Consequently, that's a bash.
      • by dbIII ( 701233 )

        Why must all criticism of anything open source be labeled "bashing"?

        This sucks because of something explained clearly = criticism.

        This sucks because of something I cannot understand enough to clearly articulate or really know whether it sucks = bashing.

        "Getting fatter" is an analogy in the first place, and since it refers to the size of the download rather than the executable, it isn't particularly relevant. It isn't clear either whether "stable" is meant in the sense of "doesn't crash" or in the sense of "the code stops churning" -- more code just keeps coming.

  • by anti-NAT ( 709310 ) on Tuesday April 19, 2005 @07:24PM (#12287785) Homepage

    CA have contributed so much to the Linux kernel, so they know what they're talking about. NOT.

    What is CA's motive in saying this? They have no real experience in developing operating systems, nor are they producing data and a testing methodology to back up their opinion.

    It seems to me they might be talking through their hat. [cambridge.org]

  • Inevitable event (Score:3, Interesting)

    by Blitzenn ( 554788 ) on Tuesday April 19, 2005 @07:25PM (#12287792) Homepage Journal
    I hope I don't get a troll rating on this, but I think that as any kernel grows, it becomes exponentially more difficult to project all of the possible interactions between components. Some of the ones that get missed or go untested simply because they weren't foreseen cause problems. Any kernel is going to become more unwieldy as it grows, especially when it starts to encompass so many diverse tasks and support a multitude of dedicated roles. I think attributing problems like the ones this article describes specifically to Linux is biased.
    • by hoxford ( 94613 )
      This is true of any software system. As it grows, complexity increases. The answer isn't (can't be) to stop growth. The answer is to manage the complexity by using well-designed and well-defined interfaces and minimizing side effects.
    • by value_added ( 719364 ) on Tuesday April 19, 2005 @07:46PM (#12287964)
      I hope I don't get a troll rating on this, but I think that as any kernel grows, it becomes exponentially more difficult to project all of the possible interactions between components.

      Actually, that's not the case at all according to this new NY Times Article [nytimes.com]

      ...the Purdue researchers say the real explosive secret lies in the hull, or pericarp ... In some varieties, the pericarp becomes more moistureproof as it is heated, sealing in the steam until the pressure gets so high that the hull fractures and the kernel goes pop.

      In other varieties that don't undergo heat-induced change, the moisture escapes, the hull never breaks and then the kernel goes pfffft.

  • BAAA (Score:4, Insightful)

    by bgog ( 564818 ) * on Tuesday April 19, 2005 @07:26PM (#12287794) Journal
    So then don't build them, you insensitive clod. Why do people seem to think that the kernel is ONLY for them and their market? Just because a driver exists doesn't mean it needs to bloat your kernel. With simple config options you can build a very small, tight kernel.

    If anything the extra junk benefits them because the folks developing those drivers are likely to find bugs in the kernel proper.
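    To make the "don't build them" point concrete: switching a subsystem off is a couple of lines in the kernel config. A hypothetical excerpt from a trimmed 2.6-era .config (the option names are from memory and illustrative; check your own tree's menuconfig help for the real ones):

    ```
    # Sound and joystick support switched off entirely -- neither is
    # built in nor compiled as a module:
    # CONFIG_SOUND is not set
    # CONFIG_INPUT_JOYSTICK is not set
    # Core server-side options stay on:
    CONFIG_MODULES=y
    CONFIG_NET=y
    ```

    A line commented out with "is not set" is exactly how the kernel's own config system records a disabled option, so trimming really is just flipping these entries off.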
  • by Greyfox ( 87712 ) on Tuesday April 19, 2005 @07:30PM (#12287828) Homepage Journal
    There've been concerns about kernel bloat since the 1.3 kernel. I recall there was quite a ruckus when the compressed kernel tarball went over 10mb. But yanno it's gotten more robust and added support for a lot of modern features (Especially in networking) that I really do appreciate having the choice of compiling in. And I'd be surprised if the source was anywhere near the size of the commercial UNIX kernels much less Windows or one of the mainframe OSes. The build system seems to be pretty well capable of containing the bloat as well.
  • by G4from128k ( 686170 ) on Tuesday April 19, 2005 @07:31PM (#12287834)
    I suspect that all long-lasting, end-user OSes tend toward bloatware. Macintosh went through this with OS 7 through 9. Windows appears to be doing this as it progresses to Longhorn. It's just the natural evolution of software to accumulate cruft on the basis of yet another nifty feature that must be added into the bowels of the OS, until the development effort becomes so constipated that the next version never appears or is so complicated/unstable that people abandon it.

    The trick, for Linux, will be to do what Apple did in moving to OS X -- create a new, "from-scratch" (yes, I know Apple borrowed a lot from others) OS with some form of compatibility-creating layer or old-kernel box. Incrementalism only takes an OS so far before revolution is needed to build a new, better system from the ground up.
  • I'm torn (Score:5, Interesting)

    by MadChicken ( 36468 ) on Tuesday April 19, 2005 @07:38PM (#12287888) Homepage Journal
    I don't know what to think about this. On one hand, I used to brag about how Linux never ever crashed on me (not ONCE), despite my heavy tinkering with it. This was, I think, way back in the 2.0 days. Ever since, over a few generations of kernels, I've had to eat those words far too often.

    I really miss the days when I could run on a P166 with 32 MB of RAM, and KDE ran not too badly (as long as you didn't try to open Netscape or StarOffice). I don't think this kind of performance is attainable at all anymore.

    But on the other hand, I'd be loath to run a kernel that didn't at least support USB! I love having ALSA instead of the old mishmash of sound drivers. Ext3 was a relief. I must say that for me at least ip[tables|filter|chains] was confusing, but I trust that the best choices were made... Going back to a kernel that didn't have those features would be simply unacceptable.

    Has the kernel reached a level of complexity where the ol'-time stability isn't likely to happen anymore? Do we just need to react with patches, like the other OSes out there?
    • Re:I'm torn (Score:5, Insightful)

      by Malor ( 3658 ) on Tuesday April 19, 2005 @09:32PM (#12288783) Journal
      Somewhere along the way, the kernel devs seemed to have dropped 'high reliability' as one of their requirements, and Linux is suffering badly for it. I've had trouble with 2.6 just on my toy servers at home... APIC problems interfering with the md driver, for instance. It directly cost me quite a bit of money to buy hardware, troubleshoot, and eventually realize it was the kernel at fault, not the hardware. I shudder to think of what small businesses must be spending to fix 'hardware problems' that aren't.

      It's my belief that the kernel won't really stabilize until they branch off to 2.7. They're too focused on adding new features for the code to ever really shake out and get stable. They're shoveling new stuff in there way way way faster than it can really be debugged.

      And they just wave their hands in the air and say that it's up to the distros to make this mess usable.

      Until they get over this phase, in which they're pushing the hard work of debugging onto everyone else in the world, the kernel is not going to stabilize. And we will be held hostage by particular vendor kernels, instead of being able to track the 'one true Linux'. If we start with Redhat, we're stuck with Redhat. In the past, we were able to fall back on the One True Kernel if Redhat or Mandrake made a mistake. But that's not really an option anymore... tracking the One True Linux is now dangerous, because the kernel devs don't really care if it works right.

      I can't find the precise quote right now, because I can't see my old comments on Slashdot... apparently I now have to pay for the privilege of seeing my OWN old comments... but one of the senior kernel devs said, approximately, that getting 1 out of 3 stable kernels actually stable was an acceptable outcome.

      Until that mindset changes, Linux is just not trustworthy. It needs to be made right BY THE PEOPLE WHO WRITE IT. You can't hack reliability in as an afterthought, it has to be a major focus all the way along. This is exactly the sort of crap we always derided Microsoft for... ship it buggy and then fix it later. I hated this behavior in Microsoft. I hate it just as much in Linux. I switched to Linux because it was, first and foremost, reliable. It no longer offers me that, and I am starting to switch machines over to the BSDs now.

      Waving one's hand and expecting 'the distributions' to do the grunt work of actually making the kernel stable is just wishful thinking... it's expecting other people to do the job that should be the very first one on their list. Reliability is THE MOST IMPORTANT FEATURE. It's not fun, it's not glamorous, but it's what got Linux so popular that these guys actually get paid to do it. If it doesn't return to relatively bulletproof status, then people are going to use other solutions instead, and there won't be as many Linux jobs available.

      It's the reliability that creates the jobs. I wonder if they really grok this?
  • by jbellis ( 142590 ) <jonathanNO@SPAMcarnageblender.com> on Tuesday April 19, 2005 @07:40PM (#12287913) Homepage
    To everyone saying "just don't compile the options you don't want":

    Problem is, that doesn't address the main issue, which is that 3 million lines of optional code are a LOT harder to keep bug-free across all the different combinations than 1 million LOC.

    All bugs may be shallow given enough eyeballs, but the difficulty of debugging the linux codebase may well be increasing faster than the number of eyeballs.
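    The combinatorial half of that worry can be sketched with trivial arithmetic: if each independent boolean config option can be on or off, the number of distinct buildable kernels is 2^N. (The option counts below are illustrative, not measured from any real tree.)

    ```shell
    # Number of distinct configurations N independent boolean options allow.
    combos() {
      echo $((1 << $1))   # 2^N; fine for the small N used here
    }
    combos 10   # 10 options ->       1024 configurations
    combos 20   # 20 options ->    1048576 configurations
    combos 30   # 30 options -> 1073741824 configurations
    ```

    No one claims kernel options are fully independent, but the point stands: the space of combinations to test grows far faster than the line count does.
    
    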
    • by shellbeach ( 610559 ) on Tuesday April 19, 2005 @08:20PM (#12288243)
      Problem is, that doesn't affect the main problem, which is that 3 million lines of options code is a LOT harder to keep bug free among all the different combinations than 1 million loc.

      But isn't most of that code base specific drivers for specific hardware, maintained by individuals who wrote that code? Are you saying that instead of including possibly buggy drivers, it would be better to leave them out and give no support at all to people who happen to have that hardware??

      Remember, any potential bugs in drivers won't affect anyone who doesn't have that hardware - these drivers are compiled in default kernel distributions as modules and never get loaded unless they're needed. All it means is that the kernel modules take up a bit of disk space, which is trivial compared to the sizes of current hard disks. They don't impede performance and they don't do any other harm. I really can't work out what all the fuss is about ...
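      The parent's point is easy to check on any modular distro kernel. A rough sketch (it assumes modules installed under /lib/modules, degrades gracefully elsewhere, and the counts will vary per machine):

      ```shell
      # Drivers shipped on disk vs. drivers actually resident in memory.
      on_disk=$(find /lib/modules -name '*.ko' 2>/dev/null | wc -l)
      loaded=$([ -r /proc/modules ] && wc -l < /proc/modules || echo 0)
      echo "modules on disk: $on_disk, loaded in memory: $loaded"
      ```

      On a typical desktop the first number runs into the thousands and the second into the dozens, which is the whole argument in two integers: unloaded drivers cost disk space, not RAM or stability.
      
      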
  • Thanks, CA (Score:4, Informative)

    by SamMichaels ( 213605 ) on Tuesday April 19, 2005 @07:45PM (#12287954)
    At the risk of getting flamebait or troll, I'll speak my mind anyway.

    How about trying out this GREAT utility called "menuconfig"...then you can unbloat your kernel. In the time it saves you from manually editing your .config, you can unbloat YOUR products.
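    For readers who haven't done it, the cycle being alluded to looks roughly like this (2.6-era; the install steps vary by distro and bootloader, so treat it as a sketch, and note the guard only lets it run from inside a kernel source tree):

    ```shell
    # Trim-and-rebuild sketch; only proceeds when run from a kernel tree.
    trim_kernel() {
      if [ -f Makefile ] && [ -d kernel ] && [ -d drivers ]; then
        make menuconfig              # deselect the drivers you don't need
        make                         # build the trimmed kernel and modules
        make modules_install install # install (distro/bootloader-dependent)
      else
        echo "not in a kernel source tree"
        return 1
      fi
    }
    trim_kernel || true
    ```

    The whole "unbloating" argument amounts to those three make targets.
    
    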
    • Re:Thanks, CA (Score:5, Insightful)

      by HalWasRight ( 857007 ) on Tuesday April 19, 2005 @08:06PM (#12288115) Journal
      Thank you for the blessed menuconfig. Gosh. Really.

      Menuconfig is just the window onto the maze that is the kernel ifdefs. You have no idea of the size or speed impacts of the options you toggle if the help doesn't tell you. You have no idea of the component interactions.

      Menuconfig is just a parking place for problems. The real problem is too many options, and not enough testing of the combinations. That is what CA is complaining about.

      • Re:Thanks, CA (Score:4, Insightful)

        by Legion303 ( 97901 ) on Tuesday April 19, 2005 @09:34PM (#12288810) Homepage
        "The real problem is too many options, and not enough testing of the combinations. That is what CA is complaining about."

        Is it? Because in the article Greenblatt snivels about "too many game drivers!@" and then breaks down completely and starts complaining that Xen "doesn't do enough." I'm not sure which side of the fence he's on. I do know that if I don't have an ATI Radeon in my system I'm not going to be totally baffled by the vast array of ATI driver options. But I don't work for CA.
  • by whichpaul ( 733708 ) on Tuesday April 19, 2005 @07:51PM (#12288005) Journal
    How embarrassing for CA.

    Yeah, I personally find increased driver support a real problem ;P. The last thing I'd want is to NOT have to go scouring the net for some obscure driver.

    If he wants an OS for which you can't optimise the kernel in any way, try microsoft.com. I hear there are a couple there. ;)
  • CA's kernel demands (Score:5, Informative)

    by sloanster ( 213766 ) <ringfan&mainphrame,com> on Tuesday April 19, 2005 @09:23PM (#12288715) Journal
    We are not interested in the game drivers and music drivers that are being added to the kernel. We are interested in a more stable kernel.'

    No offense, but he sounds pretty clueless here - not to mention the fact that there is no "game driver" or "music driver". Perhaps he is referring to device drivers and/or low-latency features, which allow for a better gaming/multimedia experience...

    In any case, he completely misses the point that the kernel, as shipped by the distros, is modular. That means, if a device isn't present, or isn't used, the driver for that device never gets loaded into memory. So it doesn't really matter how many devices are supported, the only device drivers affecting the size of the kernel are the ones loaded into memory on the machine in question.

    I find Greenblatt's attitude ridiculous, since he seems to be saying that the kernel developers need to focus on what Sam Greenblatt is interested in, and to hell with people who want to do cool and interesting things with Linux that aren't part of CA's business plan.

    I could go on, but that's enough for a first impression.
  • The Big Bloat (Score:5, Insightful)

    by Brandybuck ( 704397 ) on Tuesday April 19, 2005 @09:41PM (#12288867) Homepage Journal
    I think people are missing the real issue in their anger over someone criticising the Holy of Holies. In case you missed it, the issue is that Linux is getting fat and bloated.

    linux-2.6.11 is forty-four megabytes. Gzipped up. I don't want to waste my bandwidth downloading it to see what it is unzipped, but trust me, it's massive. Where does all this bloat come from? Drivers. Drivers are good, but the current kernel paradigm (and Linux isn't alone in this) is that every driver has to be included with the kernel. So we end up with huge packages and huger repositories where everything is required to reside.

    Imagine the size of Linux when we finally get to the goal of having every past and current device with a dedicated driver in the source tree. You're talking possibly ten gigabytes uncompressed. Even if you're not using 99.9% of those drivers, they're still there. The day may come when you can actually build the kernel faster than you can make its dependencies.

    Could you imagine a KDE or GNOME where every core, addon, auxiliary and experimental component was all part of one single tarball? Even if you only wanted GTK+ and GIMP, you still have to download and configure the entirety of the GNOME repository to get it. That's what it's like with the Linux kernel.

    It's time non-core drivers got split off from the main Linux project. If you don't need to add anything to the kernel to get a driver to work, then put it in the driver subproject and don't bug the big guys with this penny-ante crap.
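    Whether a split is worth it comes down to how much of the tree really is drivers, which anyone with a downloaded tree can measure for themselves. A sketch, assuming a tree unpacked as linux-2.6.11 in the current directory (it prints a hint rather than failing when the tree is absent):

    ```shell
    # How much of a kernel source tree is driver code?
    measure() {
      tree=$1
      if [ -d "$tree" ]; then
        du -sk "$tree" "$tree/drivers" "$tree/sound"
      else
        echo "no tree at $tree; unpack a kernel tarball first"
      fi
    }
    measure linux-2.6.11
    ```

    Comparing the drivers/ (and sound/) totals against the whole tree shows how much of the tarball a drivers-only subproject would carve off.
    
    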
