Linux

According to Linus, Linux Is "Bloated" 639

Posted by timothy
from the he-was-there-when-it-happened dept.
mjasay writes "Linus Torvalds, founder of the Linux kernel, made a somewhat surprising comment at LinuxCon in Portland, Ore., on Monday: 'Linux is bloated.' While the open-source community has long pointed the finger at Microsoft's Windows as bloated, it appears that with success has come added heft, heft that makes Linux 'huge and scary now,' according to Torvalds." TuxRadar provides a small capsule of his remarks as well, as does The Register.
  • Problem (Score:5, Insightful)

    by sopssa (1498795) * <sopssa@email.com> on Tuesday September 22, 2009 @08:53AM (#29502639) Journal

    "Okay, so the summary of this is that you expect that 12 per cent to be back to where it should be next year, and you expect someone else to come up with a plan to do it," joked Bottomley. "That's open source."

    That is also the problem. Everyone adds pieces and eventually it starts to become a mess. Then someone else should fix it.

    • Re:Problem (Score:5, Insightful)

      by Anonymous Coward on Tuesday September 22, 2009 @09:01AM (#29502721)

      That's all software.

    • That's called technical debt, and it happens in every project: open, proprietary, big, small, one developer or a hundred.

      • Re:Problem (Score:4, Insightful)

        by sopssa (1498795) * <sopssa@email.com> on Tuesday September 22, 2009 @09:08AM (#29502805) Journal

        But when it's open source, it's easier to think "maybe I can't be bothered to look at this now; someone else can do it." When it's proprietary software and you get the assignment to look at it, you pretty much have to do it.

        • Re:Problem (Score:5, Insightful)

          by Galactic Dominator (944134) on Tuesday September 22, 2009 @09:15AM (#29502867)

          Properly managed opensource projects deal with this appropriately, some do not.

          Properly managed proprietary projects deal with this appropriately, some do not.

          • Re: (Score:3, Insightful)

            by Hal_Porter (817932)

            How does that work? In a proprietary project if your boss says "do this" you either do it or find another job. In an open source project you could just flame the hell out of the guy that told you on the public mailing list and carry on working on something else.

            And in a proprietary project if customers want something fixed they can threaten to not pay which in even the most incompetent company will tend to make your boss tell you to fix it. In open source that mechanism does not exist.

            • Re: (Score:3, Informative)

              by MadnessASAP (1052274)

              It's the same as any other volunteer work: you have absolutely no obligation to do the work, but if you don't, then you're not going to be invited back and your work will be refused.

              • Re: (Score:3, Interesting)

                by master5o1 (1068594)
                So really:

                properly managed (volunteer) open source projects deal with this appropriately, some do not.

                I say "(volunteer)" because I think I recall some open source projects having only certain contributors, as opposed to anyone.
            • Re:Problem (Score:4, Interesting)

              by Galactic Dominator (944134) on Tuesday September 22, 2009 @09:43AM (#29503189)

              In FreeBSD, you choose to accept a project. If you fail to perform, you are replaced with another volunteer. It doesn't matter if you're a core committer or a port maintainer; it all works that way. There are occasional problems, but overall it's a successful approach. Many other open-source projects do the same. That's why hierarchies work in open source: they hold people accountable just like in a proprietary project.

            • Re:Problem (Score:5, Interesting)

              by oiron (697563) on Tuesday September 22, 2009 @09:52AM (#29503283) Homepage

              It gets done because ultimately somebody says "Fuck this, I can't work on this bloated codebase any longer. We're refactoring, guys!"

              Then, if the old lead dev / maintainer / admin doesn't like it, a fork happens...

              Projects where this has happened before: The kernel itself, several times (as well as various subsystems, again several times), X (XFree to XOrg), KDE (2-3, 3-4), Amarok (1.x to 2.x), SodiPodi -> Inkscape, Firefox from 2 to 3... These are off the top of my mind, of course - there are lots more.

              Of course, there are some cases where this process has failed. I don't think the failure rate is any higher (or lower) than proprietary projects, though...

              The incentives are different, but they exist, nevertheless...

              • Re:Problem (Score:5, Interesting)

                by quanticle (843097) on Tuesday September 22, 2009 @10:52AM (#29504057) Homepage

                Precisely. The grandparent is forgetting that, in the proprietary world, the scenario you described can't happen. I can't go to my boss and tell him, "Screw this, I'm going to spend the next month refactoring our messy code, rather than adding new functionality." However, I can do that in an open-source project.

            • Re:Problem (Score:4, Funny)

              by amplt1337 (707922) on Tuesday September 22, 2009 @10:07AM (#29503437) Journal

              In a proprietary project if your boss says "do this" you either do it or find another job.

              Or you make excuses, pass the buck, and sponge off your colleagues until the next reorg.

            • Re:Problem (Score:5, Insightful)

              by DrgnDancer (137700) on Tuesday September 22, 2009 @10:47AM (#29503987) Homepage

              The same way people in a raid guild do what they're supposed to in raids even though it's only a game and raid officers can't really do anything to you; or members of Civil Air Patrol follow military customs and courtesies toward their officers despite those officers having no actual UCMJ authority; or people in the SCA listen to the nobles of their "Baronies" despite those people not having any real-world authority. When you join a group or a project, you agree to abide by the rules of the group or project. If you eventually find that you can't, you generally either leave or are forced out. If the project lead on a properly managed project asks you to do some boring grunt work, you either do it or find a new project, and someone else will be asked to do the work.

              If the project is generally fun or personally beneficial for you to work on, you'll do the grunt tasks you're asked to do, because otherwise you'll eventually be off the project. If the project wants to keep its user base (and most do), it'll fix as many problems as it can to keep the users happy.

            • Re: (Score:3, Interesting)

              How does that work? In a proprietary project if your boss says "do this" you either do it or find another job

              You don't work in software, do you? I've worked at five different companies as a software engineer, and in none of those jobs did my boss ever tell me to fix the crappy parts of the software I was assigned to work on. In fact, none of my bosses even took the time to look at the code itself. It was always "[we | customer x] need [feature | bugfix] y within z [hours | weeks | days]. Make it happen."

            • Re: (Score:3, Insightful)

              by Ephemeriis (315124)

              How does that work? In a proprietary project if your boss says "do this" you either do it or find another job.

              Sure... You're given an assignment and you basically have to do it. But somewhere along the line somebody has to decide what is a priority and what isn't. Somebody decides what actually gets done. And it doesn't really matter if it's a proprietary project or not - stuff slips through the cracks.

              You think a company is going to drop everything to refactor some code just because it's getting a little long in the tooth? Even though everything works? You think a company is going to put a whole lot of time

          • Re:Problem (Score:5, Insightful)

            by renoX (11677) on Tuesday September 22, 2009 @09:51AM (#29503271)

            That's false of course:
            1) the deciding factor for project management is the non-commercial/commercial status of a project, not the closed/open state of the source.

            2) for non-commercial projects, both developers 'goodwill' and proper management are needed to avoid bloat; whereas for a commercial project only proper management is needed (as the management decides where the money will go).

            Note that the Linux kernel is a blend of non-commercial and commercial projects as many developers are paid to work on the Linux kernel and many aren't.

        • by Eevee (535658)
          When it's proprietary software, management will be too busy handing out assignments to add new sales fodder, excuse me, features to worry about actually doing anything proactive to improve the code base. Having a slimmed-down code base may be good in the long run, but doesn't do anything towards getting the next bonus.
          • Re:Problem (Score:5, Insightful)

            by bostei2008 (1441027) on Tuesday September 22, 2009 @09:27AM (#29503011)

            I agree.

            The people who hate messes are the developers who have to look at them day by day. Cleaning up code is never something managers care about; it's always driven by developers with a sense for order and simplicity.

            That means that open source software has a higher chance of getting cleaned up than proprietary software, because there you have a higher percentage of truly motivated developers and no managers to bug them. Sigh...

    • Re:Problem (Score:5, Interesting)

      by RiotingPacifist (1228016) on Tuesday September 22, 2009 @09:15AM (#29502863)

      If only there were somebody at the top deciding what to let in/reject in such a way as to keep the bloat out! While I am a Linux/GPL fanboi, I think the BSD distros don't have this problem because they have much stricter people at the top of their kernels, and I think this is yet another sign that Linus should not be the only one running the show. If Linus isn't producing the kernel desktop users need (it's bloated, has the wrong scheduler, etc.), then distros should step up and work around the problem. Git makes it very easy for them to start elsewhere (their previous release tree, the -mm tree, etc.) and add the patches they require!

      Before you jump at me and say that this will ruin Linux by duplicating work: it will still be (essentially) the same code that goes into the pool, it's just the administration that changes. And producing incompatible distros isn't a problem, as the userspace API is fairly stable, and changes to the ABI for proprietary drivers can be agreed on by the major players (or they can just follow Linus's changes to them, or go crazy and stabilise the ABI so that the proprietary drivers work).
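That "start elsewhere and add the patches they require" workflow is cheap in git. A minimal, purely illustrative sketch (the repository and branch names are made up, and a real distro would clone Linus's tree rather than create an empty one):

```shell
# Sketch: a distro carrying its own kernel branch (all names illustrative)
tmp=$(mktemp -d) && cd "$tmp"
git init -q kernel && cd kernel
git config user.email maintainer@example.org && git config user.name maintainer
git commit -q --allow-empty -m "mainline snapshot"   # stand-in for Linus's tree
git checkout -q -b distro-patches                    # the distro's tree starts here
echo "alternative scheduler tweak" > sched.patch
git add sched.patch && git commit -q -m "carry distro-specific patch"
git log --oneline                                    # mainline base + distro patch on top
```

The distro tree is then just the mainline base plus its own commits, which can be rebased onto a newer mainline snapshot whenever the maintainers choose.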

      • Re:Problem (Score:5, Informative)

        by TheRaven64 (641858) on Tuesday September 22, 2009 @09:25AM (#29502991) Journal
        Keeping the bloat out is not just about rejecting patches, it's about encouraging code reuse. In the BSD kernels, for example, the WiFi drivers are very small and all use the same code for everything that is not hardware-specific. I believe this is the case in Linux now, but for a while Intel had their own (almost) complete WiFi stack for their drivers and no one else used any of that code. This is a pretty endemic problem in Linux. It gets even worse when you stray a little way from x86, and find that everyone is implementing their own, incompatible, code for platform-specific features without realising that a lot of it ought to be shared everywhere above the very lowest layer.
        • Re:Problem (Score:5, Interesting)

          by jhol13 (1087781) on Tuesday September 22, 2009 @09:42AM (#29503167)

          Constant changes, i.e. lack of stable KBI (kernel binary interface) does not help.

          Eventually keeping your incompatible stack is easier than keeping up-to-date with latest and "greatest", especially if you happen to test your code.

      • Re:Problem (Score:5, Interesting)

        by WinterSolstice (223271) on Tuesday September 22, 2009 @09:32AM (#29503071)

        The BSD distros do not have this problem, but it's not just the strict top-down management.

        It's the users.

        Linux is trying to court three major user groups with the exact same kernel, and trying to be all things to all people. The big corporations who make up most of the Linux coding/funding/purchasing want better server performance (more processors, more RAM, etc). The desktop guys want better desktop, laptop, and netbook experiences (3D graphics, sound cards, processor power scaling). The third are the end-users who contribute almost nothing but want the system to be easy and simple.

        BSD however, really only has one user base - and they largely want the same thing. Stability, security, and performance. So all the cute little desktop friendly stuff that Linux keeps adding and all the server-specific stuff that Linux keeps adding aren't there. There's just the one major direction.

        Or at least that's my experience, and I've been using it since 2.x.

    • Simple solution (Score:2, Insightful)

      by BhaKi (1316335)

      That is also the problem. Everyone adds pieces and eventually it starts to become a mess. Then someone else should fix it.

      Or we can just use an old version. Unlike the case with proprietary software, we are not being forced to upgrade to a "bloated mess".

      • Re:Simple solution (Score:4, Insightful)

        by coryking (104614) * on Tuesday September 22, 2009 @09:47AM (#29503229) Homepage Journal

        Clearly whoever modded you up has never tried what you are suggesting. I can only name a handful of open source projects that backport security fixes to old versions, and of those, they only backport to versions a few years old.

        In fact, I'd say the longest-lived "old version" is probably Apache 1.3. The 2.x series has been out for, what, forever, and yet they continue to push out fixes for 1.3 (the last was in Jan. 2008).

        I'd wager the biggest complaint I have with most open source is that they a) don't understand what true stability means and, as a result, b) rarely support old versions. It was one of the prime reasons I switched to FreeBSD. If I install FreeBSD 6.2 today, I know I'll get security fixes for at least a good half decade, and probably a bit more if I track the 6.x series.

        Yeah yeah yeah, Debian, yeah yeah... but don't get me started on the other reasons I switched (cough, crappy docs, cough, crappy unstable kernel, cough

      • Re: (Score:3, Interesting)

        Erm, actually it's quite the opposite: Windows XP got security patches for years; I doubt you'll find a safe 2.6.8 (~2004) kernel about. Even "slow" distros like Debian only backport security fixes for 3 years; after that you have to upgrade, or start maintaining your own kernel.

  • I've met the enemy (Score:2, Insightful)

    by Zarf (5735)

    I've met the enemy and they is us.

    • by ByOhTek (1181381)

      I see where you are coming from, but I'll offer that bloat isn't necessarily *bad*. Personally, I've thought of Linux as somewhat to rather bloated for 5 or 6 years.

      It just means there are a lot of available features. Many of which people need.

      Bloat isn't a problem. In software, it's in a lot of places because that's what you need in many (but not all) cases that target a wide audience. The problems come in two flavors: 1) the inability for an individual to turn off the bits he or she doesn't need, and 2) lack

      • by Lumpy (12016) on Tuesday September 22, 2009 @10:13AM (#29503527) Homepage

        Problem is, the "bloat" is in the code only, not in the running kernel.

        I can easily compile a linux kernel that runs in very little space on a super slow processor and it screams.

        Problem is, the "bloat" that Linus is talking about is simply plain old kludgy coding done to get things out the door faster. Adding features needs to stop, and all kernel coders need to work on cleaning things up. It's the sucky part of the job that nobody wants to do, but it needs to be done. I've seen the insides of some kernel modules that will make your toes curl in fear; they are early prototypes, pre-alphas at best.
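For instance, trimming a 2.6-era .config strips whole subsystems out of the build. An illustrative excerpt (the option names are real Kconfig symbols, but which ones you can safely drop depends entirely on the target hardware):

```
# Illustrative .config excerpt for a small, built-in-only kernel
CONFIG_EMBEDDED=y                # expose size-sensitive options in menuconfig
CONFIG_CC_OPTIMIZE_FOR_SIZE=y    # compile with -Os instead of -O2
# CONFIG_MODULES is not set      # no loadable modules; everything built in
# CONFIG_SOUND is not set        # drop the audio stack
# CONFIG_USB_SUPPORT is not set  # drop USB entirely
```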

      • by sumdumass (711423) on Tuesday September 22, 2009 @10:20AM (#29503615) Journal

        Bloat isn't a problem

        Until it causes system instability, slow performance, or increases the size of the code without adding any new features or fixing a problem. Bloat can become a problem, but it doesn't have to be. I thought I would just point that difference out because "isn't" seems to be an absolute which it shouldn't be.

  • Often the term "bloated" is misused, meaning the speaker has reached a point where he or she personally starts to find a technology confusing to wade through. Different people perceive different "bloat" points, so it's often relative. When it comes down to it, bloat is just software. As long as the pieces load and run efficiently enough that the end-user, sysadmin, etc. is happy, bloat is often a moot point, and each person only needs to understand their own role and related facets of the software. We work as a
    • by natehoy (1608657) on Tuesday September 22, 2009 @09:13AM (#29502853) Journal

      Torvalds' use of the term "Bloated" in this case refers specifically to a loss of performance and an increase in size and memory usage, not of confusion.

      I think there are two (competing) goals for the Linux kernel as a whole (well, there are as many goals as there are developers, of course, so the two competing goals are more of a continuum).

      On one side, there is a desire for the Linux kernel to support more features so distros can be built to be more like popular mainstream operating systems like Windows and Mac. Ease-of-use, a pleasant user experience, separation/insulation from the dreaded Command Line, pretty graphics, massive hardware support, and support for more "oddball" configurations like multiple screens, etc. So it's desirable to have lots of driver support and lots of hooks into the operating system to support fancy stuff.

      On the other, there is a desire for Linux to be small, sleek, and fast, particularly for embedded projects.

      The former has been running the show for a while, and I think that's healthy and positive, but the kernel has gotten larger and slower at its basic job. For desktop users, this is good news since a lot of things that had to be done at "higher" levels can now be accomplished directly in the kernel, so they might actually have a faster user experience, and they've got resources to burn since most PCs are specced out for Windows, so Linux has a lot of spare growing room in that hardware.

      But for embedded/minimalist supporters, it means they need to add more hardware to their machines to support the now-larger kernel, chock full of features they'll never need or want.

      • Re: (Score:3, Insightful)

        by mcgrew (92797) *

        On one side, there is a desire for the Linux kernel to support more features so distros can be built to be more like popular mainstream operating systems like Windows and Mac. Ease-of-use, a pleasant user experience, separation/insulation from the dreaded Command Line, pretty graphics, massive hardware support, and support for more "oddball" configurations like multiple screens, etc

        I risk sounding like Stallman here, but in this case the distinction actually matters. We're discussing the kernel, not the OS.

    • Re: (Score:3, Insightful)

      by nomadic (141991)
      Often the term bloated is misused meaning the speaker is at a point where he/she personally starts to find a technology confusing to wade through.

      Linux today does not boot significantly faster than it did 15 years ago. That's bloat.
      • Re: (Score:3, Interesting)

        by MikeURL (890801)
        15 years ago you'd install linux and get a CLI, right? So you'd have a little blinking underline and that's it.

        Today you boot, with most distros, into a fully functional GUI with support for 100s of devices.

        You generally can't have both "unbloated" and "desktop ready" at the same time. About the only way to do that would be for the linux devs to first insist on a CLI and to also design the hardware from the transistors all the way up to the DLLs. A lot like Apple IF Apple booted to a CLI. Then you'
      • Re: (Score:3, Insightful)

        What? No, that's not the kernel. That is:

        > The BIOS - take a look at the LinuxBIOS or OpenBIOS work to see where that can be improved. But oh, my dear goodness, it can be improved.
        > Incredible masses of new hardware that do need detection and configuration at boot time. That's been a sore point: it takes time to scan for all that hardware, and you can optimize it by leaving out tools, but people do like having their network cards and USB drives and graphics tablets work automatically at boot time. Tha

        • Re: (Score:3, Interesting)

          by DavidTC (10147)

          init scripts especially are rather idiotic, and it's a testament to how much crap Windows is doing that Linux distros manage to load in roughly the same time.

          It's especially dumb when things that could start after the system has finished booting, like samba and ssh, instead start first.

          Likewise, driver detection. Um, no, you don't do that on startup, unless it's a first-time boot. You do that when the system is running, which means the very first time someone boots with that fancy new sound card the start

          • Re: (Score:3, Informative)

            I'm afraid that hardware detection may well be required, because critical services (such as NFS exports or MySQL) which rely on mounted partitions in most large-scale environments must have those directories already mounted before running 'exportfs' or before starting the relevant services, or they can create incredible chaos. And the flushing of /tmp/ is tricky: it's much safer to do at a well-defined init step, before the other services are running, and not potentially scrub weird components out from unde

  • by siddesu (698447) on Tuesday September 22, 2009 @08:59AM (#29502697)
    So Tanenbaum is finally having the last laugh? /dnrtfa
    • Re: (Score:3, Interesting)

      by metamatic (202216)

      Basically, my thoughts on seeing the headline were "No shit, Sherlock", followed by "I guess Andy Tanenbaum was right, eh Linus?"

      Linus's approach has always been "What the hell, throw it in the kernel". The result is that if you try running Linux on something like a Nokia N800 or N810, where there's only 128MB or 256MB of RAM, it crawls and thrashes even with the swap on flash memory.

      Meanwhile, Tanenbaum's MINIX requires 16MB of RAM [minix3.org]. Good luck getting any kind of Linux to load in that amount of space.

  • by dingen (958134) on Tuesday September 22, 2009 @09:02AM (#29502725)

    Of course nobody refers to Windows' kernel when people call it bloatware. Linus however is not talking about Linux as a distro or an operating system, it's just the kernel that's too bloated in his view. And with over 11 million lines of code, it's hardly even a flame.

    Now if only he had developed a microkernel instead...

    • by Viol8 (599362) on Tuesday September 22, 2009 @09:03AM (#29502747)

      "Now if only he had developed a microkernel instead..."

      It would be bloated AND slow.

      But hey, it would look pretty in a high level UML diagram.

  • by jarocho (1617799) on Tuesday September 22, 2009 @09:02AM (#29502729)
    However, Minix continues to maintain its girlish figure.
  • by eldavojohn (898314) * <[moc.liamg] [ta] [nhojovadle]> on Tuesday September 22, 2009 @09:02AM (#29502733) Journal
    I can't believe I'm relying on The Register for this but they have a few more quotes from him [theregister.co.uk]:

    Uh, I'd love to say we have a plan. I mean, sometimes it's a bit sad that we are definitely not the streamlined, small, hyper-efficient kernel that I envisioned 15 years ago...The kernel is huge and bloated, and our icache footprint is scary. I mean, there is no question about that. And whenever we add a new feature, it only gets worse.

    And also:

    He maintains, however, that stability is not a problem. "I think we've been pretty stable," he said. "We are finding the bugs as fast as we're adding them -- even though we're adding more code." Bottomley took this to mean that Torvalds considers the current level of integration acceptable under those terms. But Mr. Linux corrected him. "No. I'm not saying that," Torvalds answered. "Acceptable and avoidable are two different things. It's unacceptable but it's also probably unavoidable."

    I think that's very important to note. His quote by itself is very self-loathing, but to add that it's unavoidable really says a lot. You want to be popular? You have to satisfy more people, and in doing so you become more bloated. He does maintain that Linux remains stable, and that's usually the biggest problem I have with bloat: it decreases stability. I don't think there's any reason to get excited about level-headed rationale and reflection.

  • by Dystopian Rebel (714995) * on Tuesday September 22, 2009 @09:03AM (#29502739) Journal

    What "bloat" in software means to LT as the high priest of the kernel and what bloat means to me as a user are two different things.

    To a user, bloat means awkward, slow, inefficient, and needlessly large (if my storage space or bandwidth is limited). But these are all *perceived*. I don't perceive Linux to be bloated.

    In fact, I find *NIX with almost any window manager to be the most efficient computer OS I have ever used. Linux is the best of them, despite being a clone of the UNIX userland.

    If an OS can boot from a floppy or small USB key and be totally usable, it is certainly not bloatware. Rewrite the Linux userland in MONO or Java and then we'll talk about bloat.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Funny, I find Open Office to be bloated compared to MS Office.
      KDE/Gnome to be bloated compared to XP.

      That's why I use the best tools for me: MS Office and XP (in that order)

      It's not perfect, far from it, but works the best for me.
      KDE, Gnome, and OO just feel like molasses every time I try, and don't misunderstand:
      I've spent years under KDE, but given up on it every time after spending ungodly hours fixing what should work out of the box.
      OO has an awful UI. I can't use it. Feels like a program from the early 90's which you

    • by Lord Ender (156273) on Tuesday September 22, 2009 @10:31AM (#29503749) Homepage

      Java is actually damn fast if you keep the JVM running at all times. Even wimpy mobile devices like the Kindle can run Java fine. The Kindle is just Linux + JVM on a puny ARM processor.

    • by SanityInAnarchy (655584) <ninja@slaphack.com> on Tuesday September 22, 2009 @12:09PM (#29505251) Journal

      It's mostly because Linus isn't talking about the "Linux" you're talking about -- that is, a whole Linux distribution, as compared to other OSes.

      He's talking about Linux itself, compared to what he thought it would be.

      Basically, the original plan for Linux was never to be an OS in its own right, but to be just another POSIX kernel, one highly-tuned for the then state-of-the-art 386 chip. Even porting to PowerPC was never part of the plan. The fact that this kernel is so flexible and featureful -- that it has drivers for damned-near everything, that it runs on everything from cell phones to mainframes, from set-top boxes to thousand-machine clusters, from wristwatches to... Yeah, all that portability necessarily makes it bigger than what would strictly be needed for one architecture and a limited set of hardware.

      It's also got to do with things like multiple schedulers, and it explains something of why Linus wanted one scheduler to rule them all -- the idea of pluggable schedulers is ludicrous, compared to the original idea of one kernel per platform, where you wouldn't have a Linux app, you'd have a POSIX app that would run on Linux on x86, and on something entirely different on PPC, and yet another kernel on ARM. If it had been done that way, at least in theory, all of those kernels combined should've still been smaller than Linux currently is.

  • by rpp3po (641313) on Tuesday September 22, 2009 @09:05AM (#29502761)
    About two years ago I tested whether my Gentoo kernel was really faster. Disabling 3/4 of the options really just improved boot time and memory footprint, but not overall performance that much (certainly far from 12%). Compared to a modularized kernel with just the needed stuff loaded, the difference was negligible. I'm not sure Torvalds is telling the truth about the reasons. To me it seems that the central, overall kernel architecture has degraded over time with regard to performance.
    • Re: (Score:3, Insightful)

      by OzPeter (195038)
      I always thought that building drivers into the kernel was going to be Linux's downfall. There is an unending supply of equipment that requires drivers, and they can't all go into the kernel without some repercussions. It's also a black hole that continually sucks stuff in and never deletes it. This design may work well for a small system with limited hardware but is doomed to fail at some point when trying to scale up for the real world.
  • by Chrisq (894406)
    I guess that we all need to decide. Do we want to run an OS that supports all sorts of peripherals, has libraries for applications developed in many languages and has many additions that are useful for a particular set of users? Or do we want an architecturally neat, clean, and lean OS. If we want the former we go with Linux or Windows. If you want the latter then Minix 3 is pretty neat.
  • Bloat was always inevitable; if anything, it shows Linux is fostering a vibrant development community. The thing that separates us from the MS bloat is that we can do something about ours quickly and easily. Not all kernel hackers are master coders, so I'd speculate there is quite a bit of shoddy code (no offense) that can be streamlined by new members, or improved by the originals.
  • It has been a long time since I needed to compile my own kernel and modules, but I can't imagine things have changed that much over the years. Seems to me that when compiling the kernel, you can select out a LOT of hardware support and other options that aren't necessary for that particular installation. It would surprise me to find that the kernel still fits on a floppy disk though.

    • Re: (Score:3, Informative)

      by delt0r (999393)
      I still compile the kernel from time to time. It's not that different, and the core kernel compiles quickly. But the modules take ages if everything is enabled. Generally you can disable more than 70% on any given system, and then compile time is much faster. With make -j2 on a dual core, I wait less time with Slackware 13.0 than I did with Slackware 1.? on a 486. (Can't remember the kernel numbers.)
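The make -j speedup comes from independent targets building concurrently, which is why the module build (thousands of independent objects) benefits most. A toy stand-in, not an actual kernel build (file and target names invented for illustration):

```shell
# Toy demonstration of make -j: two independent targets can run in parallel
tmp=$(mktemp -d) && cd "$tmp"
printf 'all: a.o b.o\na.o:\n\ttouch a.o\nb.o:\n\ttouch b.o\n' > Makefile
make -j2      # with -j2, a.o and b.o are handled by two jobs at once
ls a.o b.o    # both targets exist after the build
```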
  • Pick two (Score:5, Insightful)

    by justthinkit (954982) <floyd@just-think-it.com> on Tuesday September 22, 2009 @09:17AM (#29502895) Homepage Journal
    (1) Large feature set
    (2) Compact/optimized
    (3) Fast to market

    Pick any two...
    • Re: (Score:3, Insightful)

      by adrianwn (1262452)

      (1) Large feature set (2) Compact/optimized (3) Fast to market Pick any two...

      ...and you'll get one. If you're lucky. And it won't be (3). And sometimes it won't be one of the two that you picked.

  • obvious (Score:5, Insightful)

    by walshy007 (906710) on Tuesday September 22, 2009 @09:17AM (#29502901)
    More hardware support and more functional tasks, plus scope creep, mean a larger code base. Nothing to see here, move along.
  • by hey! (33014) on Tuesday September 22, 2009 @09:25AM (#29502975) Homepage Journal

    This is like the salesman's nightmare, where you take the guy from engineering to visit the customer. Things are going great, the engineer can answer all the customer's questions.

    Then you realize, *the stupid bastard is answering the questions honestly*.

    Honesty is a basic requirement to be a halfway decent engineer. Persistent and incurable dissatisfaction with how you did the last job is another. Even if you *know* you did a great job, deep inside part of you knows you could have done it *better*.

  • What to do then? (Score:3, Interesting)

    by werfu (1487909) on Tuesday September 22, 2009 @09:29AM (#29503023)
    Then let's do what most other open source projects do when they reach that point: analyze the current version, find the good things and the bad things, find possible improvements that were impossible because of breakage and legacy. Once the analysis process is complete, start version 3.0 from scratch, implement the new stuff and improvements, then bring current features in one by one. And don't tell me it can't be done; it has been. And don't tell me it wouldn't be supported: how long did it take before the 2.6 line was adopted by industry and the major distros?
  • by pilsner.urquell (734632) on Tuesday September 22, 2009 @09:29AM (#29503027)

    Bloated? Of course. It happens in every walk of life. Things start out as lean and mean killing machines out of necessity; otherwise there is no success. Life is tough, and to be anything other than at the top of efficiency is a death sentence.

    After achieving success then being fat and lazy is a luxury that is no longer fatal.

    This happens everywhere: in the jungle, in the business world, at your job, and in governments. Evolution.

  • Another perspective (Score:5, Interesting)

    by sootman (158191) on Tuesday September 22, 2009 @10:08AM (#29503459) Homepage Journal

    Bloat isn't bad. [joelonsoftware.com]

    Version 5.0 of Microsoft's flagship spreadsheet program Excel came out in 1993. It was positively huge: it required a whole 15 megabytes of hard drive space. In those days we could still remember our first 20MB PC hard drives (around 1985) and so 15MB sure seemed like a lot... In 1993, given the cost of hard drives in those days, Microsoft Excel 5.0 took up about $36 worth of hard drive space. In 2000, given the cost of hard drives in 2000, Microsoft Excel 2000 takes up about $1.03 in hard drive space...

    In fact there are lots of great reasons for bloatware. For one, if programmers don't have to worry about how large their code is, they can ship it sooner. And that means you get more features, and features make your life better (when you use them) and don't usually hurt (when you don't). If your software vendor stops, before shipping, and spends two months squeezing the code down to make it 50% smaller, the net benefit to you is going to be imperceptible. Maybe, just maybe, if you tend to keep your hard drive full, that's one more Duran Duran MP3 you can download. But the loss to you of waiting an extra two months for the new version is perceptible, and the loss to the software company that has to give up two months of sales is even worse.

  • Microkernel (Score:5, Insightful)

    by bluefoxlucid (723572) on Tuesday September 22, 2009 @10:58AM (#29504153) Journal
    Next year he's going to claim that Minix was doing it right all along. We've seen a lot of Linusisms to that effect... $X needs to be outside the kernel... $Y shouldn't happen the way I've been screaming for years... I told $Z to fuck off because he's stupid but he was right and we need to go do that yesterday ... it's just how Linus is. He's an opinionated fat bastard, and then one day he realizes he's fucking wrong and just goes, "SHIT! Well let's do that then >:O"
  • by Baldrson (78598) * on Tuesday September 22, 2009 @11:06AM (#29504279) Homepage Journal
    Set up a prize competition for kernel compression similar to the Hutter Prize for Lossless Compression of Human Knowledge [hutter1.net], except the objective is to produce an executable binary of minimum size that expands into a fully functional kernel.

    The goal of this competition would be to obtain the optimal factoring of the kernel architecture.

  • Funny! (Score:3, Insightful)

    by hesaigo999ca (786966) on Tuesday September 22, 2009 @11:36AM (#29504765) Homepage Journal

    He should start a separate distro and call it leanux... not that we couldn't make do without yet another distro out there.

  • Poor Journalism (Score:3, Insightful)

    by DaMattster (977781) on Tuesday September 22, 2009 @11:40AM (#29504817)
    I did RTFA, and I must say the article was poorly written - so much so that the author felt he needed to publish a correction that summarily states (what open source power users already know) that the Linux kernel can be "trimmed or fattened up." It is immaterial that Linux has gotten more bloated, as the fundamental difference between it and Windows is that you as the consumer have the choice to "trim the fat." While I am an open source user, I am pragmatic, and I believe it cannot be all things to all people and Windows has some advantages over Linux. For example, the choices of Linux can be downright bewildering, and each distribution behaves differently with its own quirks. Windows is Windows. Even though distributions share a common kernel, they are really distinct OSes in their own right - applications run differently and have different behaviors. As Samba will tell you, sometimes compiling succeeds on three out of four large distros. In theory, they should all be compatible.
  • by Animats (122034) on Tuesday September 22, 2009 @12:12PM (#29505287) Homepage

    Let's take a look at the patch history of QNX. [qnx.com] QNX is a message passing microkernel mostly used for embedded systems. But it can be run with a full GUI, runs on multiprocessors, and can be run as a server. Millions of "headless" embedded systems have QNX inside. I used it in a DARPA Grand Challenge vehicle. BigDog, the legged robot, runs QNX.

    Drivers are outside the kernel. All drivers. File systems are outside the kernel. Networking is outside the kernel. And they're all application programs, not some special kind of loadable kernel module.

    There have been 14 patches to QNX in the last two years. Only one is an actual kernel patch: "This patch contains updates to the PPCBE version of the SMP kernel. You need this patch only for Freescale MPC8641D boards." Only one is security-related: "This patch updates npm-tcpip-v6.so to fix a Denial of Service vulnerability where receipt of a specially crafted network packet forces the io-net network manager to fault (terminate)." Neither Linux nor Windows comes close to that record.

    There's little "churn" in a good microkernel. Since little code is going in, new bugs aren't going in. Good microkernels tend to slowly converge toward a zero-bugs state.

    QNX generally has a "there's only one way to do it" approach, like Python. Linux supports three completely different driver placements - compiled into the kernel, loadable as a kernel module at boot time, and run as a user process. QNX supports only one - run as a user process "resource manager". That simplifies things. A "one way to do it" approach means that the one best way is thoroughly exercised and tested. There are few seldom-used dark corners in critical code.
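For reference, here's a small sketch of where those three Linux placements show up on a running system. The paths and tools named are the standard mainline ones; the example user-space drivers are just illustrative.

```shell
# The three Linux driver placements, as you'd meet them on a typical box.
builtin_list="/lib/modules/$(uname -r)/modules.builtin"  # drivers compiled in
module_tool="modprobe"                                   # loads/unloads .ko modules
userspace_example="a FUSE filesystem or libusb program"  # driver as a plain process
echo "1) built in:   listed in $builtin_list"
echo "2) module:     managed with $module_tool / lsmod"
echo "3) user space: e.g. $userspace_example"
```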

    When QNX boots, it brings in an image with the kernel, a built-in process called "proc", any programs built into the boot image, and any shared objects ".so" wanted at boot. These last two run entirely in user space; they're just put in the boot image so they're there at startup. That's how drivers needed at startup get loaded. They don't have to be in the kernel. (In fact, you can put the whole boot image in ROM, and many embedded systems do this.)

    A QNX "resource manager" is a program which has registered to receive messages for a certain portion of pathname space. The QNX kernel has no file systems; part of the initial "proc" process is a little program which keeps an in-memory table of "resource managers" and what part of pathname space they manage. This is similar to "mounting" a driver under Linux, but it doesn't require a file system up during boot. File systems are user programs which start up and ask for some pathname space, after which "open" messages are directed to that file system.

    Another QNX simplification is that the kernel doesn't load programs. "exec" is implemented by a shared library. That library is loaded with the boot image, to allow things to start up. "exec" runs entirely in user space, with no special privileges, so if there's a bug in "exec" vulnerable to a mis-constructed executable, that program load fails and everything else goes on normally.

    The price paid for this is some extra copying, since all I/O is done by message passing. This isn't much of a cost any more, because you're almost always copying from cache to cache. That's an important point. Message passing kernels used to be seen as expensive due to copying cost. But today, copying recently used material is cheap. On the other hand, some early microkernels (Mach comes to mind) worked very hard to mess with the MMU to avoid big copies, moving blocks from one address space to another by changing the MMU. This seems to be a loss on modern CPUs; the cache flushing required when you mess with the address space on recently used data hurts performance.

    I used to pump uncompressed video through QNX message passing using 2% of a Pentium III class CPU. Message passing, done right, is not a major performance problem.
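A crude way to see how cheap kernel-mediated copying has become on ordinary hardware: push 64 MiB through a plain POSIX pipe, which involves the same writer-to-kernel-to-reader copies as message passing. The byte count is exact; the time taken (wrap the pipeline in `time` to see it) varies by machine but is typically negligible.

```shell
# Copy 64 MiB of zeros through a kernel pipe: dd writes into the pipe,
# wc reads out the other end, so every byte is copied via kernel buffers.
bytes=$(dd if=/dev/zero bs=64k count=1024 2>/dev/null | wc -c)
echo "copied $bytes bytes through the pipe"
```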
