According to Linus, Linux Is "Bloated"

mjasay writes "Linus Torvalds, founder of the Linux kernel, made a somewhat surprising comment at LinuxCon in Portland, Ore., on Monday: 'Linux is bloated.' While the open-source community has long pointed the finger at Microsoft's Windows as bloated, it appears that with success has come added heft, heft that makes Linux 'huge and scary now,' according to Torvalds." TuxRadar provides a small capsule of his remarks as well, as does The Register.
  • by eldavojohn ( 898314 ) * <eldavojohn@gma[ ]com ['il.' in gap]> on Tuesday September 22, 2009 @09:02AM (#29502733) Journal
    I can't believe I'm relying on The Register for this but they have a few more quotes from him [theregister.co.uk]:

    Uh, I'd love to say we have a plan. I mean, sometimes it's a bit sad that we are definitely not the streamlined, small, hyper-efficient kernel that I envisioned 15 years ago...The kernel is huge and bloated, and our icache footprint is scary. I mean, there is no question about that. And whenever we add a new feature, it only gets worse.

    And also:

    He maintains, however, that stability is not a problem. "I think we've been pretty stable," he said. "We are finding the bugs as fast as we're adding them -- even though we're adding more code." Bottomley took this to mean that Torvalds considers the current level of integration acceptable on those terms. But Mr. Linux corrected him. "No. I'm not saying that," Torvalds answered. "Acceptable and avoidable are two different things. It's unacceptable but it's also probably unavoidable."

    I think that's very important to note. His quote by itself is very self-loathing, but adding that it's unavoidable really says a lot. You want to be popular? You have to satisfy more people, and in doing so you become more bloated. He does maintain that Linux remains stable, and that's usually the biggest problem I have with bloat: it decreases stability. I don't think there's any reason to get excited about level-headed rationale and reflection.

  • by rpp3po ( 641313 ) on Tuesday September 22, 2009 @09:05AM (#29502761)
    About two years ago I tested whether my Gentoo kernel was really faster. Disabling 3/4 of the options really just improved boot time and memory footprint, not overall performance, and certainly nothing near 12%. Compared to a modularized kernel with just the needed modules loaded, the difference was negligible. I'm not sure Torvalds is telling the truth about the reasons. To me it seems that the central, overall kernel architecture has degraded over time with regard to performance.
  • Re:Problem (Score:5, Interesting)

    by RiotingPacifist ( 1228016 ) on Tuesday September 22, 2009 @09:15AM (#29502863)

    If only there were somebody at the top deciding what to let in/reject in such a way as to keep the bloat out! While I am a Linux/GPL fanboi, I think the BSD distros don't have this problem because they have much stricter people at the top of their kernels, and I think this is yet another sign that Linus should not be the only one running the show. If Linus isn't producing the kernel desktop users need (it's bloated, has the wrong scheduler, etc.), then distros should step up and work around the problem. Git makes it very easy for them to start elsewhere (their previous release tree, the -mm tree, etc.) and add the patches they require!

    Before you jump at me and say that this will ruin Linux by duplicating work: it will still be (essentially) the same code that goes into the pool; it's just the administration that changes. Producing incompatible distros isn't a problem, as the userspace API is fairly stable, and changes to the ABI for proprietary drivers can be agreed on by the major players (or they can just follow Linus's changes to them, or go crazy and stabilise the ABI so that the proprietary drivers work).

  • by OzPeter ( 195038 ) on Tuesday September 22, 2009 @09:16AM (#29502877)
    You mean like QNX is slow?
  • What to do then? (Score:3, Interesting)

    by werfu ( 1487909 ) on Tuesday September 22, 2009 @09:29AM (#29503023)
    Then let's do what most other open source projects do when they reach that point: analyze the current version, find the good things and the bad things, and find possible improvements that were previously impossible because of breakage and legacy. Once the analysis is complete, start version 3.0 from scratch, implement the new stuff and improvements, then bring the current features in one by one. And don't tell me it can't be done; it has been. And don't tell me it wouldn't be supported: how long did it take before the 2.6 line was adopted by industry and the major distros?
  • by pilsner.urquell ( 734632 ) on Tuesday September 22, 2009 @09:29AM (#29503027)

    Bloated? Of course. It happens in every walk of life. Things start out as lean, mean killing machines out of necessity; otherwise there is no success. Life is tough, and anything short of peak efficiency is a death sentence.

    After achieving success, being fat and lazy is a luxury that is no longer fatal.

    This happens everywhere: in the jungle, in the business world, in your job, and in governments. Evolution.

  • Re:Problem (Score:5, Interesting)

    by WinterSolstice ( 223271 ) on Tuesday September 22, 2009 @09:32AM (#29503071)

    The BSD distros do not have this problem, but it's not just the strict top-down management.

    It's the users.

    Linux is trying to court three major user groups with the exact same kernel, and trying to be all things to all people. The big corporations who make up most of the Linux coding/funding/purchasing want better server performance (more processors, more RAM, etc.). The desktop guys want better desktop, laptop, and netbook experiences (3D graphics, sound cards, processor power scaling). The third group is the end-users, who contribute almost nothing but want the system to be easy and simple.

    BSD however, really only has one user base - and they largely want the same thing. Stability, security, and performance. So all the cute little desktop friendly stuff that Linux keeps adding and all the server-specific stuff that Linux keeps adding aren't there. There's just the one major direction.

    Or at least that's my experience, and I've been using it since 2.x.

  • by Anonymous Coward on Tuesday September 22, 2009 @09:36AM (#29503101)

    I think the GP's concern is not about performance but about maintainability. Being a module doesn't really affect that. When the driver API changes, every driver has to be changed; the more drivers, the more work has to be done. What adds to this problem is that these APIs really do change in Linux.
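
    To make that concrete, here is a self-contained userspace sketch of the churn; the widget_driver API is invented purely for illustration and is not a real kernel interface. When the registration call grows an extra parameter, every driver in the tree needs the same mechanical edit:

        /* api_churn.c -- why an unstable in-kernel API touches every driver.
           All names here are invented for this example. */
        #include <stdio.h>

        struct widget_driver { const char *name; };
        struct module        { const char *name; };

        /* v1 of the hypothetical API, which every driver was written against:
             int register_widget_driver(struct widget_driver *drv);
           v2 adds an 'owner' argument, so the v1 call no longer compiles and
           all N in-tree drivers need the same one-line fix. */
        static int register_widget_driver(struct widget_driver *drv,
                                          struct module *owner)
        {
            printf("registered %s (owner: %s)\n", drv->name, owner->name);
            return 0;
        }

        static struct widget_driver mydrv = { "mydrv" };
        static struct module        self  = { "mydrv" };

        int main(void)
        {
            /* old call: register_widget_driver(&mydrv);  -- no longer builds */
            return register_widget_driver(&mydrv, &self);
        }

    Multiply that one-line fix by a few thousand in-tree drivers per API change, and the maintenance cost is clear.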

  • Re:Translation: (Score:2, Interesting)

    by Anonymous Coward on Tuesday September 22, 2009 @09:41AM (#29503155)

    I think people are somewhat missing the big picture here. This is evolution in action. As time goes on and people need new things they get added. It's hard to say when to get rid of old vestigial features, so it doesn't get done. This leads to bloat. Eventually this will be corrected as the obsolete stuff gets more obviously unused. It's a problem, but unlike the dinosaurs, Linux will adapt due to the extreme ease of digital replication. If it ever gets so bad as to be unsustainable, someone will either use an old version, recompile or fork and get rid of the parts they will never use. You can't really do that with anything that you don't have source to. Of course the typical end user like me will have to wait for somebody to do this, but luckily my computer is fast enough that I haven't really noticed yet.

  • Re:Problem (Score:5, Interesting)

    by jhol13 ( 1087781 ) on Tuesday September 22, 2009 @09:42AM (#29503167)

    Constant change, i.e. the lack of a stable KBI (kernel binary interface), does not help.

    Eventually, keeping your incompatible stack is easier than keeping up to date with the latest and "greatest", especially if you actually test your code.

  • Re:Problem (Score:4, Interesting)

    by Galactic Dominator ( 944134 ) on Tuesday September 22, 2009 @09:43AM (#29503189)

    In FreeBSD, you choose to accept a project. If you fail to perform, you are replaced with another volunteer. It doesn't matter if you're a core committer or a port maintainer; it all works that way. There are occasional problems, but overall it's a successful approach. Many other open source projects do the same. That's why hierarchies work in open source: they hold people accountable, just like in a proprietary project.

  • Re:Problem (Score:5, Interesting)

    by oiron ( 697563 ) on Tuesday September 22, 2009 @09:52AM (#29503283) Homepage

    It gets done because ultimately somebody says "Fuck this, I can't work on this bloated codebase any longer. We're refactoring, guys!"

    Then, if the old lead dev / maintainer / admin doesn't like it, a fork happens...

    Projects where this has happened before: the kernel itself, several times (as well as various subsystems, again several times), X (XFree86 to X.Org), KDE (2 to 3, 3 to 4), Amarok (1.x to 2.x), Sodipodi to Inkscape, Firefox from 2 to 3... These are off the top of my head, of course; there are lots more.

    Of course, there are some cases where this process has failed. I don't think the failure rate is any higher (or lower) than proprietary projects, though...

    The incentives are different, but they exist, nevertheless...

  • Re:Problem (Score:2, Interesting)

    by Anonymous Coward on Tuesday September 22, 2009 @10:01AM (#29503371)

    it's about encouraging code reuse

    That is part of it. However, the real issue is performance: the reported "12% performance decline" is what makes the kernel seem "bloated". Just going in and merging code together willy-nilly will not magically make it faster or less bloated.

    It needs to be measured to see where the bottlenecks are. Attack THOSE areas first. As an old boss of mine liked to put it: "if you didn't measure it before and after, you didn't change a thing, and you don't know if it is better or worse". Measure, and figure out what the real problems are. Is it death by 1000 cuts? Is it some 20-line hot spots? (A minimal timing sketch appears below.)

    Code reuse is admirable, but it only goes so far. It does, however, make it easier to find what is wrong when performance goes to hell, and it has other "helping" benefits too. But blindly merging stuff will not make the problem better; it could even make it worse. I have been keeping an eye on the Bochs project (one of the slower emulators out there). They turned the problem on its head, did the exact opposite, and started breaking things apart. Over the last few releases they have gotten a 30% speed increase. How did they know it worked? They tested and measured. Architecture is not about pretty code (you get that if you do it right); it is about knowing what your platform (and that includes both hardware and compiler) does with an if condition, a for loop, or a cache miss.

    They are letting more and more into the kernel to get more drivers (we wanted that, and we got it). The downside is code bloat, and not properly abstracting the idea. It's not as fun...
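
    In that measure-first spirit, a minimal userspace timing sketch; work() is a stand-in for whatever path you suspect is hot, and a real kernel investigation would use a proper profiler. This just shows the before/after discipline as a number:

        /* measure.c -- time a code path so "better or worse" is a number. */
        #include <stdio.h>
        #include <time.h>

        static double now_sec(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec + ts.tv_nsec / 1e9;
        }

        static volatile unsigned long sink; /* keeps the loop from being optimized away */

        static void work(void) /* stand-in for the suspected hot path */
        {
            unsigned long acc = 0;
            for (unsigned long i = 0; i < 10UL * 1000 * 1000; i++)
                acc += i ^ (i >> 3);
            sink = acc;
        }

        int main(void)
        {
            double t0 = now_sec();
            work();
            double t1 = now_sec();
            printf("work() took %.3f ms\n", (t1 - t0) * 1e3);
            return 0;
        }

    Compile with cc -O2 measure.c (older glibc may need -lrt), run it before and after your change, and you know whether you made anything better.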

  • Another perspective (Score:5, Interesting)

    by sootman ( 158191 ) on Tuesday September 22, 2009 @10:08AM (#29503459) Homepage Journal

    Bloat isn't bad. [joelonsoftware.com]

    Version 5.0 of Microsoft's flagship spreadsheet program Excel came out in 1993. It was positively huge: it required a whole 15 megabytes of hard drive space. In those days we could still remember our first 20MB PC hard drives (around 1985) and so 15MB sure seemed like a lot... In 1993, given the cost of hard drives in those days, Microsoft Excel 5.0 took up about $36 worth of hard drive space. In 2000, given the cost of hard drives in 2000, Microsoft Excel 2000 takes up about $1.03 in hard drive space...

    In fact there are lots of great reasons for bloatware. For one, if programmers don't have to worry about how large their code is, they can ship it sooner. And that means you get more features, and features make your life better (when you use them) and don't usually hurt (when you don't). If your software vendor stops, before shipping, and spends two months squeezing the code down to make it 50% smaller, the net benefit to you is going to be imperceptible. Maybe, just maybe, if you tend to keep your hard drive full, that's one more Duran Duran MP3 you can download. But the loss to you of waiting an extra two months for the new version is perceptible, and the loss to the software company that has to give up two months of sales is even worse.

  • Re:Problem (Score:4, Interesting)

    by amplt1337 ( 707922 ) on Tuesday September 22, 2009 @10:10AM (#29503485) Journal

    I'm not understanding something. If BSD lacks both the "desktop friendly stuff" and the "server-specific stuff"... then what exactly are BSD-based systems doing so stably, securely, and speedily?

  • by Lord Ender ( 156273 ) on Tuesday September 22, 2009 @10:31AM (#29503749) Homepage

    Java is actually damn fast if you keep the JVM running at all times. Even wimpy mobile devices like the Kindle can run Java fine. The Kindle is just Linux + JVM on a puny ARM processor.

  • Re:Simple solution (Score:3, Interesting)

    by RiotingPacifist ( 1228016 ) on Tuesday September 22, 2009 @10:31AM (#29503755)

    Erm, actually it's quite the opposite: Windows XP got security patches for years, while I doubt you'll find a safe 2.6.8 (~2004) kernel around. Even "slow" distros like Debian only backport security fixes for 3 years; after that you have to upgrade, or start maintaining your own kernel.

  • by psyklopz ( 412711 ) on Tuesday September 22, 2009 @10:33AM (#29503781)

    I would like to believe this, but it hasn't been my experience.

    I can tell you this: Vista (!!!) appears to run smoother and with a more responsive UI on my laptop than a default Ubuntu install on the same machine (for example, Flash just crawls when I view it through Firefox in Ubuntu).

    It has been my experience in the past that every time I install linux, it runs slower (or at least appears to run slower) than the windows install on the same machine.

    I'm not trying to troll. Maybe someone could explain this phenomenon to me. I actually *want* to switch, but I can't if the alternative is providing a degraded experience.

  • Re:Problem (Score:5, Interesting)

    by quanticle ( 843097 ) on Tuesday September 22, 2009 @10:52AM (#29504057) Homepage

    Precisely. The grandparent is forgetting that, in the proprietary world, the scenario you described can't happen. I can't go to my boss and tell him, "Screw this, I'm going to spend the next month refactoring our messy code, rather than adding new functionality." However, I can do that in an open-source project.

  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Tuesday September 22, 2009 @10:52AM (#29504059)
    Comment removed based on user account deletion
  • by Baldrson ( 78598 ) * on Tuesday September 22, 2009 @11:06AM (#29504279) Homepage Journal
    Set up a prize competition for kernel compression, similar to the Hutter Prize for Lossless Compression of Human Knowledge [hutter1.net], except the objective is to produce an executable binary of minimum size that expands into a fully functional kernel.

    The goal of this competition would be to obtain the optimal factoring of the kernel architecture.
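
    A toy scoring harness for such a contest, assuming (my assumption, not part of the proposal) that the score is simply the byte size of the submitted self-extracting binary; verifying that it actually expands into a working kernel is the hard part and is not attempted here:

        /* score.c -- smallest submission wins; size is the whole score. */
        #include <stdio.h>
        #include <sys/stat.h>

        int main(int argc, char **argv)
        {
            struct stat st;
            if (argc != 2 || stat(argv[1], &st) != 0) {
                fprintf(stderr, "usage: score <submission-binary>\n");
                return 1;
            }
            printf("%s: %lld bytes (lower is better)\n",
                   argv[1], (long long)st.st_size);
            return 0;
        }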

  • some distros... (Score:2, Interesting)

    by Anonymous Coward on Tuesday September 22, 2009 @11:15AM (#29504421)

    I think this applies to some distros. For example, Ubuntu comes with PulseAudio. It is of little or no use to the average person. The average Joe isn't going to stream audio over his LAN or combine two stereo sound cards into one surround-sound card. PulseAudio creates sound lag, which is why I always remove it when I do an install.

    I like a system with the minimal amount of daemons running that are required for operation.

  • by metamatic ( 202216 ) on Tuesday September 22, 2009 @11:26AM (#29504587) Homepage Journal

    Basically, my thoughts on seeing the headline were "No shit, Sherlock", followed by "I guess Andy Tanenbaum was right, eh Linus?"

    Linus's approach has always been "What the hell, throw it in the kernel". The result is that if you try running Linux on something like a Nokia N800 or N810, where there's only 128MB or 256MB of RAM, it crawls and thrashes even with the swap on flash memory.

    Meanwhile, Tanenbaum's MINIX requires 16MB of RAM [minix3.org]. Good luck getting any kind of Linux to load in that amount of space.

  • by Hatta ( 162192 ) * on Tuesday September 22, 2009 @11:26AM (#29504595) Journal

    The biggest problem with that is USB devices. Who knows what weirdo USB hardware you're going to want to plug into your computer in the next couple years. Using the stock Debian kernel, that's something I really don't need to worry about.

  • by kayoshiii ( 1099149 ) on Tuesday September 22, 2009 @11:29AM (#29504643)
    The problem in this case is that Flash is something of a dog on Linux...
    Also, in many cases the video drivers are not as well optimised.

    Over the years I have found that Windows feels snappier on an unloaded system, whereas Linux generally tends to feel better on a machine under heavy load.
  • Re:Problem (Score:3, Interesting)

    by John Betonschaar ( 178617 ) on Tuesday September 22, 2009 @12:00PM (#29505119)

    How does that work? In a proprietary project if your boss says "do this" you either do it or find another job

    You don't work in software, do you? I've worked at 5 different companies as a software engineer, and in none of those jobs did my boss ever tell me to fix the crappy parts of the software I was assigned to work on. In fact, at none of them did my boss even take the time to look at the code itself. It was always "[we | customer x] need[s] [feature | bugfix] y within z [hours | days | weeks]. Make it happen."

  • by DavidTC ( 10147 ) <slas45dxsvadiv.v ... m ['x.c' in gap]> on Tuesday September 22, 2009 @12:04PM (#29505171) Homepage

    init scripts especially are rather idiotic, and it's a testament to how much crap Windows is doing that Linux distros manage to load in roughly the same time.

    It's especially dumb when things that could start after the system has finished booting, like samba and ssh, instead start first.

    Likewise, driver detection. Um, no, you don't do that on startup, unless it's a first-time boot. You do it while the system is running, which means the very first time someone boots with that fancy new sound card, the startup sound isn't going to come out of it... but the other sounds will. That tiny tradeoff would save 15 seconds every boot.

    And even just crap like cleaning out /tmp and remounting network drives and CD-ROMs and so on. That's background stuff.

    That, right there, is the problem. For some totally unknown reason, Linux distributions have no 'deferred startup' script area. You either get run on startup, and everyone waits for you, or get stuck in cron when you only need to run once.

    Or, hell, some sort of dependency-based system, where you list which services you want up as fast as possible (on most desktops this would be X and GNOME/KDE, but for servers it might be MySQL and Apache, or Postfix and Courier, or whatever) and each service has a list of things it needs. And then you should also be able to list services that should come up after the system has finished all that, in a non-time-critical manner. (A small sketch of this appears after the comment.)

    Redhat tries at this, but fails. Debian doesn't even try.

    I know; I've been working on an Ubuntu XBMC box. I would really like XBMC to start up first, and then have ssh and samba and, hell, the virtual consoles start later. I've about given up on this concept, though.

    It doesn't help that no distinction seems to be made between 'these scripts must execute, at startup, in this order, to create a functioning and mounted system' and scripts that run later that are just services. Yes, there's rcS.d vs. rc2.d (or whatever), but for some reason non-required services sometimes get put in rcS.d, probably because no one's ever bothered to set hard-and-fast rules about what they mean by 'system'. (Hint: a sound daemon is not required to have a working system. Hardware detection is not required to have a working system. Mounting the fucking /dev partition is required.)

    Of course, Linus can't do anything about all this except frown at the distro people.
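
    A minimal sketch of the dependency-based startup idea above: services declare prerequisites, a depth-first walk brings up the critical path first, and everything else can be deferred. The service names and the two-phase split are illustrative only, not any real init system:

        /* depstart.c -- dependency-ordered service startup, in miniature. */
        #include <stdio.h>

        #define NSERV 5

        static const char *name[NSERV] = { "mount", "network", "sshd", "samba", "X" };
        /* deps[i][j] != 0 means service i needs service j up first. */
        static const int deps[NSERV][NSERV] = {
            /* mount   */ {0,0,0,0,0},
            /* network */ {1,0,0,0,0},
            /* sshd    */ {1,1,0,0,0},
            /* samba   */ {1,1,0,0,0},
            /* X       */ {1,0,0,0,0},
        };
        static int started[NSERV];

        static void start(int s)
        {
            if (started[s]) return;
            started[s] = 1;               /* the dep graph here is acyclic */
            for (int j = 0; j < NSERV; j++)
                if (deps[s][j]) start(j); /* prerequisites come up first */
            printf("starting %s\n", name[s]);
        }

        int main(void)
        {
            start(4); /* phase 1: the critical path -- everything X needs */
            /* phase 2: everything else (sshd, samba...) in the background */
            for (int s = 0; s < NSERV; s++) start(s);
            return 0;
        }

    Phase 1 prints only "mount" and "X"; sshd and samba come up afterwards, which is exactly the deferral the parent is asking for.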

  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Tuesday September 22, 2009 @12:09PM (#29505251) Journal

    It's mostly because Linus isn't talking about the "Linux" you're talking about -- that is, a whole Linux distribution, as compared to other OSes.

    He's talking about Linux itself, compared to what he thought it would be.

    Basically, the original plan for Linux was never to be an OS in its own right, but to be just another POSIX kernel, one highly-tuned for the then state-of-the-art 386 chip. Even porting to PowerPC was never part of the plan. The fact that this kernel is so flexible and featureful -- that it has drivers for damned-near everything, that it runs on everything from cell phones to mainframes, from set-top boxes to thousand-machine clusters, from wristwatches to... Yeah, all that portability necessarily makes it bigger than what would strictly be needed for one architecture and a limited set of hardware.

    It's also got to do with things like multiple schedulers, and it explains something of why Linus wanted one scheduler to rule them all -- the idea of pluggable schedulers is ludicrous, compared to the original idea of one kernel per platform, where you wouldn't have a Linux app, you'd have a Posix app that would run on Linux on x86, and on something entirely different on PPC, and yet another kernel on ARM. If it had been done that way, at least in theory, all of those kernels combined should've still been smaller than Linux currently is.
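
    The "POSIX app" idea in miniature: a program written only against POSIX interfaces builds unchanged against any POSIX kernel, which is the per-platform-kernel world the parent describes. A trivial example:

        /* posix_hello.c -- uses only POSIX interfaces, so the same source
           builds against any POSIX kernel, on any architecture. */
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/utsname.h>

        int main(void)
        {
            struct utsname u;
            if (uname(&u) == 0)
                printf("hello from %s on %s\n", u.sysname, u.machine);
            printf("page size: %ld bytes\n", sysconf(_SC_PAGESIZE));
            return 0;
        }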

  • by Animats ( 122034 ) on Tuesday September 22, 2009 @12:12PM (#29505287) Homepage

    Let's take a look at the patch history of QNX. [qnx.com] QNX is a message passing microkernel mostly used for embedded systems. But it can be run with a full GUI, runs on multiprocessors, and can be run as a server. Millions of "headless" embedded systems have QNX inside. I used it in a DARPA Grand Challenge vehicle. BigDog, the legged robot, runs QNX.

    Drivers are outside the kernel. All drivers. File systems are outside the kernel. Networking is outside the kernel. And they're all application programs, not some special kind of loadable kernel module.

    There have been 14 patches to QNX in the last two years. Only one is an actual kernel patch: "This patch contains updates to the PPCBE version of the SMP kernel. You need this patch only for Freescale MPC8641D boards." Only one is security-related: "This patch updates npm-tcpip-v6.so to fix a Denial of Service vulnerability where receipt of a specially crafted network packet forces the io-net network manager to fault (terminate)." Neither Linux nor Windows comes close to that record.

    There's little "churn" in a good microkernel. Since little code is going in, new bugs aren't going in. Good microkernels tend to slowly converge toward a zero-bugs state.

    QNX generally has a "there's only one way to do it" approach, like Python. Linux supports three completely different driver placements: compiled into the kernel, loaded as a kernel module at boot time, and run as a user process. QNX supports only one: run as a user-process "resource manager". That simplifies things. A "one way to do it" approach means that the one best way is thoroughly exercised and tested. There are few seldom-used dark corners in critical code.

    When QNX boots, it brings in an image with the kernel, a built-in process called "proc", any programs built into the boot image, and any shared objects ".so" wanted at boot. These last two run entirely in user space; they're just put in the boot image so they're there at startup. That's how drivers needed at startup get loaded. They don't have to be in the kernel. (In fact, you can put the whole boot image in ROM, and many embedded systems do this.)

    A QNX "resource manager" is a program which has registered to receive messages for a certain portion of pathname space. The QNX kernel has no file systems; part of the initial "proc" process is a little program which keeps an in-memory table of "resource managers" and what part of pathname space they manage. This is similar to "mounting" a driver under Linux, but it doesn't require a file system up during boot. File systems are user programs which start up and ask for some pathname space, after which "open" messages are directed to that file system.

    Another QNX simplification is that the kernel doesn't load programs. "exec" is implemented by a shared library. That library is loaded with the boot image, to allow things to start up. "exec" runs entirely in user space, with no special privileges, so if there's a bug in "exec" vulnerable to a mis-constructed executable, that program load fails and everything else goes on normally.

    The price paid for this is some extra copying, since all I/O is done by message passing. This isn't much of a cost any more, because you're almost always copying from cache to cache. That's an important point. Message passing kernels used to be seen as expensive due to copying cost. But today, copying recently used material is cheap. On the other hand, some early microkernels (Mach comes to mind) worked very hard to mess with the MMU to avoid big copies, moving blocks from one address space to another by changing the MMU. This seems to be a lose on modern CPUs; the cache flushing required when you mess with the address space on recently used data hurts performance.

    I used to pump uncompressed video through QNX message passing using 2% of a Pentium III class CPU. Message passing, done right, is not a major performance problem.
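
    For the curious, this is the shape of a QNX resource manager, modeled on the standard skeleton in QNX Neutrino's documentation (a sketch, untested here; consult the official docs for the real thing). It claims /dev/sample in the pathname space and then sits in the message loop described above:

        /* sample_resmgr.c -- minimal QNX Neutrino resource manager sketch.
           Registers /dev/sample; default iofunc handlers answer open/read/etc. */
        #include <errno.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/stat.h>
        #include <sys/iofunc.h>
        #include <sys/dispatch.h>

        static resmgr_connect_funcs_t connect_funcs;
        static resmgr_io_funcs_t      io_funcs;
        static iofunc_attr_t          attr;

        int main(void)
        {
            /* Channel/dispatch handle that will receive messages. */
            dispatch_t *dpp = dispatch_create();
            if (dpp == NULL) {
                fprintf(stderr, "dispatch_create: %s\n", strerror(errno));
                return EXIT_FAILURE;
            }

            /* Start from the default POSIX-style connect/io handlers. */
            iofunc_func_init(_RESMGR_CONNECT_NFUNCS, &connect_funcs,
                             _RESMGR_IO_NFUNCS, &io_funcs);
            iofunc_attr_init(&attr, S_IFNAM | 0666, NULL, NULL);

            /* Claim a piece of pathname space; opens on /dev/sample land here. */
            resmgr_attr_t resmgr_attr;
            memset(&resmgr_attr, 0, sizeof resmgr_attr);
            if (resmgr_attach(dpp, &resmgr_attr, "/dev/sample", _FTYPE_ANY, 0,
                              &connect_funcs, &io_funcs, &attr) == -1) {
                fprintf(stderr, "resmgr_attach: %s\n", strerror(errno));
                return EXIT_FAILURE;
            }

            /* The message-passing loop: block for a message, dispatch it. */
            dispatch_context_t *ctp = dispatch_context_alloc(dpp);
            for (;;) {
                if ((ctp = dispatch_block(ctp)) == NULL) {
                    fprintf(stderr, "dispatch_block: %s\n", strerror(errno));
                    return EXIT_FAILURE;
                }
                dispatch_handler(ctp);
            }
        }

    Note that this is an ordinary user program: no kernel module, no special linkage, just a process that registered for part of the pathname space.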

  • Re:Problem (Score:3, Interesting)

    by master5o1 ( 1068594 ) on Tuesday September 22, 2009 @07:55PM (#29510251) Homepage
    So really:

    Properly managed (volunteer) open source projects deal with this appropriately; some do not.

    I say "(volunteer)" because I think I recall some open source projects accepting contributions only from certain people, as opposed to anyone.
