
Is Ubuntu Getting Slower? 544

An anonymous reader writes "Phoronix has a new article benchmarking Ubuntu 7.04, 7.10, 8.04, and 8.10 across many tests. Using an Intel notebook, they witnessed major slowdowns in several areas and ask the question: is Ubuntu getting slower? From the article: 'A number of significant kernel changes went in between these Ubuntu Linux releases, including the Completely Fair Scheduler, the SLUB allocator, tickless kernel support, etc. We also repeated many of these tests to confirm we were not experiencing a performance fluke or other issue (even though the Phoronix Test Suite carries out each test in a completely automated and repeatable fashion), but nothing had changed. Ubuntu 7.04 was certainly the Feisty Fawn for performance, but based upon these results perhaps it would be better to call Ubuntu 7.10 the Gooey Gibbon, 8.04 the Hungover Heron, and 8.10 the Idling Ibex.'"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Monday October 27, 2008 @08:37AM (#25525459)

    Let's see how that statement works in this situation:

    It shouldn't be getting slower, but then again, performance isn't the reason Vista exists.

    If you really want performance, run FreeDOS. Otherwise, shut up and get used to progress.

  • Re:What hardware? (Score:1, Interesting)

    by Spazztastic ( 814296 ) <spazztastic@gmEE ... inus threevowels> on Monday October 27, 2008 @08:43AM (#25525507)
    Mod parent up. This article is flamebait.

    For once I agree with a Gentoo/Slackware zealot who posted "This is what happens when you put out pre-compiled kernels, it makes it too easy for stupid people." And this is how questions like this arise. If you want performance, compile your own kernel with only your optimizations, then come back to us.
  • by Anonymous Coward on Monday October 27, 2008 @08:44AM (#25525517)

    Complete and utter crap.

    Most likely the performance decrease has to do with some unoptimized kernel feature. That kernel feature should be identified and optimized. Linux actually has a nice history of maintaining the status quo or getting faster between releases - at least when you track it over, say, 10 releases.

    Quite frankly, I don't understand this Windows-user mentality of just accepting the state of things.

    I would love to see Phoronix do a retest with some of the major patchsets removed and see if they can find the one or ones that cause performance decreases.

  • by Peter Desnoyers ( 11115 ) on Monday October 27, 2008 @08:52AM (#25525597) Homepage

    If you look closely you'll notice that (a) the benchmarks were run on a Thinkpad T60 laptop, and (b) there were significant differences on some benchmarks, like RAM bandwidth, that should have little or no OS component.

    This sounds to me like the power management was dialing down the CPU on the later releases...
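    A quick way to check that theory is to look at the cpufreq state while a benchmark runs. A minimal sketch, assuming the standard Linux sysfs cpufreq interface (paths may differ per machine):

```python
# Minimal sketch: read the cpufreq governor and clock limits for cpu0 to see
# whether power management is holding the CPU at a low frequency.
# Assumes the standard sysfs cpufreq interface; not all kernels expose it.
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(name: str) -> str:
    try:
        return (CPUFREQ / name).read_text().strip()
    except OSError:
        return "unavailable"

if __name__ == "__main__":
    for entry in ("scaling_governor", "scaling_cur_freq",
                  "scaling_min_freq", "scaling_max_freq"):
        print(f"{entry}: {read(entry)}")
```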

  • by pipatron ( 966506 ) <pipatron@gmail.com> on Monday October 27, 2008 @08:52AM (#25525603) Homepage
    It's more likely due to Ubuntu/GNOME moving all their apps to run on Mono. I doubt the kernel has anything to do with this.
  • by mlwmohawk ( 801821 ) on Monday October 27, 2008 @08:54AM (#25525617)

    Some of the benchmarks were hardware testing, and those showed variation. They should not, unless the compiler changed the algorithms used to compile the code between distros.

    Benchmarking a multi-tasking system like Linux is a tough thing to quantify. The Linux kernel recently had a big scheduler change, and this alone could account for shifting benchmark numbers. It may not actually be "slowing down," but running multiple programs more evenly. The effective work is the same or better, which would mean "faster," but an almost useless benchmark would look slower.
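    A toy illustration of that distinction (this is not the Phoronix workload, just a hypothetical CPU-bound job): run it alone and then alongside competitors, and compare per-job wall-clock time against aggregate throughput.

```python
# Minimal sketch: a fair scheduler can make each task's wall-clock time look
# worse under load even when total throughput (jobs per second) holds steady.
import multiprocessing as mp
import time

def burn(n: int = 2_000_000) -> int:
    """Stand-in CPU-bound workload."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_jobs(count: int) -> float:
    """Run `count` copies of the workload in parallel; return elapsed seconds."""
    start = time.perf_counter()
    procs = [mp.Process(target=burn) for _ in range(count)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    for count in (1, 4):
        elapsed = run_jobs(count)
        print(f"{count} job(s): {elapsed:.2f}s wall, {count / elapsed:.2f} jobs/s")
```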

  • by paultag ( 1284116 ) on Monday October 27, 2008 @08:58AM (#25525673) Homepage
    I may be a bit biased here. Full disclosure: I am an Ubuntu Member, User, Abuser. I think that Ubuntu is one hell of a distro, and GNU/Linux is one hell of an OS. Ubuntu, however, is not geared for the market where we squeeze out every CPU cycle we can. For that you have to do a _LOT_ of cleaning and replace your kernel with something a bit more fit for a server environment. Ubuntu is, and will always be, a distro that is easy to use first, even if that comes at the expense of some cruft. Each distro is becoming more bloated, but one great feature in Ibex (8.10) is the "System Cleaner" (for all you GNOME users, Applications --> System Tools --> System Cleaner) that checks for unused packages. This may not be a whole lot, but before bashing its speed or claiming it's fat, really take a hard look at the distro.
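    For the curious, roughly the same information such a cleaner reports can be approximated from a script; a minimal sketch using apt-get's simulate mode (nothing is actually removed):

```python
# Minimal sketch: list packages apt considers automatically removable,
# roughly the same information a GUI "system cleaner" reports.
# Uses `apt-get -s autoremove`, which only simulates the removal.
import subprocess

def removable_packages() -> list[str]:
    out = subprocess.run(["apt-get", "-s", "autoremove"],
                         capture_output=True, text=True, check=True).stdout
    # Simulated removals appear on lines of the form "Remv <package> [version]".
    return [line.split()[1] for line in out.splitlines() if line.startswith("Remv")]

if __name__ == "__main__":
    pkgs = removable_packages()
    print(f"{len(pkgs)} package(s) would be removed:")
    for pkg in pkgs:
        print(" ", pkg)
```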
  • Re:What hardware? (Score:2, Interesting)

    by Anonymous Coward on Monday October 27, 2008 @09:11AM (#25525783)
    CPU-intensive tasks like LAME shouldn't be affected that much by the OS. It's all CPU-driven. So it's obvious there are architectural differences that actually hinder regular application performance. I don't consider that an improvement in any way.
  • Re:Maybe (Score:1, Interesting)

    by Anonymous Coward on Monday October 27, 2008 @09:12AM (#25525789)
    Bullshit. It might resume from standby that fast, but I have a D420 here and it takes that long just to POST and hit the bootloader.
  • by trumplestone ( 820102 ) on Monday October 27, 2008 @09:15AM (#25525829)

    Mod parent up.

    Many of the benchmarks (such as the LAME, Java, and RAM bandwidth benchmarks) are CPU-bound and will run the majority of the time in userspace. As the kernel should only be invoked for timer ticks, interrupts, TLB misses, etc. (which would probably account for less than 1% of the CPU time), any change to the kernel should have minimal impact on the benchmarks (see the sketch below).

    The parent's comment that power settings have been misconfigured sounds spot-on.
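    That split is easy to check: a minimal sketch (using a stand-in loop, not the actual lame/Java benchmarks) that compares user CPU time against system (kernel) CPU time for a CPU-bound task.

```python
# Minimal sketch: time a CPU-bound loop and report how much of the CPU time
# was spent in userspace versus the kernel. For a workload like this the
# kernel share should be a small fraction of a percent.
import os
import time

def burn(n: int = 20_000_000) -> int:
    total = 0
    for i in range(n):
        total += i
    return total

if __name__ == "__main__":
    start = time.perf_counter()
    burn()
    wall = time.perf_counter() - start
    t = os.times()
    cpu = t.user + t.system
    kernel_share = 100 * t.system / cpu if cpu else 0.0
    print(f"wall {wall:.2f}s  user {t.user:.2f}s  sys {t.system:.2f}s  "
          f"kernel share {kernel_share:.2f}%")
```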

  • by Sockatume ( 732728 ) on Monday October 27, 2008 @09:16AM (#25525835)
    It's not quite that simple. Performance in Java and media encoding was almost halved in the two newest versus the older versions of the OS. It's hard to imagine why that would be the case unless "more features" in Heron and Ibex are using up half the CPU time (and based on the other benchmarks, they ain't). I'd suspect test methodology rather than some oddity of OS performance but it's still something that needs to be addressed.
  • by Anonymous Coward on Monday October 27, 2008 @09:16AM (#25525841)

    ... they are.

    Seriously.

    I can see several problems with the testing methodology as is:

        * The test suite itself: The Phoronix test suite runs on PHP. That in itself is a problem-- the slowdowns measured could most likely be *because* of differences in the distributed PHP runtimes. You can't just say "hey, version Y of distro X is slower than version Z! LOLZ" because, WTF. You're pretty much also running different versions of the *test suite* itself (since you have to consider the runtime as part of the test suite). Unless you remove that dependency, then sorry, you can't measure things reliably. Which brings me to my second point...

        * What exactly are they testing? The whole distro? The compiler (since most of each distro version is compiled with a different version of GCC)? The kernel? If they're testing the released kernel, then they should run static binaries that *test* the above, comparing kernel differences. If they're testing the compiler, then they should build the *same* code on each version and run said compiled code (which is pretty much what I gather they're doing). If they're testing the utilities and apps that came with the distro, then they should use shell scripts and other tools (which run on a single runtime, not depending on the runtime(s) that came with the distro version). Because if you don't, you have no fucking clue what you're testing.

    Honestly, I was unimpressed by the benchmarks. I happen to do performance benchmarking as part of my job, and I can tell you, you have to eliminate all the variables first -- isolate things to be able to say "X is slow". If you rely on a PHP runtime, use *exactly* the same PHP runtime for all your testing; otherwise, you'll get misleading results.
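    In that spirit, a minimal sketch of the kind of environment snapshot worth recording next to every result before comparing releases (tool names are the usual ones; substitute whatever the box actually has):

```python
# Minimal sketch: capture the variables that differ between distro releases
# (kernel, compiler, PHP runtime, CPU governor) so a slowdown can be pinned
# on something specific instead of "the distro".
import platform
import subprocess
from pathlib import Path

def first_line(cmd: list[str]) -> str:
    try:
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
    except OSError:
        return "not installed"
    return out.splitlines()[0] if out else "unknown"

if __name__ == "__main__":
    governor = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor")
    print("kernel:  ", platform.release())
    print("gcc:     ", first_line(["gcc", "--version"]))
    print("php:     ", first_line(["php", "--version"]))
    print("governor:", governor.read_text().strip() if governor.exists() else "unknown")
```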

  • by dintech ( 998802 ) on Monday October 27, 2008 @09:18AM (#25525885)
    A project gets late one day at a time. There's probably a similar proverb for this too.
  • I think that's part of the point, yes? The concern is that, for whatever reason - whether it be the new scheduler in the kernel or the bloatware that Ubuntu includes - performance is degrading.

    A good follow-up would be to figure out specifically where the slowdown comes from - is it due to kernel changes? Then the problem may not be Ubuntu...

  • by LWATCDR ( 28044 ) on Monday October 27, 2008 @09:48AM (#25526381) Homepage Journal

    I would like to see if this is an Ubuntu issue or Linux in general.
    What about Fedora, OpenSuse, and Debian? How do they compare to Ubuntu?

  • by Blice ( 1208832 ) <Lifes@Alrig.ht> on Monday October 27, 2008 @09:48AM (#25526385)
    Ubuntu isn't really suited for low-end machines anymore, IMO. It has to be at least 2 GHz with at least 512 MB of RAM... anything lower and it's going to be pretty slow.

    I'm sure some people here are going to be comparing Ubuntu to Vista in regard to getting slower with each release, but because it's Linux we have a few more choices.

    They make distros that are meant to be lightweight - anyone with a machine that's a little old, I urge you to try one of them. You'll be pleasantly surprised and maybe find a new favorite window manager/desktop environment in the process. Before you start talking about how Joe Sixpack doesn't want to try another distro or learn anything about Linux - I'm not talking to Joe Sixpack here, I'm talking to you, a Slashdot lurker.

    Try one of these distros and be amazed at how fast everything is:

    Crunchbang Linux [crunchbang.org]

    KMandla's GTK 1.5 Remix [wordpress.com]

    Or, if you want to be more adventurous, get Arch Linux [archlinux.org] and grab a window manager like Openbox or PekWM. If you go that route, take a look at this Openbox guide that'll show you a nice panel to use, a file navigator, and generally hold your hand through the process, here [wordpress.com]. But if you want your hand held even more, someone has already packaged a panel, file navigator, theme chooser, and the like together with Openbox - it's called LXDE. You can just grab that too; it should be in any repository.

    I do think it's unfortunate for Joe Sixpack that it's getting a little slower - but for them it's still faster than Vista and XP, right?

    You know what they should make? Right now they compile pretty much everything in the kernel as a module, then probe the hardware and load the right modules each time you boot... It would be cool to have a "speed up my computer" boot that takes the modules loaded for the hardware it finds and compiles a kernel with them built in, disables things it hasn't seen the computer use, etc., and still probes the hardware so it can fall back to another kernel if things have changed (see the sketch at the end of this comment).

    OR, how about loading modules only when you actually need them? And this goes for daemons, too. When you go to listen to something and it turns out there's no sound module loaded, how about loading the module then and starting the ALSA daemon? Have you ever looked at the daemon list for Ubuntu? It's huge. I know I don't need all of those - on the distro I'm running now I only start 3 daemons on boot, and everything works fine.

    I don't know. Maybe that's not the solution. But those guys are clever, I'm sure they can come up with something to get rid of the extra daemons and modules running without sacrificing usability. Anyone here have any good ideas..?
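    One rough starting point for the "build for the hardware you have" idea above is simply the set of modules currently loaded; a minimal sketch that just lists them (newer kernels ship a `make localmodconfig` target that trims a config from roughly this information):

```python
# Minimal sketch: list the kernel modules currently loaded, i.e. the set you
# would want built in (and everything else left out) for a trimmed kernel.
from pathlib import Path

def loaded_modules() -> list[str]:
    # Each line of /proc/modules is "name size refcount deps state address".
    lines = Path("/proc/modules").read_text().splitlines()
    return sorted(line.split()[0] for line in lines)

if __name__ == "__main__":
    modules = loaded_modules()
    print(f"{len(modules)} modules loaded:")
    for name in modules:
        print(" ", name)
```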
  • by dave420 ( 699308 ) on Monday October 27, 2008 @09:59AM (#25526555)
    Apart from XP Embedded, where you can choose which parts of the OS are installed. It's not designed for home use, but I've used it to make a 140 MB version of XP that is blazingly fast. It can play media, has internet access, games - everything I need.
  • by calc ( 1463 ) on Monday October 27, 2008 @10:17AM (#25526829)

    I believe that PIE (position-independent executables), along with some other security enhancements, was turned on in Ubuntu around the time the slowdowns showed up. This would definitely cause at least some slowdown on the 32-bit version, since there aren't enough registers to begin with. I'm not sure if it causes any noticeable slowdown on the 64-bit version, since the amd64 architecture has a lot more available registers - which would correlate with the person mentioning earlier that the 64-bit version seemed fast.
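    Whether a given binary was actually built as PIE is easy to check from the ELF header; a minimal sketch (the /bin/ls default is just an example path):

```python
# Minimal sketch: read the ELF e_type field to tell a fixed-address executable
# (ET_EXEC) from a position-independent one (ET_DYN, which covers both PIE
# executables and shared objects). Assumes little-endian ELF, as on x86/amd64.
import struct
import sys

def elf_type(path: str) -> str:
    with open(path, "rb") as f:
        header = f.read(18)
    if len(header) < 18 or header[:4] != b"\x7fELF":
        return "not an ELF file"
    (e_type,) = struct.unpack_from("<H", header, 16)  # e_type sits at offset 16
    return {2: "ET_EXEC (not PIE)",
            3: "ET_DYN (PIE or shared object)"}.get(e_type, f"e_type {e_type}")

if __name__ == "__main__":
    for path in sys.argv[1:] or ["/bin/ls"]:
        print(f"{path}: {elf_type(path)}")
```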

  • by cpghost ( 719344 ) on Monday October 27, 2008 @10:28AM (#25526993) Homepage

    From those benchmarks the one thing that stuck out was that GCC is getting slower.

    That's a well-known fact for those of us using source-based distributions/OSes. Compiling everything from source on Gentoo or the BSDs has taken a severe performance hit as GCC has gotten slower and slower (for no apparent reason), especially the C++ back end... But what's slowing Ubuntu down is probably the quality of the assembly code generated by GCC, as well as programs being written more and more sloppily by developers with very fast machines.

    Maybe the source of the problem is actually GCC getting slower. This forces developers to use faster machines to shorten compile runs, but those faster machines also hide the problem of software getting slower. Many devs simply don't care about slower machines anymore, because they don't see the problem on their own boxes. To them, the software is "fast enough."
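    One way to test the "GCC is getting slower" part directly is to time the same compile under whatever compiler versions a box has installed; a minimal sketch (the gcc-4.x names and test.c are placeholders, substitute your own):

```python
# Minimal sketch: time the same -O2 compile under several gcc versions.
# The compiler names and the source file are placeholders; use whatever the
# machine actually has installed and any reasonably large C file.
import subprocess
import time

SOURCE = "test.c"  # placeholder: any C file large enough to take measurable time

def time_compile(compiler: str) -> float | None:
    start = time.perf_counter()
    try:
        result = subprocess.run([compiler, "-O2", "-c", SOURCE, "-o", "/dev/null"],
                                capture_output=True)
    except OSError:
        return None  # compiler not installed
    if result.returncode != 0:
        return None  # compile failed
    return time.perf_counter() - start

if __name__ == "__main__":
    for cc in ("gcc-4.1", "gcc-4.2", "gcc-4.3"):
        elapsed = time_compile(cc)
        print(cc, "unavailable" if elapsed is None else f"{elapsed:.2f}s")
```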

  • I'd bet almost all of it is the effect of the scheduler. The benchmarks all show single tasks taking longer, but that's not taking into account multi-process performance. Is the desktop still responsive now even with a high-intensity background task running? I'd take that over the task finishing 5% faster any day.

  • by _Sprocket_ ( 42527 ) on Monday October 27, 2008 @02:37PM (#25531181)

    Indeed. I might have to go install something like XPLite or create my own installation media with nlite/vlite. It's really taxing firing up a GUI and unticking a few boxes.

    Yes, unticking boxes is easy. So's deleting a file. But what happens after the fact?

    Those utilities do look like a step in the right direction. Pity they're from a third party and a complete hack (albeit a very cool-looking one). What happens when Microsoft releases a service pack? What happens if you change your mind and want to install a component?

    I know with Ubuntu I'm removing a package using the very same tools provided by the base distro. When I update, I only update what's installed. And I know I can always install or reinstall anything missing at a later time.

  • by mlwmohawk ( 801821 ) on Monday October 27, 2008 @03:15PM (#25531713)

    Indeed, but supposedly the rest of the system was mostly idle; it shouldn't have slowed down a CPU-bound calculation by 50%, otherwise it's a scheduler regression.

    The supposition is not part of the facts. My point was there are some benchmark numbers that should have remained constant but showed variability.

    The benchmark is utterly useless until they can explain the variability in tests that should be constant.
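    That check is cheap to automate: repeat the supposedly constant test and report the spread. A minimal sketch with a stand-in workload (not any of the Phoronix tests):

```python
# Minimal sketch: run a supposedly constant benchmark several times and report
# mean, standard deviation, and coefficient of variation. A large CV means the
# harness or environment, not the system under test, needs explaining first.
import statistics
import time

def benchmark() -> float:
    start = time.perf_counter()
    sum(i * i for i in range(2_000_000))  # stand-in for the real workload
    return time.perf_counter() - start

if __name__ == "__main__":
    runs = [benchmark() for _ in range(10)]
    mean = statistics.mean(runs)
    stdev = statistics.stdev(runs)
    print(f"mean {mean:.3f}s  stdev {stdev:.3f}s  CV {100 * stdev / mean:.1f}%")
```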

"Stupidity, like virtue, is its own reward" -- William E. Davidsen

Working...