
Measuring The Benefits Of The Gentoo Approach (467 comments)

An anonymous reader writes "We're constantly hearing that the source-based nature of the Gentoo distro makes better use of your hardware, but no one seems to have really tested it. What kind of gains are involved over distros which use binary packaging? The article is here."
  • Misses the point (Score:5, Interesting)

    by keesh ( 202812 ) * on Saturday August 02, 2003 @04:33PM (#6596725) Homepage
    The source-based thing isn't even why most people use Gentoo. According to a recent poll on the gentoo-user mailing list, most people like it because of Portage (the package management system), with customisation/control coming in second (performance was third). Portage rocks. Even with the compiling, it takes less time to install some stuff (e.g. nmap) than it would take to locate the relevant .rpm. Of course, KDE's a different matter, but with distcc compiling doesn't take too long.

    Having said that, it looks like the guys doing the testing got their CFLAGS wrong. Gentoo's performance should never be worse than Mandrake's -- I reckon they forgot -fomit-frame-pointer. Also, the kernel compile is unfair, because gentoo-sources includes a whole load of patches that Mandrake and Debian don't.

    Finally, what's with measuring compile times? How is that a fair way of measuring performance? Hey, look, my distcc + ccache + lots of CPUs system with gcc3.2 can compile stuff faster than your single CPU gcc2 system... It's like comparing chalk and oranges.
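
    For anyone curious about the distcc bit: the setup is only a few lines. A rough sketch (the hostnames are invented, the -j value is just a starting point, and each box needs distcc installed first):

    # /etc/make.conf
    FEATURES="distcc"
    MAKEOPTS="-j5"    # roughly: number of CPUs across all the hosts, plus one

    # the boxes distcc may farm compiles out to
    export DISTCC_HOSTS="localhost fastbox otherbox"

    emerge nmap       # builds now get spread across the listed hosts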
    • by ecchi_0 ( 647240 ) <small20NO@SPAMearthlink.net> on Saturday August 02, 2003 @04:40PM (#6596759) Homepage Journal
      Finally, what's with measuring compile times? How is that a fair way of measuring performance? Hey, look, my distcc + ccache + lots of CPUs system with gcc3.2 can compile stuff faster than your single CPU gcc2 system... It's like comparing chalk and oranges

      Except in this case they all had the same hardware on each machine...

      • Define 'same' (Score:5, Insightful)

        by The Monster ( 227884 ) on Saturday August 02, 2003 @08:22PM (#6597667) Homepage
        Except in this case they all had the same hardware on each machine...
        That is frankly impossible. Even though the machines were supposed to be identical:
        Upon testing with hdparm, it was apparent that this machine was having troubles setting above udma2. Eventually this problem was traced to the HD cable, a salutary lesson in the variability of identical hardware setups.
        This is just the difference they caught. Who knows how many other subtle variations exist between nominally identical machines? An honest attempt to determine how fast 3 distros do the same thing would be to really use the same hardware: run the tests on one or more machines with one distro, then wipe the HDs, install the second and repeat the tests, then move on to the third.

        The only way to have the same hardware is to use the same machine for each distro. Period.

        • Re:Define 'same' (Score:5, Informative)

          by VPN3000 ( 561717 ) on Sunday August 03, 2003 @05:30AM (#6599114)
          Agreed. Back when I was a hardware tech in the early 90's, I recall building pools of identical machines for customer orders. For burn-in, I would loop benchmarks for 24 hours before shipping them out. There was typically 1-2% difference in identical systems.

          People tend to forget the complexity of a PC and the inevitable microscopic differences in each part, which lead to differences in resistance, heat generated, and performance.

    • Re:Misses the point (Score:5, Interesting)

      by arkanes ( 521690 ) <arkanes.gmail@com> on Saturday August 02, 2003 @04:42PM (#6596769) Homepage
      The key points to recognize from the article are: a) Gnumeric's performance sucks (8 minutes to open a file? I won't even think about the other version...) and b) the CPU is not a significant bottleneck in modern systems. We all knew that. It's one reason why so many people are happy with binary packages: the speed increase from saving some cycles generally isn't worth the extra time you lose compiling (as seen, in many cases it makes no difference at all).

      I would have liked to see some tests with things that are more CPU than IO bound, but, realistically, how often do you do those things in the normal case?

      If the main reason is to use portage for the convenience (same reason many people use debian), maybe they need to expand portage to support binary packages.

      • Re:Misses the point (Score:5, Informative)

        by countvlad ( 666933 ) on Saturday August 02, 2003 @04:50PM (#6596818)
        Portage can be used to install binary packages (precompiled .tbz2 packages built from ebuilds).

        From emerge --help:

        --usepkg (-k short option)
        Tell emerge to use binary packages (from $PKGDIR) if they are available, thus possibly avoiding some time-consuming compiles. This option is useful for CD installs; you can export PKGDIR=/mnt/cdrom/packages and then use this option to have emerge "pull" binary packages from the CD in order to satisfy dependencies.

        --usepkgonly (-K short option)
        Like --usepkg above, except this only allows the use of binary packages, and it will abort the emerge if the package is not available at the time of dependency calculation.

        You can also, of course, emerge the rpm tool and install RPM packages directly. I'm not sure about Debian .deb packages or Slackware .tgz packages.

        Gentoo is also accepting pre-orders for its upcoming 1.4 release. Information can be found here, at the Gentoo Store. [gentoo.org]
        They even have precompiled packages optimized for Athlon XPs - drool! [gentoo.org]
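
        For reference, a quick sketch of how the binary-package side of Portage fits together (nmap is just an example package):

        # build a binary package as a side effect of a normal install
        emerge --buildpkg nmap      # same as emerge -b; the tarball lands in $PKGDIR

        # later, on the same (or an identical) box, install from that package
        export PKGDIR=/mnt/cdrom/packages
        emerge --usepkg nmap        # -k: use a binary package if one exists
        emerge --usepkgonly nmap    # -K: refuse to fall back to compiling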

      • Re:Misses the point (Score:5, Informative)

        by Sancho ( 17056 ) on Saturday August 02, 2003 @05:33PM (#6596996) Homepage
        One of my favorite uses for Gentoo is optimizing for size rather than execution speed. As you say, the CPU is rarely the bottleneck these days, but loading files from the disk can be a factor in a program starting up. I've done the benchmarks, and some rather large programs see significantly reduced load times when optimized with -Os.
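
        If anyone wants to try this, a minimal sketch (demo.c stands in for any source file you have lying around, and the -march value obviously depends on your CPU):

        # /etc/make.conf -- optimize for size rather than speed
        CFLAGS="-Os -march=pentium3 -pipe"
        CXXFLAGS="${CFLAGS}"

        # quick way to see what -Os buys you on a single file
        gcc -O2 -o demo-O2 demo.c
        gcc -Os -o demo-Os demo.c
        size demo-O2 demo-Os        # compare the text segment sizes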
        • Re:Misses the point (Score:4, Interesting)

          by realdpk ( 116490 ) on Saturday August 02, 2003 @07:17PM (#6597431) Homepage Journal
          Try -static - on FreeBSD it can significantly improve performance for short-lived binaries (such as ones designed to run 100s of times a second). I dunno about Linux (although I've noticed that Linux seems to prefer dynamic binaries for everything.)
          • So is linux (Score:3, Interesting)

            by r6144 ( 544027 )
            Especially when you link against a lot of libraries, dynamic linking can easily add several milliseconds. Prelinking helps a bit, but static linking is the fastest. On my machine, running the simplest possible program takes 5ms when dynamically linked and 3ms when statically linked; "user time" is 1.1ms vs. 0.4ms.
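
            This is easy to reproduce; a rough sketch:

            # measure dynamic vs. static start-up cost on a trivial program
            echo 'int main(void) { return 0; }' > hello.c
            gcc -O2 -o hello-dyn hello.c
            gcc -O2 -static -o hello-static hello.c

            # run each a few times; the difference is mostly loader overhead
            time ./hello-dyn
            time ./hello-static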
            • Re:So is linux (Score:3, Informative)

              by pantherace ( 165052 )
              Prelink can help with this. Google for it and you should find out about it, or look on gentoo.org under the docs section. Prelink support is built into Portage :)
      • Re:Misses the point (Score:5, Interesting)

        by dougmc ( 70836 ) <dougmc+slashdot@frenzied.us> on Saturday August 02, 2003 @05:39PM (#6597028) Homepage
        the CPU is not a significant bottleneck in modern systems.
        Are you on crack? Even today, CPU speed is a significant bottleneck for many operations.

        Now, CPU speed has increased by a larger factor than memory speed and disk speed over the last few years, but it's still quite a large bottleneck in many operations, including the ones tested in this (admittedly lacking) test.

        Even the speed of a kernel compile, which is often given as a classic `disk I/O bound' process, is extremely CPU bound. How do I know? Running `top' on an idle box shows 0% CPU utilization. Once I start the compile, it goes to well over 90% CPU utilization and stays there until the compilation is done. (Just to be complete, I'm testing this on a dual P3 700 box with SCSI disks, doing a `make -j2'. But even my 2GHz Athlon computer with IDE disks behaves similarly.)

        Perhaps I'll do some tests with adjusting the CPU multiplier on a given box, see how that affects compilation times. That would be an excellent test ...
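
        A rough way to check this on any box, without eyeballing top, is to compare wall-clock time against CPU time for the build (this assumes a kernel tree is already configured under /usr/src/linux):

        cd /usr/src/linux
        make clean
        time make -j2 bzImage
        # if (user + sys) is close to real, the build was CPU bound;
        # a large gap means it spent a lot of time waiting on the disk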

        • Re:Misses the point (Score:4, Informative)

          by gibber ( 27674 ) on Saturday August 02, 2003 @06:21PM (#6597204) Homepage
          Changing the CPU multiplier will not give you what you are looking for, as it will likely change your FSB speed and L1/L2 cache access rates. The most common bottlenecks on systems are hierarchical IO bandwidth related. For example, having scads of RAM for buffer/cache will help you with disk IO woes.

          Compiling binaries with optimization for a particular processor helps with i-cache and d-cache utilization. The number of instructions fetched (and the order in which they are fetched) makes a big difference in performance.

          Boosting CPU cache size (up to practical cache limits), increasing FSB speed and avoiding disk IO are much more significant than CPU M/GHz.

          Compilation, especially optimized (-Ox) compilation, is _VERY_ CPU intense. If you have enough RAM to avoid the disk thrashing caused by writing numerous intermediate files, you will peg your CPU.

          Most user activities (aside from games) on computers are not bottlenecked by the CPU but by various hierarchical IO constraints, and hence the previous poster was correct that the CPU is not a significant bottleneck on modern systems.
          • by Deadplant ( 212273 )
            CPUs still aren't fast enough for me. Maybe I'm not representative of 'most users'.

            At work I do video processing and at home I play games and encode DVDs to MPEG-4. Even 2.4GHz CPUs take hours to encode entire movies... I can get a little better than realtime encoding to MPEG-4, but when you add two-pass, and the fact that the videos are so damn long... I wish I could get a terahertz CPU...

            you try running a 2 pass encode on a 6 hour 720x480 DV video file and then tell me your CPU is fast enough.

            it always make
            • Re:Misses the point (Score:3, Informative)

              by Arker ( 91948 )

              Seriously though, doubling the access speed of your RAM is likely to do more good on that sort of task than doubling the CPU speed.

                Seriously though, it's not. MPEG-4 encoding is SERIOUSLY CPU dependent, not bandwidth dependent. Just look at the benchmarks of the Opteron vs. the Pentium 4 on a 333MHz memory bus and you will see what I mean (in MPEG-4 encoding). I'm not saying the Opteron is inferior for MPEG-4, but the fact is that MPEG-4 processing (at least on Windows) is highly SSE2-optimized for the P4.

                (hint: the P4 beats the Opteron, even though the Opteron has higher real-world memory bandwidth, lower memory latency, and a larger cache)

                Yea, memory bandwidth
      • Re:Misses the point (Score:5, Informative)

        by nagora ( 177841 ) on Saturday August 02, 2003 @05:56PM (#6597089)
        but, realistically, how often do you do those things in the normal case?

        You obviously don't use GIMP or analyse OS mapping data much. I have the RAM to get the data into memory, so IO is not an issue for much of my work.

        Having said that, portage is the main reason I've converted all my machines to Gentoo; it's just not a serious option to go back to RPM based systems after using it for a week or so.

        TWW

        • by arkanes ( 521690 ) <arkanes.gmail@com> on Saturday August 02, 2003 @06:23PM (#6597214) Homepage
          You're right, I don't, and neither does anyone else. Even in your case, memory IO is likely to be as much of a bottleneck as the CPU, if not more. One of the reasons the new Mac walks over PCs in Photoshop benchmarks is massive memory bandwidth.
          • "One of the reasons the new Mac walks over PCs in Photoshop benchmarks is massive memory bandwidth."

            Actually,
            No.

            Canterwood systems have a Pentium 4 with the 800MHz FSB and dual-channel DDR400. Dual DDR400 happens to have a total theoretical bandwidth of 6.4 gigabytes a second, exactly the same as the P4's front-side bus. Plus, the busses are running synchronously, so latency is lower.

            The Apple G5 has a 1GHz FSB and dual-channel DDR400. Hmmm... that extra 200MHz of FSB doesn't really do much, does it? It's stil
      • Re:Misses the point (Score:3, Informative)

        by Arker ( 91948 )

        Actually I think the key sentence of the article was this:

        The Gentoo setup by Bill Kenworthy was compiled using the "stock" kernel source and the "-march=pentium3 -pipe -O3" compile flags.

        Doh! No wonder it sucked.

        -O3 turns on things like inlining that are only worthwhile in certain circumstances, and are often counterproductive. So the results aren't surprising in the least.

    • Re:Misses the point (Score:3, Interesting)

      by tweek ( 18111 )
      Well, in all fairness, ccache only does any good after the first compile. The distcc option, however, does make a difference.

      I will agree that the biggest thing for me with Gentoo is actually being able to strip stuff out of an install with a simple USE flag. I actually prefer to build things myself, but having a package management system that takes care of dependencies for that is a godsend.
    • by aboyce ( 444334 )
      First, I'd love to see a distro be faster than "up2date package_name" or even "apt-get package_name".

      Next, they said right in the article that they used an identical copy of the kernel source on each machine, so patches shouldn't make a difference.

      Finally, it's not that I don't agree with you; their tests did have flaws. It just seems that some of your facts are wrong in attacking them. There are some points that need to be examined, even if some of their conclusions are premature.
      • Re:Misses the point (Score:5, Interesting)

        by cperciva ( 102828 ) on Saturday August 02, 2003 @05:20PM (#6596933) Homepage
        first, I'd love to see a distro be faster than "up2date package_name" or even "aptget package_name".

        FreeBSD Update [daemonology.net]. OK, it only upgrades the base FreeBSD install, starting at binary releases, along the security branches; but it uses binary patches [daemonology.net] to dramatically cut down on bandwidth usage (and therefore the time used). A typical install of FreeBSD 4.7-RELEASE (released in October 2002) has 97 files totalling 36MB which need to be updated for security reasons; FreeBSD Update does this while using under 1.6MB of bandwidth.
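
        For the curious, binary-patching a file with bsdiff (the same author's tool) looks roughly like this; the file names below are just placeholders:

        # build a delta between the old and new versions of a file
        bsdiff libc.so.old libc.so.new libc.so.patch

        # on the client, reconstruct the new file from the old one plus the patch
        bspatch libc.so.old libc.so.new libc.so.patch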
        • by DrXym ( 126579 )
          And it's the lack of binary patches that makes it such a pain in the arse to keep Linux up to date. Really, every time someone finds an exploit in glibc or the kernel which involves a few-line patch, everyone is expected to download a 20MB package!

          That's fine if you happen to be on broadband, but it sucks if you're not. In an ideal world, everyone would be motivated to spend the hours to grab the update, but how many bother, especially with the workings of Linux becoming more opaque and the users less knowledgeable? The conseq

    • by scotch ( 102596 ) on Saturday August 02, 2003 @04:47PM (#6596805) Homepage
      Why the hell would you introduce a distributed computing tool into a discussion about evaluating the performance of a single machine/OS? Pure obfuscation. Typical gentoo-missing-the-point behavior. Either compiling the kernel is a fair measure of the speed of the system or it isn't; distcc doesn't play into it. Here's a fun analogy. You want to see which is faster, a Porsche 911 or a Chevy Corvette. So some thoughtful guys put together a series of tests, one of which is a 1000 mile race. Then along comes user keesh (202812) who says "Bad test, I wouldn't drive 1000 miles, I would take the train."

      A hearty helping of wtf is in order. Some of your other points are ok, though ;).

    • Re:Misses the point (Score:5, Interesting)

      by antiMStroll ( 664213 ) on Saturday August 02, 2003 @05:08PM (#6596879)
      What's missing in the article is the second half of Gentoo's compile options, the /etc/make.conf USE variables. CFLAGS determines CPU architecture, USE adds or removes the options for extra software support. In stock form Gentoo compiles binaries with a huge number of add-ons, including support for KDE, Gnome, framebuffer, etc. From make.conf: "USE options are inherited from /etc/make.profile/make.defaults." The list from a current Gentoo 1.2 looks like:

      USE="x86 oss 3dnow apm arts avi berkdb crypt cups encode gdbm gif gpm gtk imlib java jpeg kde libg++ libwww mikmod mmx motif mpeg ncurses nls oggvorbis opengl pam pdflib png python qt quicktime readline sdl slang spell ssl svga tcpd truetype X xml2 xmms xv"

      Without knowing what support Debian or Mandrake compiled their binaries with, this is still an apples/oranges comparison. My notebook isn't configured to compile with KDE or Gnome extensions because the hardware is too old and I use Fluxbox. Mandrake and Debian may still turn out faster (the Gentoo Mozilla ebuild was legendary for being slow), but that's not quite proven yet.
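
      For comparison, trimming that default list down is just a matter of editing make.conf and previewing the result (the flag list here is only an example):

      # /etc/make.conf -- drop the desktop add-ons on a Fluxbox-only box
      USE="-kde -gnome -arts -qt X truetype"

      # preview exactly which flags a package would be built with
      emerge --pretend --verbose mozilla    # same as emerge -pv mozilla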

    • Of course, if they do a build, kill it halfway through, then build it again without a make clean, it will go even quicker. Not saying they did, but stats can always be taken two ways.

      Rus
    • by 0x0d0a ( 568518 ) on Saturday August 02, 2003 @05:17PM (#6596922) Journal
      Having said that, it looks like the guys doing the testing got their CFLAGS wrong. Gentoo's performance should never be worse than Mandrake -- I reckon they forgot omit-frame-pointer.

      -fomit-frame-pointer is not a regular optimization. Working without stack traces to hand to a developer if you have a problem isn't really a reasonable tradeoff unless you're doing something like an embedded system, where you couldn't get at the stack trace anyway.

      This is *exactly* what the real tech-heads have been saying for years, what my tests confirm, etc. A minor change in a couple of compile flags above -O2 almost *always* makes very little difference. Compiling your own packages really just plain doesn't matter. Maybe if gcc really was incredibly tuned to each processor, but certainly not with the current compiler set.

      Also, the kernel compile is unfair, because gentoo-sources includes a whole load of patches that Mandrake and Debian don't.

      And perhaps the inverse is true, too?

      Look, the point is, Gentoo is not significantly faster than any other general distro out there. If you use it, it's because you like their tools or packaging scheme. You aren't cleverly squeezing out more performance.

      Oh, and last of all, I've seen compiler folks saying that it's not that unusual for -O3 to perform worse than -O2. From the cache performance analysis bit of my university course, cache hits and misses really *are* the dominant factor in almost all cases. Loop unrolling and function inlining can be a serious loss.

      Finally, compiling for different architectures generally makes very little difference on any platform other than compiling for i586 on a Pentium. The Pentium runs 386 code rather slowly. The PII and above will happily deal with 386 code.
      • And pentium 4 (Score:3, Insightful)

        by r6144 ( 544027 )
        I have heard that P4 runs code compiled for i386/P2/P3 rather slowly compared to code compiled for it. For example, P4 runs traditional (stack-based) FPU instructions rather slowly, while it prefers SSE/SSE2-based instructions. Therefore on P4 systems compiling with the right options may well give a significant speed boost.

        However, AFAIK the PPro/P2/P3/Athlon run this "legacy" code quite well, so relatively little gain can come from tweaking compiler options.

    • Re:Misses the point (Score:5, Interesting)

      by buchanmilne ( 258619 ) on Saturday August 02, 2003 @05:21PM (#6596942) Homepage
      Portage rocks.

      If you have a fast processor. My Duron 800 can keep itself busy for a weekend compiling OpenOffice.org ...

      Even with the compiling, it takes less time to install some stuff (eg nmap) than it would take to locate the relevant .rpm.

      This is on my Thinkpad 600X, which is a 500 PIII/192MB, with a pretty slow disk:

      [root@bgmilne-thinkpad mnt]# rpm -q nmap
      package nmap is not installed
      [root@bgmilne-thinkpad mnt]# time urpmi nmap
      installing /var/cache/urpmi/rpms/nmap-3.00-2mdk.i586.rpm

      Preparing...
      #some hashes replaced to fool the lameness filter#
      1:nmap
      #some hashes replaced to fool the lameness filter#
      5.34user 1.36system 0:26.76elapsed 25%CPU (0avgtext+0avgdata 0maxresident)k
      0inputs+0outputs (1712major+8394minor)pagefaults 0swaps


      You would need quite a system to beat 26s I think.

      Also, the kernel compile is unfair, because gentoo-sources includes a whole load of patches that Mandrake and Debian don't.

      From the article:

      "The same 2.4.21 source was copied to all machines and compiled using the same options. However, it should be noted that the Debian system used gcc 3.3.1 whilst the Mandrake and Gentoo installations used gcc 3.3.2 ."

      I don't see the point of not using the default compiler on each system; and if you aren't going to, you should at least use the same compiler across all of them.

      But, otherwise, the comparison looks pretty fair.
    • by Vellmont ( 569020 ) on Saturday August 02, 2003 @05:27PM (#6596963) Homepage
      I think you're probably right about the reasons for using Gentoo (though I admit I've never actually used Gentoo).

      However, I think that a kernel compile _is_ a fair measure of overall system performance. It involves lots of disk, memory, and processor access, so it's a decent indicator of across the board performance.

      As far as kernel compile versions go, from the article:
      The same 2.4.21 source was copied to all machines and compiled using the same options. However, it should be noted that the Debian system used gcc 3.3.1 whilst the Mandrake and Gentoo installations used gcc 3.3.2 .

      So the kernel source was the same, not Gentoo source.

      You say the performance problems are because they got the CFLAGS wrong. If this is the case it only seems to underscore how easy it is to screw up optimizations with Gentoo. It's great for people that know all the proper optimizations for a particular piece of hardware, but I think the majority of people just don't know this offhand.

      In any case I find it very interesting the big differences you can see in performance between distributions on the same hardware (and I'm assuming similar kernel versions).
    • Re:Misses the point (Score:3, Interesting)

      by be-fan ( 61476 )
      I second that. I don't use Gentoo because I think I can get a minuscule 0.5% extra performance by compiling myself, but because of Portage and the Gentoo community. Portage is awesome, and the source-based nature means that ebuilds come out extremely quickly and are less subject to distro-specific "customizations" (read: quirks) than binary packages. All the ebuilds on BreakMyGentoo.net, as well as the ebuilds posted to the bug tracker, are a phenomenal example of how the power of portage allows a relatively s
    • it looks like the guys doing the testing got their CFLAGS wrong. Gentoo's performance should never be worse than Mandrake

      Makes you wonder how many Gentoo users actually get their compiler flags right, doesn't it?

  • I have to wonder what the reviewers consider to be a "default" install. For example, did the reviewers remember to build in support for their IDE controller (if that's what they use)? If so, is DMA enabled for the Gentoo box, and is it for the others? What kernel did they use? Did they use gentoo-sources or did they use another?

    Maybe to the uninitiated this seems informative, but to me it doesn't.
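
    (For reference, checking the DMA situation is a one-liner per drive; /dev/hda here is just the usual first IDE disk:)

    hdparm -d -i /dev/hda    # is DMA on, and what modes does the drive claim?
    hdparm -tT /dev/hda      # crude cached-read and buffered-read throughput test
    hdparm -d1 /dev/hda      # turn DMA on, assuming the chipset driver supports it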

    • by Enahs ( 1606 )
      Apparently DMA is enabled on all the machines. Missed that on the first read. Also see my other comment on the article: apparently they ran into differences in the machines, despite the fact that they were supposed to be identical.



      Despite missing an obvious point, I still stand by my original sentiment: this article isn't very informative at all.

  • Slow? (Score:5, Interesting)

    by peripatetic_bum ( 211859 ) on Saturday August 02, 2003 @04:38PM (#6596752) Homepage Journal
    Can this be correct? Debian turns out to be the fastest?

    Anyway, I like the idea of Gentoo, and I saw a lot of Debian users head over to Gentoo because the idea of controlling everything, including the build, was nice. However, I also saw the Gentoo idea pretty much die, since a lot of these users are power desktop users and not everyone could wait 3 days for X to build.

    What I like about Debian's packages is that if you do make a mistake you can always pretty much correct the package by fixing your source list, or going to packages.debian.org, getting the older working package and installing it manually with a simple dpkg -i old_package.deb.

    In Gentoo, you had to rebuild the whole thing, which with X could take forever. And so what I saw Gentoo suddenly doing was providing a lot of pre-compiled binaries, because they saw the problem with building taking forever. That kind of killed the whole idea of building for yourself, in which case, if you are going to stick with prebuilt packages, why not have them maintained by some of the best developers around (i.e. Debian)?

    The other thing I noticed is that a lot of developers of software actually use Debian. I've noticed many a time that some cool software was being made and the developers would provide source and a Debian package and nothing else. I.e. Debian appears to be the preferred developer's distro. I would like to hear discussion on this.

    Thanks all
    • What are you people running on? My compiles take about 3 days for X, sure, but I am using a 500MHz system. WTF is up... ah well, Debian rocks.

    • by Enahs ( 1606 ) on Saturday August 02, 2003 @04:48PM (#6596811) Journal
      For me, Gentoo is a great choice partly because I like the control and partly because I use crufty hardware that doesn't fall into any predefined (read: Intel) category.

      Try using binaries compiled for an i686 on a Via C3-1G, for example.

      Yes, if your entire reason for using Gentoo is to have control over how apps are built, starting from stage3 pretty much defeats the purpose, and yes, if you don't know what you're doing, then rebuilding X can be a real drag. However, I have to say that I appreciate the fact that Gentoo manages to avoid a lot of legal issues by having the user build the packages her/himself. Honestly, I'd love to be Ogg Vorbis-only for music on my computer, but when I own a portable MP3 player, an MP3-capable DVD player, an in-dash MP3 player, and use OS X at work where QuickTime Ogg Vorbis support is dodgy at best, I want lame. And I want lame support built into kdelibs or whatever lame support needs to be built into so that I can drag-and-drop 192kbps ABR MP3s from an audiocd:// ioslave window to my mp3 folder. ;-D

      My own experience has been that Gentoo outperforms Debian on my hardware, but only after I've done some tweaking on Gentoo. YMMV.

      • My own experience has been that Gentoo outperforms Debian on my hardware, but only after I've done some tweaking on Gentoo. YMMV.

        How true. I wish we had a 3DMark-type program for Linux, where we could test X performance in 2D/3D, audio, HD, CPU, memory, and even latency for each area. Maybe it could even report performance and features for OpenGL, to see what the drivers do and don't support.

        A good benchmark program could be used to see if newer kernels are really faster (ie 2.6) or even those nice pre-emptive kernel
    • Re:Slow? (Score:3, Insightful)

      by antiMStroll ( 664213 )
      ....these users are power desktop users and not everyone could wait 3 days for X to build.

      Desktop power users on 386's are a rare breed nowadays. I don't know how long it took X to build on my P2 366 w/ 192 meg RAM because I started it before going to bed and it was done in the morning. Maybe these power users should consider hardware before choosing distros.

      And so what I saw Gentoo suddenly doing was having a lot of pre-compiled binaries ....

      Lots? OpenOffice has a precompiled option, Opera does because

      • Yeah...my PII/266 definitely takes less than 24 hrs to build X.

        That being said, I'm dubious that blowing the time on compiling your ftp server with all optimizations every time you download a new version really is a worthwhile use of time and effort.

        Maybe xmame. Maybe glibc. Maybe the kernel. That's about it. Definitely not 99% of the software on the system.

        I don't really think any one distro is much better than the others for development. I happen to use Red Hat, which I do plenty of development on
    • Re:Slow? (Score:3, Interesting)

      by ctr2sprt ( 574731 )
      Yeah, I was a Debian user and tried Gentoo when it first came out. I'd used FreeBSD, so I knew and loved the ports library, so I was excited about Gentoo. Unfortunately, the initial releases seemed broken in several severe ways. Half the software in Portage wouldn't compile at all, and I didn't really feel like digging into the source to find out why. I'm not some idiot newbie, I'm a computer programmer and have been using Unix for nearly 10 years. I just wanted the install to work, and it wouldn't, no
  • FreeBSD's ports (Score:4, Interesting)

    by Anonymous Coward on Saturday August 02, 2003 @04:39PM (#6596758)
    I don't use Gentoo (When I use Linux, I use Slackware), but I do use FreeBSD and its ports collection.

    Purported performance gains are one thing source packages give you (although I don't enable super optimizations because you never know when gcc bugs with -march=pentium4 -O3 or whatever will bite you).

    There are two major reasons I like installing from source, though. One is that you can customize the build to your system; lots of software packages have various compile time options, and when I have the source I can choose exactly how it's going to be built.

    Another thing is that when you install from source, you can hack the program to your heart's content. On my desktop box there are around 15 programs that I have to modify to get to act like I want (from simple things like getting cdparanoia to bomb immediately when it detects a scratch to halfway complex things like rewriting parts of klipper and XScreenSaver, which now picks a random screen saver on MMB and lets me scroll through all screensavers with the wheel =).

    I don't modify stuff on my servers, but I still get to choose exactly how things are built, which I very much enjoy.
  • by BrookHarty ( 9119 ) on Saturday August 02, 2003 @04:41PM (#6596768) Journal
    While the posts are starting and people are saying Mandrake would never be faster, let's go back to earlier this year...

    Remember the KDE optimizations that were not included in the Gentoo source release? Everyone was wondering why KDE was faster on Mandrake. There was talk for over 2 months before people realized it was an option Mandrake was compiling with.

    For me, Gentoo's biggest feature was the kernel compile options, adding patches for pre-emptive multitasking and improved responsiveness. I noticed the improvements on all my machines, but the compile times were a drawback. And sometimes the applications wouldn't compile.

    Mandrake, while my favorite choice, doesn't include the best pre-emptive kernels, which do make a noticeable difference. So after installing Mandrake, putting a newer kernel on the system normally takes care of that.

    I'm just waiting till beta2 of Mandrake Cooker 9.2 with the 2.6 kernels; that should put Gentoo and Mandrake on par for speed.
    • For me, Gentoo's biggest feature was the kernel compile options, adding patches for pre-emptive multitasking and improved responsiveness.

      Really? Hmm.. I run Gentoo and I use vanilla kernels because I find they perform better. I'm on an SMP box though, so that might have something to do with it. But I tried a couple Gentoo kernels and I had *seriously* bad performance problems. Whenever I was compiling the mouse cursor would get all jittery, as would the scrolling song title in XMMS - even if I niced the

    • It's not part of the main distro, but there is a kernel-multimedia-2.4.21.0.16mdk-1-1mdk.i586.rpm in Mandrake contribs. Check it out if you want a more responsive kernel.
      • It's not part of the main distro, but there is a kernel-multimedia-2.4.21.0.16mdk-1-1mdk.i586.rpm in Mandrake contribs. Check it out if you want a more responsive kernel.

        Wow, thank you, I didn't know about that kernel; it looks like it has the patches I was talking about. Did a quick lookup [pbone.net] on pbone and found the info on it.

        This kernel includes patches useful for multimedia purposes like: preemption, low-latency and the ability for processes to transfer their capabilities. The preemption patches allow a task to

    • Mandrake, while my favorite choice, doesn't include the best pre-emptive kernels.

      You mean like this one (from contrib for 9.1)?


      Name : kernel-multimedia-2.4.21.0.16mdk
      Group : System/Kernel and hardware
      Source RPM : kernel-multimedia-2.4.21.0.16mdk-1-1mdk.src.rpm
      License : GPL
      Packager : Danny Tholen
      URL : http://www.kernel.org/
      Summary : A preemptible Linux kernel, which reduces the latency of the kernel.
      Description :
      This kernel includes patches useful for multimedia purposes lik
  • by shoppa ( 464619 ) on Saturday August 02, 2003 @04:42PM (#6596770)
    I don't use Gentoo, but I do use Linux From Scratch [linuxfromscratch.org], and I do see substantial improvements with command-line type activities: a kernel build on an Athlon is about 20 percent faster when I do it with a custom LFS build vs. a stock RedHat installation.

    Most of the comparisons in the article were for X-related graphics applications, and while they were comparing the versions of the applications, they were not comparing the libraries underneath them (glibc, X11, and probably the window manager too come into play) and they should've compared versions there too. It becomes complicated because for a typical X11-based app there are probably several dozen libraries involved (in addition to all the configure-time options for them...)

  • Why use it? (Score:3, Interesting)

    by Realistic_Dragon ( 655151 ) on Saturday August 02, 2003 @04:42PM (#6596771) Homepage
    I picked Gentoo because it was Free and free, and because emerge has, IME, one big advantage over APT: one well-updated, consistent, all-encompassing repository.

    OTOH my laptop runs RedHat, because I needed at least one machine running it to stay current with where they dump configs (it's the distro they use at work). Coupled with Apt-RPM it's competent enough, and I have no major problems with the performance.

    So yeah, I have to agree with the article - you may like it one way, others may want to do their own thing. No matter what you choose, you (probably) have binary compatibility, so who gives a sh!t about the holy wars, just as long as you aren't running Windows :D
  • I have never tried Gentoo but I ran FreeBSD for a while. With FreeBSD you have source for the whole system as well as for any "ports" you install. There are procedures for doing a "make world" that recompiles all of it. You can get the source changes to go to the next version and, with a bit of chicken-and-egg work around compilers if those have changed, you can compile the upgrade yourself.

    I ended up bagging it because there is a fair amount of stuff for Linux that is missing in BSDs (or I wasn't willing to
  • by lavalyn ( 649886 ) on Saturday August 02, 2003 @04:44PM (#6596781) Homepage Journal
    Besides, before doing any comparisons of Debian vs. Gentoo they should have compared Gentoo vs. Gentoo with different optimizations, like using -O2, -Os, or -mfpmath=sse. Comparing video drivers. Trying different filesystem types. And a whole gaggle of other configurables at compile time.

    You'd be yelling bloody murder if Microsoft sponsored a study without doing this sort of research before pitting Windows vs. Linux.
    • by Arker ( 91948 ) on Saturday August 02, 2003 @08:12PM (#6597632) Homepage

      The odd thing is that, from what I've read, a lot of Gentoo folk seem to be trying to compile everything with -O3. This is, frankly, bloody stupid. This turns on a lot of 'optimisations' that are only useful on a few programs and actually harmful for most, and is probably one of the reasons it looked bad in this test.

      -O1 is the safe level of optimisation. Even -O2 runs the risk of doing more harm than good, although it's a fairly low risk. -O3 runs a very high risk of doing more harm than good. In many cases -Os is probably the best option anyway, because I/O is more commonly the bottleneck than CPU capability.

      And the processor optimisations can also be risky. Every processor out there is designed to run commercial i386 code as fast as possible anyway.

      The big wins in compiling yourself are control of configure options, not compiler optimisations. I think source-based distributions are a great idea, but I have to wonder if most people using them right now are getting the benefits.
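
      A crude way to settle the -O2 vs. -O3 question for your own workload is simply to build it both ways and time it. A self-contained sketch (the toy loop below is only a stand-in for whatever program you actually care about, and -march should match your CPU):

      # generate a tiny CPU-bound test program
      echo 'int main(void){volatile double s=0;long i;for(i=1;i<200000000;i++)s+=1.0/i;return (int)s;}' > bench.c

      gcc -O2 -march=pentium3 -o bench-O2 bench.c
      gcc -O3 -march=pentium3 -o bench-O3 bench.c

      # run each a few times and compare; keep whichever actually wins
      time ./bench-O2
      time ./bench-O3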

  • by JanneM ( 7445 ) on Saturday August 02, 2003 @04:45PM (#6596789) Homepage
    There are a lot of issues one can bring up with the test - not identical versions of various software; different X drivers, one distro will have patches missing in the others and so on. Clearly, that greatly influences the results.

    And that is a good point to take home. Optimizing compiles is _not_ the panacea for speed and responsiveness that some source-based distro users - a minority, I believe - tend to make it out to be. There are so many other factors intimately involved that any benefits are generally lost in the noise.

    For some specific components, it can be a good idea - but for those, most distros tend to ship several optimized versions that the installer chooses between at installation time.

    Another domain that benefits is specialized, compute-intensive applications; things like simulators or other technical stuff. But then, those apps are generally tweaked and compiled by their users no matter what distro is used anyway.
  • by grotgrot ( 451123 ) on Saturday August 02, 2003 @04:46PM (#6596800)
    I tried Gentoo for a while and eventually gave up. The problem is that you still have dependency hell. Most packages look for stuff at compile time, and many have optional components. For example a video player may not include support for QuickTime unless the libraries are already on there at compile time.

    So the fun starts when you install stuff: packages don't include support for other components because those weren't there at compile time, you then discover the missing support, have to install the missing libraries, and then recompile every affected package.

    This is an especially big issue with multi-media stuff, and gets many layers deep as some libraries have optional components depending on other optional components.

    About the only way to guarantee a fully up-to-date system is to keep doing complete recompiles of the entire system until there are no changes.
    • by GweeDo ( 127172 ) on Saturday August 02, 2003 @05:09PM (#6596886) Homepage
      Looks like someone failed to set their USE flags properly. If you have them set right, you will get support for all you want. Or if you do "emerge -vp packagename" before doing an actual emerge, you can see which optional flags aren't getting used. People that use Gentoo but don't read the portage/emerge/USE documents are asking for this. Gentoo isn't for all, it is only for the willing.

      Please go here [gentoo.org] and read as much as possible before installing Gentoo so you don't do something stupid.
  • I dual-booted Debian and Gentoo thinking I would migrate completely to Gentoo for desktop use and Debian for servers. Galeon on Debian was way faster. In the end, I got fed up with compiling and re-compiling X and stuff trying various gcc switches. Debian is fast enough to make sitting about waiting for stuff to compile a waste of time. And apt-get is every bit as good as emerge.
  • One more thing (Score:5, Interesting)

    by Enahs ( 1606 ) on Saturday August 02, 2003 @04:55PM (#6596833) Journal
    From the article:

    Upon testing with hdparm, it was apparent that this machine was having troubles setting above udma2. Eventually this problem was traced to the HD cable, a salutary lesson in the variability of identical hardware setups.

    Very telling pair of sentences.

  • Unfair test (Score:5, Insightful)

    by periscope ( 20296 ) on Saturday August 02, 2003 @04:57PM (#6596837) Homepage
    There seems to be little attention given to the fundamental unfairness of the test presented.

    The distributions were running with different software versions initially and although this was corrected there seems to have been little consideration given to the minor tweaks given to each different installation used. Which services were running on each system? Were the kernel settings identical in use? Were the machines experiencing differences in performance due to the X setup causing X to add different loads?

    etc.

    Fundamentally this test was probably not complete enough to suggest anything in particular. Perhaps it would have been better to boot a single machine three times and perform the sequence of events exactly the same each time as this would have also ruled out some other potential factors.

    Jon.
  • I like:
    1. The very large collection of packages. For example, I don't believe you can apt-get install vmware.
    2. The very sensible defaults. After emerge wine, I could run Lotus Notes 5 with no changes at all. I tried to set that up on a RH box with wine compiled from source, and it chucked up loads of errors and didn't work.

    That said, it does take a long time to set up from stage 1. You're probably looking at about 3-4 days for the base system, X, KDE, Mozilla, and OpenOffice. I'll use it for my personal

    • VMware on Debian is best accomplished by using alien to convert VMware's RPM to a .deb and installing that via dpkg.

      As for sane defaults -- Debian tends strongly toward this in my experience.
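
      (For anyone who hasn't done the alien dance before, it's roughly this; the file names here are invented:)

      alien --to-deb VMware-workstation.rpm      # produces a .deb alongside the .rpm
      dpkg -i vmware-workstation_*.deb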

  • by GweeDo ( 127172 ) on Saturday August 02, 2003 @05:03PM (#6596865) Homepage
    I have been using Gentoo for months now and will never turn back. Little of this has to do with performance; 99.9% of it has to do with Portage. Package management, dependency checking and the lot are SO great. Performance comes second. Without proper CFLAGS you might as well ignore this. On my Athlon XP 2800 I have this:

    CFLAGS="-march=athlon-xp -O3 -pipe -fomit-frame-pointer -mmmx -msse -m3dnow"

    In some simple tests I have done, I have seen this to be worthwhile. I have two pages I have created that might be worth a read:

    CFLAGS Guide [grebowiec.net]
    Is -mmmx and such worth it? [grebowiec.net]

    Hope you enjoy these reads.
    • I second that. I'm running Gentoo on all of my datacenter servers. I'm not as concerned about performance as I am about the ability to preserve the operating environment of the machines between OS upgrades.

      It takes months to get a mail server properly tweaked, or a delicate Apache installation operational. It really sucks to sweat bullets choosing between living with a root exploit, trying to re-synthesize RedHat's configuration from source, or praying that everything still works after doing an OS upgrade in place.

      T

    • Why do you disable SSE, MMX and 3DNow support? That seems counter-intuitive.
    • by Fnord ( 1756 ) <joe@sadusk.com> on Saturday August 02, 2003 @06:46PM (#6597302) Homepage
      Your "Is -mmmx and such worth it?" guide is a little unfair. The thing is, you notice that only -O3 really made much of a difference. Well, that's because each of your tests is just one big loop and -O3 does heavy loop unrolling. You basically chose the absolute optimal case for -O3 to win (besides possibly having a small function call inside that loop so that -O3 could inline it). And you didn't do any floating point multiplies or divides, which is where -mmmx+sse and -m3dnow would help you.

      If anything you were also using a relatively small dataset. If you get a large enough data set (or code size) -O3 might actually hurt you (loop unrolling and function inlining will bloat both code and data size and make it much more likely to have a cache miss).

      Anyways, synthetic benchmarks are one thing, but yours is so synthetic as to be ridiculous.
  • From the article: The Gentoo install suffered a couple of false starts due to a typo using grub and OpenOffice was still being compiled the night before the test. 11 hours later the OpenOffice compile was still going and we thus had to regretfully abandon that portion of the test.

    So when does the time taken to compile the app with extra optimizations exceed the time you save on tasks performed in that app? Of course, that's only if an optimized build is faster, which in these tests did not appear to be th
  • by tempest303 ( 259600 ) <<jensknutson> <at> <yahoo.com>> on Saturday August 02, 2003 @05:12PM (#6596897) Homepage
    Here you go, the obligatory Gentoo Zealot Translate-o-matic [upevil.net] reference!

    Enjoy!
  • Does it seem strange to anyone else that in the linked photo gallery [linmagau.org], the only picture with a female [linmagau.org] has been viewed 3 times more than any of the other pictures?

    Sheesh.

    I guess there's just not much scenery to show off at distro day.

  • might be seen when AMD finally launches the Athlon 64. Compiling everything to x86-64, and thereby increasing the number of registers substantially, as well as migrating to a slightly cleaner ISA (sorry, x86-32 lovers), should render better results than attempting to optimize for a Celeron (ewww!).

    On a less serious note, the server seems to be having a little bit of trouble; maybe they attempted to install Gentoo on their server (j/k).
  • I do my compiling while I'm at work and while I sleep. I also use distcc. I also like how there is no dependency hell, and I know what's in my Gentoo system, with no programs I don't use.
  • Catch 22 (Score:5, Insightful)

    by Alethes ( 533985 ) on Saturday August 02, 2003 @05:18PM (#6596924)
    It seems that the people who would benefit the most from a source-based distro and from optimizing binaries specifically for their hardware are the ones with slow hardware, for whom it will take too much time to get everything installed to be a worthwhile investment.
  • Celeron 2GHz processor, 256MB DDR RAM, Samsung SP4002H 40GB HD, MSI 6533E mainboard, all SiS chipset

    I think the performance benefits are related to compiling for a specific arch, say P4, over how standard distros package at i386 for compatibility reasons. This test would be much more interesting if it was done with an Athlon XP or P4. I'm not familiar with the Celeron arch though; would the /etc/make.conf setup be the same for a Celeron as for a P4? Why would anyone buy a Celeron with Athlons at 2GHz rated Athlons go

  • IMHO, the best thing about Gentoo is the ease with which you can write an ebuild. This leads to a lot of software being supported and current. It's awesome. Have a small software package that's not in Portage? Spend 5 minutes on the ebuild and everyone can have it.
    A new version of some software was freshly released? Sometimes all it takes is renaming the ebuild. I hope they don't complicate this any more than it already is; it's their greatest asset.
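
    To give a feel for it, a bare-bones ebuild is roughly this much (a sketch only; the package name, URLs and version are invented, and real ebuilds usually need a bit more):

    # foo-1.0.ebuild
    DESCRIPTION="Example package"
    HOMEPAGE="http://www.example.org/"
    SRC_URI="http://www.example.org/${P}.tar.gz"
    LICENSE="GPL-2"
    SLOT="0"
    KEYWORDS="x86"
    DEPEND=""

    src_compile() {
        econf || die "configure failed"
        emake || die "make failed"
    }

    src_install() {
        make DESTDIR="${D}" install || die "install failed"
    }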

    The question is: is this ease of packaging because the p

  • by aSiTiC ( 519647 ) on Saturday August 02, 2003 @05:32PM (#6596985) Homepage
    I didn't see anywhere in the story whether the Gentoo installation was done from scratch (stage1) or from stage3. I would think this would be a very important piece of information to mention.
  • by marienf ( 140573 ) * on Saturday August 02, 2003 @05:48PM (#6597062)
    Although I never actually *measured* anything, I have been moving all my boxen (except for one Duron on which I have found it quite impossible to compile Gentoo) to Gentoo 1.4rc4. I was actually in the process of building my own compile-in-place GNU/Linux called "Q-Gnu/Linux" when I discovered Gentoo did it all, and did it better. I was all RedHat before that (going so far as to wear a red fedora at parties - I have two of those). I find Gentoo, as opposed to RedHat, quite impressive, to say the least. My professional workhorse (on which I'm currently typing) is a Toshiba Satellite Pro 4300:

    model name : Celeron (Coppermine)
    stepping : 3
    cpu MHz : 597.077
    cache size : 128 KB

    ..with 384MB RAM.. and was becoming annoyingly slow in things requiring major GUI complexity, like OpenOffice, and at compiling many Java classes.

    Compiling Gentoo on there allowed the machine a third chance at life, the second one being when I got it (already old then) and installed RedHat on it, over that would-be-OS it came with. It just feels that much faster again. I am no longer annoyed by it at all. It took more than 4 days to compile all I wanted from the Gentoo 1.4rc4, but it was *well* worth it.

    I moved my personal little server, an Athlon Thunderbird, with the same impression. Currently running

    emerge system

    on my brand new Athlon XP 2600, expecting much from it.

    Bottom line: nothing but kudos for Gentoo; I'm wondering what went wrong during the tests described, or whether somehow the subjective speedups I have experienced are just auto-suggestion. I think not. I have been staring at CRTs since 1980, that's 23 years folks! And I tell ya, compiling stuff yourself is worth it. So if you have time on your side, go for LFS [linuxfromscratch.org], which I did, and slowly ground into Q-GNU/Linux. If you have some time, but not *that* much time, go for Gentoo [gentoo.org]; if you have no time, you poor schmuck, either get a life, or install SuSE :-), and pretend.. :-) :-)..
  • by eWarz ( 610883 ) on Saturday August 02, 2003 @06:06PM (#6597136)
    They optimized Gentoo for the P3 platform? Celerons at 1.4GHz and above are based on the P4 core.
  • Moore's Law (Score:3, Insightful)

    by nagora ( 177841 ) on Saturday August 02, 2003 @06:24PM (#6597216)
    The time to install from source halves every 18 months. Already, entry-level systems can compile a Gentoo desktop system up from stage1 to everything-except-OO in a day (and OO can be installed as a binary) and a server can be up and running in a couple of hours so compile time is not a big deal and gets less so every day.

    Given how much better Portage is than any of the other management systems, I'd say Redhat is going to suffer big losses at the hands of Gentoo (Debian would too, but the effect will be drowned out by the damage Debian is doing to Debian).

    So far I've converted six machines to Gentoo, all from Redhat because I couldn't face upgrading with RPMs anymore.

    TWW

  • by oobar ( 600154 ) on Saturday August 02, 2003 @07:57PM (#6597579)
    When I see things like the program time going from 39m 08s to 11m 21s (when all that was changed was a minor version number) that just screams -bad testing-.

    You should repeat every one of the tests a number of times, and make sure that you get the same (or similar) results each time. You should NEVER expect a 4:1 ratio in performance doing the exact same task on identical hardware. Bells that say "casual testing" should be going off when you see something like that.

    Besides, there are so many variables that have to be kept the same between the different installs - which services are running, how they are configured, what kernel options are set, what patches have been applied to the kernel, which modules are loaded... If you pick up Redhat 9 and do a "kitchen sink" install, you will hardly have the same amount of free RAM for caching, etc. compared to doing the "regular" install of some other distro that leaves things out. Hopefully it's obvious that such a comparison would not be fair at all.

    In short, you should take a given kernel source, with a fixed set of patches, options, settings, modules, etc., and compile it with the default i386 options and then a second time with all the fancy optimizations, then compare those. LEAVE EVERYTHING ELSE THE SAME! Repeat with glibc.

    The results in this article are just pathetic. They vary all over the place and are crying out for more rigorous testing methods and procedures. Making a good test is really a science, you have to design the test to specifically measure what it is that you're interested in. For all we know one of those tests could have already had a majority of the libraries loaded into the disk cache, resulting in the huge performance differences.
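
    Even something as dumb as the following gets you most of the way there: run each measurement several times and look at the spread rather than a single number, and either report warm-cache numbers consistently or reboot between runs so the disk cache starts cold each time. (The command being timed is just a placeholder.)

    for i in 1 2 3 4 5; do
        time ./run-benchmark > /dev/null
    done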
  • by GMFTatsujin ( 239569 ) on Saturday August 02, 2003 @10:50PM (#6598153) Homepage
    The major benefit for me was that Gentoo was the first distro I'd used that gave me the slightest clue about what the operating system was doing, and how the software worked.

    I'd tried RedHat, Mandrake, and a few other distros that set everything up for me so I could "just use it." The problem was that in just using it, I had no idea what I was trying to use. I would go looking for software to do x, y, or z, and I'd either find nothing that seemed to do the job, or a jillion different apps that all did the job differently, and I didn't know why to pick one over the other. Add to that the sense of being at the wheel of an out-of-control car every time I wanted to make a change to a .conf file, and my Linux experience was pretty frustrating.

    Gentoo was a brilliant introduction to installing a Linux-based OS. It started me off easy -- here's the command line, here are the commands to install the system, here are the .confs you can tinker with and what they do. It gave me flexibility while keeping the results trim. The USE flag is the most amazing option I've ever seen.

    Installing Gentoo was more like playing with LEGOs than installing a system, and when I got done with it, I had a computer that I knew, really *knew*. I knew all the init.d services and what they did. I knew what module was controlling what hardware in my kernel *and* how to fix it if it didn't detect properly. I knew all the apps installed, even by their weird names and locations, and I knew what they were there for. I knew it because I built it that way. And I never had to hunt down a dependency or resolve a version conflict. NOT ONCE. Redhat and Mandrake just installed this mysterious Linux Stuff and threw the computer back at me when done. Gentoo got my hands dirty with building it up, but didn't make me jump through hoops to do it.

    The benefit was teaching me what my computer was doing when I used it.

    *THAT* is how I wanted my computer to run. And it does. Thanks, Gentoo team!

    GMFTatsujin
  • Enough Already... (Score:5, Insightful)

    by khyronmaetel ( 565342 ) on Saturday August 02, 2003 @11:32PM (#6598293)
    OK, I'm a Gentoo user. I'll admit a sizable percentage of our ranks don't know what they're talking about; I'll even admit that most distro "speed" is in the user's head. But most of you are missing the point. Many Gentoo users (including myself) installed Gentoo as an ongoing learning experience. Sure, there's really no difference between the "l337ness" of typing emerge foobar and typing rpm -ivh foobar. But those of us who have taken the time to understand the Portage system have learned a great deal. As an aspiring programmer, this was my distro of choice because it enabled me to learn about gcc. Also, I like the idea of being able to beta test bleeding-edge applications (although most people install standard packages). While there are a lot of phoney Gentoo users who are under the impression that they're furthering the open-source movement by emerging packages, Gentoo's backbone is a highly active community of volunteers who are really interested in Open Source. Basically, all I'm trying to say is that any idiot can probably get Gentoo installed and working, but the real point is to understand the OS that you've built, and I've found that Gentoo helps me and others do this better than package-based distros.
