Measuring The Benefits Of The Gentoo Approach 467

An anonymous reader writes "We're constantly hearing how the source based nature of the Gentoo distro makes better use of your hardware, but no-one seems to have really tested it. What kind of gains are involved over distros which use binary packaging? The article is here."
Comments Filter:
  • Misses the point (Score:5, Interesting)

    by keesh ( 202812 ) * on Saturday August 02, 2003 @05:33PM (#6596725) Homepage
    The source-based thing isn't even why most people use gentoo. According to a recent poll on the gentoo-user mailing list, most people like it because of Portage (the package management system), with Customisation / Control coming in second (performance was third). Portage rocks. Even with the compiling, it takes less time to install some stuff (eg nmap) than it would take to locate the relevant .rpm. Of course, kde's a different matter, but with distcc compiling doesn't take too long.

    Having said that, it looks like the guys doing the testing got their CFLAGS wrong. Gentoo's performance should never be worse than Mandrake -- I reckon they forgot omit-frame-pointer. Also, the kernel compile is unfair, because gentoo-sources includes a whole load of patches that Mandrake and Debian don't.

    Finally, what's with measuring compile times? How is that a fair way of measuring performance? Hey, look, my distcc + ccache + lots of CPUs system with gcc3.2 can compile stuff faster than your single CPU gcc2 system... It's like comparing chalk and oranges.
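The CFLAGS and distcc setup being argued about here lives in Gentoo's /etc/make.conf. A minimal sketch might look like the following; the specific flag choices and host names are illustrative assumptions, not Gentoo's defaults:

```shell
# /etc/make.conf -- illustrative sketch only
CHOST="i686-pc-linux-gnu"
# conservative arch flags; -fomit-frame-pointer trades debuggability for speed
CFLAGS="-O2 -march=pentium3 -fomit-frame-pointer -pipe"
CXXFLAGS="${CFLAGS}"
# distcc farms compiles out to helper boxes; ccache reuses earlier results
FEATURES="distcc ccache"
DISTCC_HOSTS="localhost fastbox1 fastbox2"   # hypothetical host names
MAKEOPTS="-j5"   # rule of thumb: total CPUs across distcc hosts, plus one
```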
  • Slow? (Score:5, Interesting)

    by peripatetic_bum ( 211859 ) on Saturday August 02, 2003 @05:38PM (#6596752) Homepage Journal
    Can this be correct? Debian turns out to be the fastest?

    Anyway, I like the idea of Gentoo, and I saw a lot of Debian users head over to Gentoo because the idea of controlling everything, including the build, was nice. However, I also saw the Gentoo idea pretty much die, since a lot of these users are power desktop users and not everyone could wait 3 days for X to build.

    What I like about Debian's packages is that if you do make a mistake, you can always pretty much correct the package by fixing your source list, or going to packages.debian.org, getting the older working package, and installing it manually with a simple dpkg -i old_package.deb.

    In Gentoo, you had to rebuild the whole thing, which with X could take forever. And so what I saw Gentoo suddenly doing was providing a lot of pre-compiled binaries, because they saw the problem with building taking forever. That kind of killed the whole idea of building for yourself -- in which case, if you are going to stick with pre-built packages, why not have them maintained by some of the best developers around (i.e. Debian)?

    The other thing I noticed is that a lot of software developers actually use Debian. I've noticed many a time that some cool software was being made and the developers would provide source and a Debian package and nothing else. I.e., Debian appears to be the preferred developer's distro. I would like to hear discussion on this.

    Thanks all
  • FreeBSD's ports (Score:4, Interesting)

    by Anonymous Coward on Saturday August 02, 2003 @05:39PM (#6596758)
    I don't use Gentoo (When I use Linux, I use Slackware), but I do use FreeBSD and its ports collection.

    Purported performance gains are one thing source packages give you (although I don't enable super optimizations because you never know when gcc bugs with -march=pentium4 -O3 or whatever will bite you).

    There are two major reasons I like installing from source, though. One is that you can customize the build to your system; lots of software packages have various compile time options, and when I have the source I can choose exactly how it's going to be built.

    Another thing is that when you install from source, you can hack the program to your heart's content. On my desktop box there are around 15 programs that I have to modify to get to act like I want (from simple things like getting cdparanoia to bomb immediately when it detects a scratch to halfway complex things like rewriting parts of klipper and XScreenSaver, which now picks a random screen saver on MMB and lets me scroll through all screensavers with the wheel =).

    I don't modify stuff on my servers, but I still get to choose exactly how things are built, which I very much enjoy.
  • by BrookHarty ( 9119 ) on Saturday August 02, 2003 @05:41PM (#6596768) Journal
    While the posts are starting and people are saying Mandrake would never be faster, let's go back earlier this year...

    Remember the KDE optimizations that were not included in the Gentoo source release? Everyone was wondering why KDE was faster on Mandrake. There was talk for over 2 months before people realized it was an option Mandrake was compiling with.

    For me, Gentoo's biggest feature was the kernel compile options, adding patches for pre-emptive multitasking and improved responsiveness. I noticed the improvements on all my machines, but the compile times were a drawback. And sometimes the applications wouldn't compile.

    Mandrake, while my favorite choice, doesn't include the best pre-emptive kernels, which do make a noticeable difference. So after installing Mandrake, putting a newer kernel on the system normally takes care of that.

    I'm just waiting till beta2 of Mandrake Cooker 9.2 with the 2.6 kernels; that should make Gentoo and Mandrake on par for speed.
  • Re:Misses the point (Score:5, Interesting)

    by arkanes ( 521690 ) <<arkanes> <at> <gmail.com>> on Saturday August 02, 2003 @05:42PM (#6596769) Homepage
    The key points to recognize from the article are: a) Gnumeric's performance sucks (8 minutes to open a file? I won't even think about the other version...) and b) the CPU is not a significant bottleneck in modern systems. We all knew that. It's one reason why so many people are happy with binary packages: the speed increase from saving some cycles generally isn't worth the extra time you lose compiling (as seen, in many cases it makes zero difference).

    I would have liked to see some tests with things that are more CPU than IO bound, but, realistically, how often do you do those things in the normal case?

    If the main reason is to use portage for the convenience (same reason many people use debian), maybe they need to expand portage to support binary packages.

  • by shoppa ( 464619 ) on Saturday August 02, 2003 @05:42PM (#6596770)
    I don't use Gentoo, but I do use Linux From Scratch [linuxfromscratch.org], and I do see substantial improvements with command-line-type activities: a kernel build on an Athlon is about 20 percent faster when I do it with a custom LFS build vs. a stock Red Hat installation.

    Most of the comparisons in the article were for X-related graphics applications, and while they were comparing the versions of the applications, they were not comparing the libraries underneath them (glibc, X11, and probably the window manager too come into play) and they should've compared versions there too. It becomes complicated because for a typical X11-based app there are probably several dozen libraries involved (in addition to all the configure-time options for them...)

  • Why use it? (Score:3, Interesting)

    by Realistic_Dragon ( 655151 ) on Saturday August 02, 2003 @05:42PM (#6596771) Homepage
    I picked Gentoo because it was Free and free, and because emerge has, IME, one big advantage over APT: one well-updated, consistent, all-encompassing repository.

    OTOH my laptop runs RedHat, because I needed at least one machine running it to stay current with where they dump configs (it's the distro they use at work). Coupled with Apt-RPM it's competent enough, and I have no major problems with the performance.

    So yeah, I have to agree with the article -- you may like it one way, others may want to do their own thing. No matter what you choose, you (probably) have binary compatibility, so who gives a sh!t about the holy wars, just as long as you aren't running Windows :D
  • Re:Misses the point (Score:3, Interesting)

    by tweek ( 18111 ) on Saturday August 02, 2003 @05:42PM (#6596772) Homepage Journal
    Well in all fairness, ccache only does any good after the first compile. The distcc option however does make a difference.

    I will agree that the biggest thing for me with Gentoo is actually being able to strip stuff out of an install with a simple USE flag. I actually prefer to build things myself, but having a package management system that takes care of dependencies for that is a godsend.
  • by grotgrot ( 451123 ) on Saturday August 02, 2003 @05:46PM (#6596800)
    I tried Gentoo for a while and eventually gave up. The problem is that you still have dependency hell. Most packages look for stuff at compile time, and many have optional components. For example a video player may not include support for QuickTime unless the libraries are already on there at compile time.

    So the fun starts when you start installing stuff, they don't include support for other components because they weren't there at compile time, you then discover the missing support, have to install the missing libraries and then recompile every package.

    This is an especially big issue with multi-media stuff, and gets many layers deep as some libraries have optional components depending on other optional components.

    About the only way to guarantee a fully up-to-date system is to keep doing complete recompiles of the entire system until there are no changes.
  • by Captain Kirk ( 148843 ) on Saturday August 02, 2003 @05:48PM (#6596806) Homepage Journal
    I dual-booted Debian and Gentoo thinking I would migrate completely to Gentoo for desktop use and Debian for servers. Galeon on Debian was way faster. In the end, I got fed up with compiling and re-compiling X and stuff, trying various gcc switches. Debian is fast enough to make sitting around waiting for stuff to compile a waste of time. And apt-get is every bit as good as emerge.
  • by Enahs ( 1606 ) on Saturday August 02, 2003 @05:48PM (#6596811) Journal
    For me, Gentoo is a great choice partially because I like the control and partly because I use crufty hardware that doesn't fall into any predefined (read: Intel) category.

    Try using binaries compiled for an i686 on a Via C3-1G, for example.

    Yes, if your entire reason for using Gentoo is to have control over how apps are built, starting from stage3 pretty much defeats the purpose, and yes, if you don't know what you're doing, then rebuilding X can be a real drag. However, I have to say that I appreciate the fact that Gentoo manages to avoid a lot of legal issues by having the user build the packages her/himself. Honestly, I'd love to be Ogg Vorbis-only for music on my computer, but when I own a portable MP3 player, an MP3-capable DVD player, an in-dash MP3 player, and use OS X at work where QuickTime Ogg Vorbis support is dodgy at best, I want lame. And I want lame support built into kdelibs or whatever lame support needs to be built into so that I can drag-and-drop 192kbps ABR MP3s from an audiocd:// ioslave window to my mp3 folder. ;-D

    My own experience has been that Gentoo outperforms Debian on my hardware, but only after I've done some tweaking on Gentoo. YMMV.

  • One more thing (Score:5, Interesting)

    by Enahs ( 1606 ) on Saturday August 02, 2003 @05:55PM (#6596833) Journal
    From the article:

    Upon testing with hdparm, it was apparent that this machine was having troubles setting above udma2. Eventually this problem was traced to the HD cable, a salutary lesson in the variability of identical hardware setups.

    Very telling pair of sentences.

  • Re:Misses the point (Score:5, Interesting)

    by antiMStroll ( 664213 ) on Saturday August 02, 2003 @06:08PM (#6596879)
    What's missing in the article is the second half of Gentoo's compile options, the /etc/make.conf USE variables. CFLAGS determines CPU architecture; USE adds or removes the options for extra software support. In stock form Gentoo compiles binaries with a huge number of add-ons, including support for KDE, Gnome, framebuffer, etc. From make.conf: "USE options are inherited from /etc/make.profile/make.defaults." The list from a current Gentoo 1.2 looks like:

    USE="x86 oss 3dnow apm arts avi berkdb crypt cups encode gdbm gif gpm gtk imlib java jpeg kde libg++ libwww mikmod mmx motif mpeg ncurses nls oggvorbis opengl pam pdflib png python qt quicktime readline sdl slang spell ssl svga tcpd truetype X xml2 xmms xv"

    Without knowing what support Debian or Mandrake used to compile binaries, this is still an apples/oranges comparison. My notebook isn't configured to compile with KDE or Gnome extensions because the hardware is too old and I use Fluxbox. Mandrake and Debian may still turn out faster (the Gentoo Mozilla ebuild was legendary for being slow), but that's not quite proven yet.

  • by 0x0d0a ( 568518 ) on Saturday August 02, 2003 @06:17PM (#6596922) Journal
    Having said that, it looks like the guys doing the testing got their CFLAGS wrong. Gentoo's performance should never be worse than Mandrake -- I reckon they forgot omit-frame-pointer.

    Omit-frame-pointer is not a regular optimization. Working without stack traces to hand to a developer if you have a problem isn't really a reasonable optimization unless you're doing something like an embedded system, where you couldn't get at the stack trace anyway.

    This is *exactly* what the real tech-heads have been saying for years, and what my tests confirm: a minor change in a couple of compile flags above -O2 almost *always* makes very little difference. Compiling your own packages really just plain doesn't matter. Maybe if gcc really were incredibly tuned to each processor, but certainly not with the current compiler set.

    Also, the kernel compile is unfair, because gentoo-sources includes a whole load of patches that Mandrake and Debian don't.

    And perhaps the inverse is true, too?

    Look, the point is, Gentoo is not significantly faster than any other general distro out there. If you use it, it's because you like their tools or packaging scheme. You aren't cleverly squeezing out more performance.

    Oh, and last of all, I've seen compiler folks saying that it's not that unusual for -O3 to perform worse than -O2. From the cache performance analysis work I did in university: cache hits and misses really *are* the dominant factor in almost all cases. Loop unrolling and function inlining can be a serious loss.

    Finally, compiling for different architectures generally makes very little difference on any platform other than compiling for i586 on a Pentium. The Pentium runs 386 code rather slowly. The PII and above will happily deal with 386 code.
  • Re:Misses the point (Score:5, Interesting)

    by cperciva ( 102828 ) on Saturday August 02, 2003 @06:20PM (#6596933) Homepage
    first, I'd love to see a distro be faster than "up2date package_name" or even "apt-get package_name".

    FreeBSD Update [daemonology.net]. OK, it only upgrades the base FreeBSD install, starting at binary releases, along the security branches; but it uses binary patches [daemonology.net] to dramatically cut down on bandwidth usage (and therefore the time used). A typical install of FreeBSD 4.7-RELEASE (released in October 2002) has 97 files totalling 36MB which need to be updated for security reasons; FreeBSD Update does this while using under 1.6MB of bandwidth.
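The bandwidth win comes from shipping deltas instead of whole files. The same idea in miniature with plain diff/patch (FreeBSD Update uses binary diffs, but the mechanism is analogous; the file names here are made up):

```shell
#!/bin/bash
set -e
printf 'line1\nline2\nline3\n'         > /tmp/old.txt   # what the user has
printf 'line1\nline2 patched\nline3\n' > /tmp/new.txt   # what the server has
# the patch carries only the changed lines, not the whole file
diff -u /tmp/old.txt /tmp/new.txt > /tmp/update.patch || true  # diff exits 1 when files differ
patch -s /tmp/old.txt /tmp/update.patch
cmp /tmp/old.txt /tmp/new.txt && echo "updated copy matches"
```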
  • Re:Misses the point (Score:5, Interesting)

    by buchanmilne ( 258619 ) on Saturday August 02, 2003 @06:21PM (#6596942) Homepage
    Portage rocks.

    If you have a fast processor. My Duron 800 can keep itself busy for a weekend compiling OpenOffice.org ...

    Even with the compiling, it takes less time to install some stuff (eg nmap) than it would take to locate the relevant .rpm.

    This is on my Thinkpad 600X, which is a 500MHz PIII with 192MB of RAM and a pretty slow disk:

    [root@bgmilne-thinkpad mnt]# rpm -q nmap
    package nmap is not installed
    [root@bgmilne-thinkpad mnt]# time urpmi nmap
    installing /var/cache/urpmi/rpms/nmap-3.00-2mdk.i586.rpm

    Preparing...
    #some hashes replaced to fool the lameness filter#
    1:nmap
    #some hashes replaced to fool the lameness filter#
    5.34user 1.36system 0:26.76elapsed 25%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+0outputs (1712major+8394minor)pagefaults 0swaps


    You would need quite a system to beat 26s I think.

    Also, the kernel compile is unfair, because gentoo-sources includes a whole load of patches that Mandrake and Debian don't.

    From the article:

    "The same 2.4.21 source was copied to all machines and compiled using the same options. However, it should be noted that the Debian system used gcc 3.3.1 whilst the Mandrake and Gentoo installations used gcc 3.3.2 ."

    I don't see the point of not using the default compiler on the system; and if you don't use the default compiler on each machine, you should at least use the same compiler across them all.

    But, otherwise, the comparison looks pretty fair.
  • by Enahs ( 1606 ) on Saturday August 02, 2003 @06:23PM (#6596947) Journal
    0 is where it should be. Other distributions run X at a higher priority to make up for "vanilla" 2.4's crappy interactive performance. Once 2.6 is out, this won't be an issue anymore. Also Gentoo's gaming-sources and ck-sources (my personal favorite) have optimizations built in that improve interactive performance greatly, eliminating the reason for running X at a higher priority. Some people report having problems with X running at higher priority; I never have (some people have problems with soundcard starvation, among other things) but then again I don't have to worry.



    To be fair, I'm running RH9 until sometime tonight (have a chrooted Gentoo build waiting to be installed) but I'm running a Planet CCRMA kernel, which includes a number of the ck-sources patches.

  • Re:Slow? (Score:3, Interesting)

    by ctr2sprt ( 574731 ) on Saturday August 02, 2003 @06:29PM (#6596974)
    Yeah, I was a Debian user and tried Gentoo when it first came out. I'd used FreeBSD and knew and loved the ports collection, so I was excited about Gentoo. Unfortunately, the initial releases seemed broken in several severe ways. Half the software in Portage wouldn't compile at all, and I didn't really feel like digging into the source to find out why. I'm not some idiot newbie -- I'm a computer programmer and have been using Unix for nearly 10 years. I just wanted the install to work, and it wouldn't, no matter what I tried.

    Anyway, obviously Gentoo has improved since then, but this is a concern with their Portage system (developers accidentally breaking parts of it -- and if those parts happen to be gcc, look out!). It happens in FreeBSD every now and then, but it's not as big a deal there since the BSDs use actual releases: the ports are all tested against a specific release and verified to compile, then the ports library is frozen until the next release comes out (which will be tested similarly). So here's a question buried in this rambling anecdote: does Gentoo provide a way of getting "stable" ports, or is the entire OS like the "unstable" branch of Debian?

  • by BrookHarty ( 9119 ) on Saturday August 02, 2003 @06:29PM (#6596976) Journal
    My own experience has been that Gentoo outperforms Debian on my hardware, but only after I've done some tweaking on Gentoo. YMMV.

    How true. I wish we had a 3DMark-type program for Linux, where we could test X performance in 2D/3D, audio, HD, CPU, memory, and even latency for each area. Maybe even report performance and features for OpenGL, to see what the drivers do and don't support.

    A good benchmark program could be used to see if newer kernels are really faster (i.e. 2.6), or even those nice pre-emptive kernel patches.

    That's one part lacking in Linux/Unix: hardware testing/benchmark programs. Not counting FS benchmarks, there are only handfuls on freshmeat.

    Though maybe a benchmark program would make Linux look slower than Windows in the desktop area. (Any ideas?)
  • Re:Misses the point (Score:5, Interesting)

    by dougmc ( 70836 ) <dougmc+slashdot@frenzied.us> on Saturday August 02, 2003 @06:39PM (#6597028) Homepage
    the CPU is not a significant bottleneck in modern systems.
    Are you on crack? Even today, CPU speed is a significant bottleneck for many operations.

    Now, CPU speed has increased by a larger factor than memory speed and disk speed over the last few years, but it's still quite a large bottleneck in many operations, including the ones tested in this (admittedly lacking) test.

    Even the speed of a kernel compile, which is often given as a classic `disk I/O bound' process, is extremely CPU-bound. How do I know? Running `top' on an idle box shows 0% CPU utilization. Once I start the compile, it goes to well over 90% CPU utilization and stays there until the compilation is done. (Just to be complete, I'm testing this on a dual P3 700 box with SCSI disks, doing a `make -j2'. But even my 2GHz Athlon computer with IDE disks behaves similarly.)

    Perhaps I'll do some tests with adjusting the CPU multiplier on a given box, see how that affects compilation times. That would be an excellent test ...

  • by Sloppy ( 14984 ) * on Saturday August 02, 2003 @07:39PM (#6597276) Homepage Journal
    The reason for getting away from RPMs is when you see some software that you want, download the RPM, and you can't install it because it wants some other RPM that you don't have. Or worse, it refers to an RPM for a package that you do have, but it's a different version.

    If you don't have that problem, then stick with what you have. Your life is good. Be happy.

    (Back when I used Mandrake and Red Hat, I had that problem all the time and it was very frustrating.)

  • Re:Misses the point (Score:3, Interesting)

    by be-fan ( 61476 ) on Saturday August 02, 2003 @07:55PM (#6597343)
    I second that. I don't use Gentoo because I think I can get a minuscule 0.5% extra performance by compiling myself, but because of Portage and the Gentoo community. Portage is awesome, and the source-based nature means that ebuilds come out extremely quickly and are less subject to distro-specific "customizations" (read: quirks) than binary packages. All the ebuilds on BreakMyGentoo.net, as well as the ebuilds posted to the bug tracker, are a phenomenal example of how the power of Portage allows a relatively small community to maintain a large software library. The Gentoo community is also one of the nicest out there (forums.gentoo.org rocks).
  • Re:Misses the point (Score:4, Interesting)

    by loginx ( 586174 ) <xavier&wuug,org> on Saturday August 02, 2003 @08:11PM (#6597418) Homepage
    I've read the article very carefully and I've also looked at the people who wrote it...

    Real-world benchmarking is a serious matter; I don't even know why this article made its way onto /.

    It's rather obvious that those people were not familiar with CPU optimizations and were not thorough with the benchmarking, considering they didn't even bother to check the version/revision of the base packages of the distros they were working with, even though they admit that even minor revisions play a considerable role in performance.

    1) What are the versions/revisions of GCC on those machines? A package compiled and optimized with GCC 3.3, or even the 3.4 beta, will obviously offer much better performance than the old deprecated gcc 3.2 that Gentoo will install by default if you haven't set up your ACCEPT_KEYWORDS.

    2) What hard-drive optimizations did they set up? Distros like Mandrake, Debian or Red Hat apply hdparm optimizations after the first install; Gentoo barely does... That alone would make a big difference when opening a 32,000-line spreadsheet...

    3) What are the versions/minor revisions of the GNOME window manager on all those boxes? And GTK? Those packages provide the controls and rendering for Gnumeric... Having any difference in these is not fair play either... (Try installing the same version of Gnumeric on Red Hat 9.0 and Red Hat 7.2 and see if the performance is the same.)

    To get back to the example of the sports-car race, this is kind of like benchmarking a Porsche and a Ferrari, but you put diesel in the Ferrari and forget to inflate your tires...

    Basically, if you haven't used Gentoo on a system for a while and learned how to optimize it, don't go and say that the optimizations don't work... They work perfectly well for me, but I couldn't see a difference in my first 2 weeks of using Gentoo... It's something you have to learn... You can't just install it and expect the distro to self-optimize for you; it's not even supposed to.
  • Re:Misses the point (Score:4, Interesting)

    by realdpk ( 116490 ) on Saturday August 02, 2003 @08:17PM (#6597431) Homepage Journal
    Try -static -- on FreeBSD it can significantly improve performance for very fast-running binaries (such as ones designed to run 100s of times per second). I dunno about Linux (although I've noticed that Linux seems to prefer dynamic binaries for everything).
  • So is linux (Score:3, Interesting)

    by r6144 ( 544027 ) <r6k&sohu,com> on Saturday August 02, 2003 @08:31PM (#6597489) Homepage Journal
    Especially when you link to a lot of libraries, dynamic linking can often take a lot of milliseconds. Prelinking helps a bit, but static linking is the fastest. On my machine, running the simplest possible program takes 5ms when dynamically linked and 3ms when statically linked; "user time" is 1.1ms vs. 0.4ms.
  • Re:Just wondering... (Score:1, Interesting)

    by Anonymous Coward on Saturday August 02, 2003 @08:45PM (#6597532)
    In real life, Gentoo is faster!

    Sorry, that's meaningless. It doesn't matter how it feels to you. There are many, many things affecting that. When you get a Gentoo system into a usable state (i.e., the userland, kernel, and boot loader are functional, so you can reboot the system and install software), it has basically nothing installed. You manually install every single thing you want, and there are next to no services running. Now, even with Slackware, if you check one of the default install options, it will install a lot of server software that strictly speaking you probably don't need, and all of that will be running at startup. And yeah, it might feel faster without that stuff running. That has nothing whatsoever to do with Gentoo. Also, their kernel has all sorts of patches for preemption, etc. applied, which Slackware will not touch until they are well-tested and part of the default kernel. That will make it seem faster too, but has nothing to do with Gentoo per se.

    Can you see that this test has a level of objectivity beyond your personal experiences? Each distribution was installed on the same hardware and running the same kernel, and everything else was in a bone-stock configuration. They ran these tests, and those numbers are what you see. Sure you will whinge, because it invalidates the several days you spent building your entire system from source, but if you want to complain about this test you are going to have to come up with a superior methodology which produces different results, not just personal anecdotes.


    You could say Gentoo is like LFS but with a very good packaging system (which is what LFS lacks), and it's much easier to manage.


    No. It's not "from scratch" at all, in the sense that all meaningful steps in the installation are automated, and the remainder are formulaic enough that you can understand why most distributions script the installation. The benefit of LFS is that, by installing each piece yourself and attending to every bit of configuration, you gain a deeper understanding of how the system works. You get next to none of that from Gentoo, except the massive time requirement.
  • by oobar ( 600154 ) on Saturday August 02, 2003 @08:57PM (#6597579)
    When I see things like the program time going from 39m 08s to 11m 21s (when all that was changed was a minor version number), that just screams -bad testing-.

    You should repeat every one of the tests a number of times, and make sure that you get the same (or similar) results each time. You should NEVER expect a 4:1 performance ratio doing the exact same task on identical hardware. Bells should be going off that say "casual testing" when you see something like that.

    Besides, there are so many variables that have to be kept the same between the different installs: which services are running, how they are configured, what kernel options are set, what patches have been applied to the kernel, which modules are loaded... If you pick up Red Hat 9 and do a "kitchen sink" install, you will hardly have the same amount of free RAM for caching, etc. compared to the "regular" install of some other distro that leaves things out. Hopefully it's obvious that such a comparison would not be fair at all.

    In short, you should take a given kernel source, with a fixed set of patches, options, settings, modules, etc., compile it with the default i386 options and then a second time with all the fancy optimizations, and compare those. LEAVE EVERYTHING ELSE THE SAME! Repeat with glibc.

    The results in this article are just pathetic. They vary all over the place and are crying out for more rigorous testing methods and procedures. Making a good test is really a science, you have to design the test to specifically measure what it is that you're interested in. For all we know one of those tests could have already had a majority of the libraries loaded into the disk cache, resulting in the huge performance differences.
  • by hoddi ( 654676 ) on Saturday August 02, 2003 @09:09PM (#6597625)
    Once I set out to prove that wrong. My test was purely CPU-bound, to see the real benefit of the optimization. I have since lost my results... but you can do it yourself. Here is what I did:

    time cat /dev/kcore | gzip -f > /dev/null

    My results showed that with correct optimization I would gain up to a 10% increase in throughput. But in the real world, the gain would be less.
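A variant of that test that doesn't need root (reading /dev/kcore does): compress a fixed file of random bytes, which keeps gzip doing real CPU work. The sizes and paths here are my own choices:

```shell
#!/bin/bash
set -e
# random input is incompressible, so gzip can't take shortcuts;
# the pipeline is CPU-bound in gzip, so wall time tracks throughput
dd if=/dev/urandom of=/tmp/bench.dat bs=1M count=16 2>/dev/null
time gzip -c /tmp/bench.dat > /dev/null
# rebuild gzip with different CFLAGS and rerun to compare optimizations
```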
  • by Arker ( 91948 ) on Saturday August 02, 2003 @09:12PM (#6597632) Homepage

    The odd thing is that, from what I've read, a lot of Gentoo folk seem to be trying to compile everything with -O3. This is, frankly, bloody stupid. It turns on a lot of 'optimisations' that are only useful for a few programs and actually harmful for most, and is probably one of the reasons Gentoo looked bad in this test.

    O1 is the safe level of optimisation. Even O2 runs the risk of doing more harm than good, although it's a fairly low risk. O3 runs a very high risk of doing more harm than good. In many cases -Os is probably the best option anyway, because I/O is more commonly the bottleneck than CPU capability.

    And the processor optimisations can also be risky. Every processor out there is designed to run commercial i386 code as fast as possible anyway.

    The big wins in compiling yourself are control of configure options, not compiler optimisations. I think source-based distributions are a great idea, but I have to wonder if most people using them right now are getting the benefits.

  • by GMFTatsujin ( 239569 ) on Saturday August 02, 2003 @11:50PM (#6598153) Homepage
    The major benefit for me was that Gentoo was the first distro I'd used that gave me the slightest clue about what the operating system was doing, and how the software worked.

    I'd tried RedHat, Mandrake, and a few other distros that set everything up for me so I could "just use it." The problem was that in just using it, I had no idea what I was trying to use. I would go looking for software to do x, y, or z, and I'd either find nothing that seemed to do the job, or a jillion different apps that all did the job differently, and I didn't know why to pick one over the other. Add to that the sense of being at the wheel of an out-of-control car every time I wanted to make a change to a .conf file, and my Linux experience was pretty frustrating.

    Gentoo was a brilliant introduction into how to install a Linux-based OS. It started me off easy -- here's the command line, here are the commands to install the system, here are the .confs you can tinker with and what they do. It gave me flexibility while keeping the results trim. The USE flag is the most amazing option I've ever seen.

    Installing Gentoo was more like playing with LEGOs than installing a system, and when I got done with it, I had a computer that I knew, really *knew*. I knew all the init.d services and what they did. I knew what module was controlling what hardware in my kernel *and* how to fix it if it didn't detect properly. I knew all the apps installed, even by their weird names and locations, and I knew what they were there for. I knew it because I built it that way. And I never had to hunt down a dependency or resolve a version conflict. NOT ONCE. Redhat and Mandrake just installed this mysterious Linux Stuff and threw the computer back at me when done. Gentoo got my hands dirty with building it up, but didn't make me jump through hoops to do it.

    The benefit was teaching me what my computer was doing when I used it.

    *THAT* is how I wanted my computer to run. And it does. Thanks, Gentoo team!

    GMFTatsujin
  • by 0x0d0a ( 568518 ) on Sunday August 03, 2003 @01:20AM (#6598465) Journal
    Look, Squinky86. I'm not simply pulling this out of my ass. I've had to cover this back in university. I'm not a gcc developer, but I have sat down and gone through generated assembly from the compiler, and have spent many hours tweaking software in every way possible to make it run at a reasonable rate on my old P2/266. There are very few pieces of software for which arch flags make a measurable difference (as a later poster noted, gzip is one).

    As for individually specifying flags, I'd be fascinated to know what you're trying to use above -O3. You can use -ffast-math. It's unlikely to provide particularly useful results. Approaches like this have been done for a long time (see libmotosh on the PowerPC) -- they can cause the rare, PITA-to-find problem, and any libs or programs that really need the speed increase have probably done custom work that's even faster than anything you're going to pull off with -ffast-math (libfftw, for instance). You *might* get some performance gains on povray... but most folks I know compile povray themselves anyway, since it isn't packaged by, say, Red Hat.

    -fstrict-aliasing is a *very* ballsy flag to use if you haven't actually written the software yourself. It's a pretty safe bet that building a lot of unknown software with -fstrict-aliasing will break it. There's a good reason strict aliasing is off by default -- valid C programs (easy ones to write, too) will die with this option, occasionally and in odd ways. You're a damned fool if you use this on *any* code that you did not write or that doesn't explicitly say it was written to allow this optimization. I just finished talking to an optimizing-compiler designer Thursday who reinforced my feelings about aliasing-dependent optimizations -- they're almost always a bad idea, since the small speedup isn't worth the random problems that you can very easily induce.

    -fomit-frame-pointer can produce a small benefit, but a surprisingly small one, and it makes tracking down any crashing bugs, or requesting help with a crashing bug, infeasible.
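    For anyone wondering what such a breakage looks like, here is the classic pattern (a sketch, not from any real package): type-punning a float through an unsigned int pointer, which strict-aliasing optimisations are allowed to miscompile, next to the memcpy form that is always well-defined:

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Undefined behaviour under strict aliasing: the compiler may assume
     * that a float* and an unsigned int* never point at the same object,
     * so it can reorder or drop these accesses. */
    unsigned int bits_unsafe(float f) {
        return *(unsigned int *)&f;
    }

    /* Well-defined: copy the object representation instead of punning. */
    unsigned int bits_safe(float f) {
        unsigned int u;
        memcpy(&u, &f, sizeof u);
        return u;
    }

    int main(void) {
        /* 1.0f is 0x3f800000 in IEEE-754 single precision. */
        printf("%08x\n", bits_safe(1.0f));
        return 0;
    }
    ```

    The unsafe version usually happens to work at low optimisation levels, which is exactly why this class of bug only surfaces "occasionally and in odd ways" once aggressive flags are turned on.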

    If you know how to and properly configure your compiling flags, the speed gains are tremendous

    Bullshit. The vast majority of software I've run benchmarks on will never see as much as a 10% performance gain (and that's being *very* generous... most will see no measurable change) with anything other than the default -O2 or -O3.

    Oh, hell. I was a lot like you not so many years ago, sure that I could speed things up if I just found the right ways to manipulate the code. The only cure for it is actually sitting down and benchmarking things yourself, since you're sure that everyone else is doing something wrong.

    Go ahead, you'll see what I mean. Try building libs that tie up a lot of CPU cycles in multiple apps like libjpeg -- that's where you're going to see your best payback for any optimizations. Time a couple runs.

    but if you just try gentoo, the learning experience and speed gains are very noticeable.

    I think I've addressed speed gains. As for the learning experience, Gentoo is not synonymous with compiling software from source (from that standpoint, Slackware and similar distros blow Gentoo away). I've never bought into the "learning experience" claims -- let folks start out on the GUIs their distro maker provides, and then, regardless of distro, you can quite happily find out what's going on.

    This is not to say that I don't think Gentoo is a worthy distro. I'm a bit of a package-management aficionado, and emerge certainly interests me. However, the kind of sweeping claims I see the occasional Gentoo user make on Slashdot are ridiculous. The general-purpose Linux distros are all fairly close together. Distro fans tend to be produced when someone fails to understand how to properly use a different distro (or got accustomed to one), has sucked down some false claims from other folks, or just doesn't want to consider that the distro they've sunk lots of time into learning isn't far better than any other choice.

    Now, if you happen to like Gentoo, go for it. But like it for the legitimate reasons, not inflated false ones. Don't make exaggerated claims WRT it, because misinformation certainly doesn't help out Linux folks in the long run.
  • Re:Catch 22 (Score:2, Interesting)

    by stang7423 ( 601640 ) on Sunday August 03, 2003 @01:43AM (#6598530)
    For those of you installing gentoo on slow hardware here are your installation instructions:

    1. %emerge system
    wait 24 hours...
    2. %emerge "all packages you want"
    wait 24 hours...
    3. Profit ??? your system is complete.

    For those of you who say compiling on slow hardware isn't a worthwhile investment of your time: stop watching every line of code compile. Your computer is a big boy and can operate for hours on end without you looking over its shoulder.

  • Re:Misses the point (Score:1, Interesting)

    by Anonymous Coward on Sunday August 03, 2003 @02:01AM (#6598574)
    -Os with shared objects is more about reducing both page faults and CPU cache misses by reducing size, for situations where many applications are running on cheap hardware. And as you said, -static is rather useful for one application eating all the CPU.
  • Re:Misses the point (Score:3, Interesting)

    by Natalie's Hot Grits ( 241348 ) on Sunday August 03, 2003 @02:42AM (#6598668) Homepage
    Seriously though, it's not. MPEG4 encoding is SERIOUSLY CPU-dependent, not bandwidth-dependent. Just look at the benchmarks of the Opteron vs. the Pentium 4 on a 333MHz memory bus and you will see what I mean (in MPEG4 encoding). I'm not saying the Opteron is inferior for MPEG4; the point is that MPEG4 processing (at least on Windows) is highly SSE2-optimized for the P4.

    (hint: P4 beats the opteron, even though the opteron has a higher real world memory bandwidth and less memory latency and larger cache)

    Yeah, memory bandwidth is the bottleneck in just about everything everyday users are doing. But for specific tasks such as video codecs, CPU speed is a huge bottleneck -- more so than memory bandwidth.
  • by 0x0d0a ( 568518 ) on Sunday August 03, 2003 @02:45AM (#6598673) Journal
    They need a hardcore gentoo optimizer in there for the gentoo box, someone that knows what they're doing....Hence, I say the person doing the gentoo install is NOT informed on optimizing gentoo and thus renders this test invalid.

    [sigh] Nobody ever listens to me [Dark City].

    Okay, let's take a look at how informed you are. First of all, -march=athlon-xp implies -m3dnow, -msse, and -mmmx. -O2 or -O3 is the default already on most systems. The only differences -O3 produces are -frename-registers (which does essentially jack on the x86 line) and inlining (which tends to produce very minimal or negative benefits, given that cache misses, which inlining aggravates, are far more of a timesink for most programs than setting up and returning from function calls). -pipe produces no runtime benefit, though I leave it in my own flags. -fforce-addr, -frerun-cse-after-loop, and -frerun-loop-opt are implied by -O2 or -O3 already. -falign-functions=4 is considered a slowdown for the Athlon line by the gcc team relative to the default (64 on current gcc). I haven't tested -maccumulate-outgoing-args, and I'm not familiar with what it does internally -- the only benchmark I could google for indicated a slowdown caused by it.

    -ffast-math is a decision of dubious value. Very little code uses floating-point math, so -ffast-math rarely has an effect. The code that does, and actually cares about this degree of performance, generally has native implementations that are faster than -ffast-math, since they're special-cased. It can also cause software breakage. (We already saw that these sorts of optimizations are of dubious value with Motorola's LibMotoSh for the PPC.) -fprefetch-loop-arrays is implied by -O2.

    The overwhelming majority of code does *not* have an #ifdef __SSE__ with alternate code.
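    For reference, the pattern being described -- which, as the comment says, very little code actually bothers with -- looks something like this hypothetical sketch:

    ```c
    #include <stdio.h>

    #ifdef __SSE__
    #include <xmmintrin.h>
    #endif

    /* Sum four floats: an SSE path guarded at compile time, with a portable
     * fallback. __SSE__ is defined only when the compiler is invoked with
     * -msse or a -march that implies it. */
    float sum4(const float *v) {
    #ifdef __SSE__
        __m128 x = _mm_loadu_ps(v);                    /* load all four lanes */
        x = _mm_add_ps(x, _mm_movehl_ps(x, x));        /* [0]+[2] and [1]+[3] */
        x = _mm_add_ss(x, _mm_shuffle_ps(x, x, 1));    /* add remaining lane  */
        return _mm_cvtss_f32(x);
    #else
        return v[0] + v[1] + v[2] + v[3];
    #endif
    }

    int main(void) {
        float v[4] = {1.0f, 2.0f, 3.0f, 4.0f};
        printf("%g\n", sum4(v));  /* prints 10 */
        return 0;
    }
    ```

    Unless a package contains guards like this (or runtime CPU dispatch), passing -msse to the compiler changes almost nothing about the code it emits.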

    Basically, the only seriously useful flag you used is the one all distros use -- -O3 (and I generally feel that -O2 is a better choice on modern processors, where cache is so critical). -march=athlon-xp can help, but it's unlikely to make a measurable difference on any but a very few pieces of software. Most distro vendors already benchmark and ship versions of software that benefit from a different arch -- look at RH's different RPMs. -fomit-frame-pointer is arguable -- but you're probably going to see *well* under a 10% performance difference, and you lose the ability to track down crashing bugs or send in useful bug reports. -ffast-math can cause breakage, and provides little benefit for almost any package (one exception is povray -- it's a floating-point-heavy package that tries to be portable). I custom-build povray, but then RH doesn't package povray anyway, so that's not too much of a concern.
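    Taken together, the conservative setup being argued for here would amount to a make.conf fragment along these lines (a hypothetical example, not an official recommendation):

    ```sh
    # /etc/make.conf -- modest, low-risk flags in the spirit of the comment above
    CFLAGS="-O2 -march=athlon-xp -pipe"
    CXXFLAGS="${CFLAGS}"
    ```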

    Anyway, my point is not to criticize you. I spent my days with a three line CFLAGS string as well, sure that I was producing nicer and better code than anyone else. Then came my benchmarking days and a compiler class and some days picking apart gcc-generated assembly...and I realized that I really wasn't gaining anything.

    If you like Gentoo, do it for the legitimate features that it provides (like emerge), and not for some fanciful performance improvements.
