Benchmarks For Ubuntu vs. OpenSolaris vs. FreeBSD

Posted by timothy
from the three-way dept.
Ashmash writes "After their Mac OS X versus Ubuntu benchmarks earlier this month, Phoronix.com has now carried out a performance comparison between Ubuntu 8.10, OpenSolaris 2008.11 and FreeBSD 7.1. They used a dual quad-core workstation with the Phoronix Test Suite to run primarily Java, disk, and computational benchmarks. The 64-bit build of Ubuntu 8.10 was the fastest overall, but FreeBSD and OpenSolaris were first in other areas."
  • by aliquis (678370) <dospam@gmail.com> on Tuesday November 25, 2008 @11:06AM (#25886965) Homepage

    Various versions of GCC. While one could argue that the compiler is part of the OS, it is replaceable, so I would have preferred that they had used the same version of GCC rather than a different one for each OS.

    It would have been very interesting to see the Solaris results using Sun Studio's cc as well (I think it's also available for Linux nowadays?).

    • by Just Some Guy (3352) <kirk+slashdot@strauser.com> on Tuesday November 25, 2008 @11:58AM (#25887733) Homepage Journal

      While one could argue that the compiler is part of the OS, it is replaceable, so I would have preferred that they had used the same version of GCC rather than a different one for each OS.

      I'm a huge FreeBSD fan. However, I don't have a problem with them testing the compiler as it was shipped with the OS because it's the one officially supported. Since that's the compiler that 99.9% of FreeBSD users will have, that seems like a reasonably fair baseline comparison.

      Similarly, they explicitly state that "[a]side from changes made by the Phoronix Test Suite (and adding the GNOME packages to FreeBSD), all operating systems were left in their default configuration." I'm sure all of them could be tuned for higher performance on this benchmark, but I think out-of-the-box numbers are valuable.

      • by aliquis (678370)

        At least in earlier releases, I think you could switch in rc.conf or similar if you wanted to use GCC 3.x or 2.95.x, for instance; both may have been available, I don't remember. I don't think that counts as much tuning, not much more than benchmarking different versions of KDE, or various versions of Windows on different laptops.

        If it's about the compiler version (or options), I know I can switch compilers and get a better result; if it's some other tweaking, or a totally different kernel or system libs or such, it ma…
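
        (For what it's worth, on FreeBSD the usual knob for picking a different compiler for world and ports builds is /etc/make.conf rather than rc.conf; a sketch, with the compiler port name being illustrative:)

        ```shell
        # /etc/make.conf -- select an alternate compiler for buildworld/ports.
        # gcc46/g++46 are illustrative names for a compiler installed from
        # ports (e.g. lang/gcc); install it before pointing make at it.
        CC=gcc46
        CXX=g++46
        CPP=cpp46
        ```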

      • by sgt scrub (869860)

        but I think out-of-the-box numbers are valuable.

        I couldn't agree more. I like Ubuntu; it is a good distro. But "out of the box" is exactly what a distro is, so it should be judged on its out-of-the-box abilities. People looking at this as a comparison of operating systems are way off. If they wanted to compare the three at their optimum, they would have used Linux From Scratch (and the equivalent for the others), tweaked the living crap out of them, and then run them over the cliff. So IMHO that art…

        • by epine (68316)

          I couldn't agree less.

          As a practical matter "out of box" is synonymous with lowest common denominator.

          The dominant criterion in tuning OOB is that the OS be tolerable for almost any conceivable workload within the capability of the machine.

          If you are also able to tweak performance for commonplace tasks (such as the LAMP stack) *without* compromising the former goal, you can do that too, to some degree. Bear in mind not to turn on any performance features that are poorly documented or have surprising edge case…

    • by ByOhTek (1181381)
      True, though the only one I think that *really* hurts is Sun, since it doesn't have all of the GCC 4 optimizations, and it still handled itself quite well. My issue is with using FreeBSD 7.1 Beta 2; they should have stuck with 7.0 (the release edition). The betas and RCs usually tend to perform worse than the final releases (at least in my experience of FreeBSD's history, that has usually been the case).
      • by aliquis (678370)

        That's why Sun cc would have been fun, in case it had owned GCC =P

        • by ByOhTek (1181381)
          But if you want to go down that route, a lot of changes could be made:
          - FreeBSD gets much better performance on Intel than on AMD.
          - Were packages or ports used for software installation?
          I can think of a few more, some affecting all systems, some just one or two. Regardless, these are changes to the base system, and this is a base-system test; those are questions for another test altogether. The proper question here is whether the base system used is the proper one for a valid test.
      • by LizardKing (5245)
        I've never used Sun compilers on x86, but on SPARC they produce binaries that are much smaller and faster than those produced by GCC. The same is true on x86 when using the Intel compiler rather than GCC, so it may be down to greater tuning for a single architecture.
      • by Fweeky (41046)

        My issue is with using FreeBSD 7.1 Beta 2. They should have stuck with 7.0 (the release edition). Usually the Betas and RCs tend to be worse-performing than the final releases

        Beta and RC should be pretty much as performant as the final release; there's no magical change in CFLAGS or debugging options for a release.

        On CURRENT, yes, there's WITNESS and INVARIANTS which drastically reduce performance because it's constantly checking every locking operation to make sure things are happening in the right order, verifying kernel data structures and so forth. This doesn't apply to STABLE, PRERELEASE, BETA or RC releases.
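
        (For reference, these are ordinary kernel config options; a -CURRENT kernel config typically carries lines like the following, which are stripped out for BETA/RC/release builds. Illustrative excerpt in the style of a FreeBSD kernel config:)

        ```shell
        # Debugging options in the style of GENERIC on FreeBSD -CURRENT;
        # these are what make -CURRENT slow, and they are removed before
        # BETA/RC/release builds:
        options  INVARIANTS         # consistency checks on kernel data structures
        options  INVARIANT_SUPPORT  # support code required by INVARIANTS
        options  WITNESS            # lock order verification on every lock operation
        options  WITNESS_SKIPSPIN   # skip witness checks on spin mutexes
        ```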

    • Even with the same version, I would be concerned that GCC's optimization is better for Linux than for other, less-used (as far as GCC is concerned) OSes.
  • by Deagol (323173) on Tuesday November 25, 2008 @11:20AM (#25887167) Homepage

    I was a bit disappointed by the results, being a FreeBSD fan myself. However, in my quick scan of the article, I didn't see any mention of how they configured the OS. If they truly used the stock 7.1-BETA2 install, that would mean that debugging mode is enabled in the kernel (and maybe the userland, I'm not 100% sure here). Unless I've misunderstood FreeBSD's release methods over the years, they don't disable the debugging until either the RC builds or maybe even the final release tag.

    Still, FreeBSD came out on top on 3 of the tests -- not bad for a beta release. I can't wait for 7.1, as using 7.0 on my desktop since its release has been great. I just hope the fully-virtualized IP stack within jails made it into 7.1, as well as a slightly more stable ZFS.

    • by TheRaven64 (641858) on Tuesday November 25, 2008 @11:56AM (#25887709) Journal

      debugging mode is enabled in the kernel (and maybe the userland, I'm not 100% sure here).

      Betas generally have a few malloc flags set, which make malloc() a fair bit slower, resulting in everything in the userland being slow. The point of a beta is to catch bugs before the shipping release, so everything is run in debug mode.
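
      (On FreeBSD those malloc flags can be toggled system-wide via the /etc/malloc.conf symlink or per-process via the MALLOC_OPTIONS environment variable; lowercase letters disable a flag, uppercase enable it. A sketch, with the benchmark name being illustrative:)

      ```shell
      # Turn off the 'A' (abort on error) and 'J' (junk-fill allocations)
      # debug flags that beta builds enable by default:
      ln -fs 'aj' /etc/malloc.conf        # system-wide: symlink target is the flag string
      MALLOC_OPTIONS=aj ./my_benchmark    # or per process (binary name illustrative)
      ```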

      I'm also looking forward to 7.1, although I'm sad that the per-vchan volume control patches appear to only be in the 8.x tree. These implement the OSS 4.x ioctls and allow applications to just open /dev/dsp and write sound there, with simple ioctls to set the volume. 7.0 supports the ioctls, but they set the master volume, not the virtual channel's volume.

    • by cstdenis (1118589)

      I just hope the fully-virtualized IP stack within jails made it into 7.1, as well as a slightly more stable ZFS.

      No virtual IP stack yet. ZFS is slightly more stable, but still has a ways to go.

  • Ubuntu performance (Score:4, Interesting)

    by MikeRT (947531) on Tuesday November 25, 2008 @11:28AM (#25887303) Homepage

    The reason that I never really seriously used Linux on my PC laptop was that Ubuntu was sluggish, even with the newest ATI drivers, compared to Windows. Maybe people have good experience with nVidia drivers there, but Windows is a lot more usable as a desktop for me on the performance side of things. Granted, my main computer is a MacBook Pro running Leopard, but I can't imagine putting Linux back on my old PC laptop for when I need to use it.

    • Re: (Score:3, Insightful)

      by Hatta (162192)

      These days a full install of Ubuntu, with all its bells and whistles, is going to be slower than a bare install of XP. The nice thing is that you don't need all that cruft: it's pretty easy to install a command-line system and add things as needed. Use a lightweight WM or desktop like Fluxbox or Xfce and you're set.
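
      (A sketch of that approach: start from the minimal command-line system the Ubuntu alternate/server installer gives you, then pull in only a lightweight desktop and the applications you actually want.)

      ```shell
      # After a minimal (command-line) Ubuntu install, add just what you need:
      sudo apt-get update
      sudo apt-get install xorg xfce4    # lightweight desktop; or: fluxbox
      sudo apt-get install firefox       # add applications piecemeal
      ```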

    • The reason that I never really seriously used Linux on my PC laptop was that Ubuntu was sluggish ...

      Why don't we try Xandros Linux? (It's the one installed in some Eee PCs.)

      * ducks *

    • That's what Xubuntu is for. Xubuntu uses Xfce, which makes it work well on older computers. I have a system dual-booting Xubuntu and Windows XP, and the performance improvement is very noticeable (although it largely has to do with the crappy AV running on XP).
    • by jgtg32a (1173373)
      Oh, I see: you mentioned Mac, so even though most of your post says you prefer Windows to Linux, you weren't modded troll.
      • It's pretty sad (Score:2, Offtopic)

        by MikeRT (947531)

        That someone would actually be modded down because they happen to think that Linux isn't all that as a desktop. That sort of thing is why I think that any moderation system that allows users to mod down rather than only up is broken.

  • It would be nice to do those comparisons on the same hardware between, e.g., Ubuntu, Gentoo and openSUSE, all 64-bit, since it is not clear whether they are measuring Linux itself or the optimizations a particular distribution makes. Or, where it applies (e.g. the Java tests), include the numbers for Windows and Mac OS.
  • by tlhIngan (30335) <slashdot AT worf DOT net> on Tuesday November 25, 2008 @11:39AM (#25887449)

    Interesting results, and great if you're planning a server, but what about desktop use?

    How well does each OS do when doing something like playing back audio/video, and handling background processing loads? What about performance and system response as the load climbs up? (load averages of 5/10/20 ?).

    Only because I've seen Linux systems start to crumble around 5 (on a uniprocessor machine) and easily become unusable, but I have heard reports of BSD machines still being able to play MP3s without skipping/stuttering even around 20 or so...

    (And yes, I'll allow tweaking system priorities; it only gets you so far, and it impacts the other background processing tasks, whose run times we are also interested in. So renicing the media player to -20 works, but not if it makes all the other tasks take 10x as long to finish...)
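
    (The priority tweaking mentioned above looks like this; mpg123 is just an illustrative player:)

    ```shell
    # Raise the media player's priority (negative nice values need root):
    sudo renice -20 -p "$(pgrep -x mpg123)"
    # ...and start background batch work at low priority instead:
    nice -n 19 make -j8 buildworld
    ```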

    • Let's bring BeOS in on that test :)

      You could IM and browse the web on that OS without making your MP3s skip, all on a Pentium 133 w/ 64MB ram. None of its contemporaries were even close in terms of UI responsiveness under load and smooth media playback, and no bloated modern OS is even close. Damn impressive.

    • by Hatta (162192)

      How do you measure interactive performance? When you're just crunching numbers, or doing sustained reads from a disk it's pretty easy to get meaningful numbers. But when you have 5 or 6 different things going on at the same time, it's hard to replicate those conditions exactly on different systems. And if you do, what sort of metric are you going to use that will look good on a graph and actually tell you something about how well the system is performing?

    • by jyro1980 (978241)

      Interesting results, and great if you're planning a server, but what about desktop use?

      Totally agree!

  • Sorry, I got a bit carried away there. Eh, what was the article about again?

  • This couldn't come at a better time. I was recently wondering if FreeBSD was a good platform for deploying our first Java EE application (since we use fbsd for everything else) or that Linux or Solaris might be better. It's good to see that FreeBSD isn't all that bad, but I know now that switching to (Open)Solaris might be worth it. But as far as I see, OpenSolaris is mainly geared towards desktop use, isn't it?
    • OpenSolaris is no more "mainly geared" towards desktops than your typical Linux distro.

      From what you just said though... stick with what you've got, because I'm afraid you'd be switching OS's for reasons you really don't understand. Nothing you've read here on Slashdot today is worth basing an OS/platform decision on, there are many, many other things to consider.

  • by Anonymous Coward on Tuesday November 25, 2008 @12:13PM (#25887897)

    I have not played with OpenSolaris, but with regular Solaris you need to set parameters in the /etc/system file to get good performance; by default, Solaris is tuned very conservatively. In many tests I have run, Solaris may not be the fastest in a single test, but under a heavy load with many applications running, my experience has been that it can handle a much bigger load than Linux on the same hardware. I use both, but for heavily loaded backend servers I would choose Solaris.
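
    (For anyone who hasn't seen it, /etc/system tuning looks like the following; the values are illustrative, not recommendations. Comments in this file start with '*'.)

    ```shell
    * /etc/system -- classic Solaris kernel tunables, read at boot
    * (values illustrative only; a reboot is required for changes to apply)
    set maxusers=1024
    set rlim_fd_max=65536
    set shmsys:shminfo_shmmax=4294967295
    ```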

    • I have not played with OpenSolaris, but Ubuntu was running Linux on ext2; if you want performance, you need to pick an alternative file system. (I find ReiserFS to be good, but support for it seems to be dwindling, and answers such as "use a supported FS" (despite ReiserFS being supported) are not uncommon.) Anyway, my point is that these tests are pointless!

    • by Xtifr (1323) on Tuesday November 25, 2008 @02:16PM (#25889771) Homepage

      All three come with tunable performance parameters. All three can have their performance boosted even further by recompiling everything optimized for the particular hardware being used, possibly using specialized compilers (e.g. from Sun or Intel). But that's not the point, IMO. This isn't (or shouldn't be) a pissing match--this should be an opportunity to improve all three systems by seeing where their strengths and weaknesses are, and working to bolster their weaknesses and improve their strengths.

      In my experience, these sorts of tests on free/libre/open-source systems quickly become out-of-date because the developers take them as a challenge, and that's a good thing for everyone! :)

      If your tests were more than a couple of years ago, they're probably so out of date as to be utterly meaningless, but that's a separate issue. Personally, I'm a big fan of all three systems and want to see all three thrive and grow and improve. This kind of testing can only help with that, once you get past all the dick-waving by narrow-minded advocates.

    • by kindbud (90044)

      I have not played with Open Solaris but with normal Solaris you need to set parameters in the /etc/system file to get good performance.

      Solaris 10 deprecates almost every setting you are used to putting in /etc/system; most of them no longer have any effect. Instead, you create resource controls in the global zone using the zone management tools (or by editing the files by hand). The new method allows you to adjust any tunable without having to reboot.
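
      (A sketch of the newer style; the project name and values are illustrative:)

      ```shell
      # Solaris 10: adjust a tunable on the fly with a resource control
      # (project name and limit are illustrative)...
      prctl -n project.max-shm-memory -v 4gb -r -i project user.root
      # ...and make it persistent in /etc/project:
      projmod -s -K "project.max-shm-memory=(priv,4gb,deny)" user.root
      ```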

  • Does anyone know if the Phoronix Test Suite will work under OpenBSD and NetBSD too? Says on the website: "Runs On Linux, OpenSolaris, Mac OS X, & FreeBSD Operating Systems"

    I'm curious as to how the other BSDs would perform.

  • I am curious how it actually performs, not just what most Slashdotters say.

    It may actually suck and I am curious as to how much.

    I am contemplating leaving Vista, on which I do PHP, Apache, and Java development. I wonder if there is an advantage at all.

  • Mostly pointless (Score:3, Insightful)

    by klapaucjusz (1167407) on Tuesday November 25, 2008 @12:35PM (#25888265) Homepage

    Except for Bonnie++, all of their benchmarks are compute-bound. In other words, they're benchmarking the bundled compiler, not the distribution.

    The one exception is Bonnie++, on page 6, which measures raw filesystem performance... and is something that is known to greatly depend on how old and how full a given filesystem is.

    • Not to mention which filesystem you use. With Solaris, you have a choice between UFS and ZFS, with very different performance characteristics. With FreeBSD, you have ZFS or UFS with a load of options (GEOM-level journalling, soft updates, and so on). With Linux you have four or five choices of filesystem which make sense for desktop use. Each of these has advantages and disadvantages - some have more features, some have better performance at random writes, and so on. With Linux, as I recall, you get a
    • by Khashishi (775369)
      That's the problem with all operating system benchmarks: you are essentially benchmarking the computer, not the OS. What really matters for an OS is how usable and efficient it is for getting things done. This means, of course, that the kernel is mostly irrelevant; the shell is what influences how efficiently a user can work.
    • by stor (146442)

      > The one exception is Bonnie++, on page 6, which measures raw filesystem performance... and is something that is known to greatly depend on how old and how full a given filesystem is.

      The type of I/O scheduler you use, combined with the type of RAID controller, can make a significant difference to bonnie++ results and I/O performance too. For certain RAID controllers and workloads, using the noop or deadline scheduler can increase performance significantly over CFQ or AS.

      -Stor
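
      (On Linux the I/O scheduler is per-device and switchable at runtime; sda is illustrative:)

      ```shell
      # Show the available schedulers (the active one is bracketed),
      # e.g.: noop anticipatory deadline [cfq]
      cat /sys/block/sda/queue/scheduler
      # Switch this device to the deadline scheduler:
      echo deadline | sudo tee /sys/block/sda/queue/scheduler
      ```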

  • If you look at the OS kernel models and form baseline predictions from the kernel architectures alone, those predictions closely mimic the results of these tests.

    There is a reason kernel architecture is a highly engineered science, and why the inherent strengths and weaknesses of even old models still manifest in today's latest incarnations of these kernel architectures.

    If you look at Linux, with its microkernel heritage, it is going to offer better low-level kernel multitasking and kernel messaging. Al…

    • Re: (Score:3, Insightful)

      by styrotech (136124)

      If you look at Linux, with its microkernel heritage...
      ....Also if you look at a classic monolithic kernel design (even with Apple duct tape) the OS X kernel....
      ...even Apple has done well with putting bandaids on the monolithic nature of BSD/MACH...

      Linux is a microkernel? Mach is monolithic? Since when?

      ...In fact, every kernel architecture compared in these tests and OS X where deemed to be too primative for even the MS NT team back in 1990...
      ...and architecture that MS chose to use and abandon the 'in u

      • by TheNetAvenger (624455) on Tuesday November 25, 2008 @11:57PM (#25895975)

        Linux is a microkernel? Mach is monolithic? Since when?

        Should read, "Linux with its non-microkernel heritage"

        The point was that Linux has no traditional microkernel alignment in contrast with OS X that keeps the traditions of a microkernel that when paired with a monolithic BSD interface kills a lot of the concept of what the MACH kernel was intended to do.

        Originally MACH was a microkernel concept, but in its current incarnations, like OpenBSD, OS X, etc, it is no longer a microkernel by any set of definitions other than being another abstraction layer for the upper level kernel API sets.

        When MACH is paired with BSD, a monolithic kernel API, you lose a lot of the direct-to-hardware, single-request concept of a microkernel, especially on today's architectures.

        Linux was true to itself in that it never attempted to abstract hardware and instead set its own rules for what was expected of the hardware, and when running on hardware that cannot meet the needs, the functionality that Linux requires must be simulated on that hardware.

        So you have Linux, which will outperform OS X because of its all-in-one nature that doesn't have to cross-call API layers for kernel processes. On the other hand, you have BSD/MACH designs like OS X that can do well at hard-crunching simple tasks that funnel all the way down to the MACH kernel, but when it comes to handling multiple requests, process communication gets sticky and multitasking can kill the once-elegant low-level performance.

        NT has neither of these pitfalls. It has a very fast process creation system, a low level HAL, and multi-layered kernel API sets. Not only do you get the near speed of a microkernel, but you also get the robust API sets that STILL reside in true kernel layers.

        On Windows this is taken to such an extreme that even Win32, which is an OS subsystem running on NT, has its own kernel32, that is technically a 'kernel' level API, yet sits all the way up in an agnostic subsystem.

        There is a reason the kernel designer of MACH let it go, moved on to Microsoft, and put his knowledge and work behind NT: he believes in the architecture, even over his own creation.

        As for NT being a copy or rip-off of VMS, there is some truth in that the VMS team didn't forget what they had learned when they went to Microsoft; but also remember, they had wanted to replace VMS even while at DEC, and many of their concepts were blocked by corporate politics, preventing any massive innovation to the platform that they seriously wanted to explore. This is what moved so many of them to MS, so they could build the next-generation OS.

        NT wasn't just an overnight bastard creation; it was the best and brightest from MS and VMS and even the UNIX developers of the time...

        Cutler is brilliant, but in today's kernel world, even he admits he is getting dated. (Even Windows 7 moved in a few new people to optimize in different directions reworking old standard Cutler level code in the kernel.)

        So if we can say that what Cutler's team did in the 1990s was more revolutionary than evolutionary, as NT really doesn't conform to VMS concepts, especially theoretical kernel concepts, then why can't we ask the OSS world today to revisit kernel architecture on a larger scale?

        Instead I see articles flying around about BSD vs. Linux, and Linus writing about why monolithic kernel designs will always be better, while other experts argue that moving back to a more inclusive microkernel with modern hardware in mind would be better.

        Where are the movers in the OSS world who are outside this box, and why isn't it actively working on even a basic hybrid kernel technology of its own that, with what kernel engineers know today, would leapfrog kernel design?

        Instead, the big work you see on actual new kernel concepts is yet again coming from places like Microsoft Research, where they are playing with Singularity and other kernel concepts that range from managed-code kernel designs to even Frankenstei…

        • by styrotech (136124)

          Just another correction:
          OpenBSD has nothing to do with Mach or microkernels in general (were you thinking of DragonFly? That has a hybrid kernel). The OpenBSD kernel is about as monolithic as they come these days (it doesn't even really have modules), and descended from NetBSD which descended from the original BSD source code after the famous lawsuit was settled.

          You will never see revolutionary kernel designs come from established open source projects; they have enough on their plate as it is. Open source i…

          • Just another correction:

            Everything is not just Wiki definitions; even OpenBSD is hard to define in the general terms we are trying to keep this discussion in. If you want to get technical, we can write out some information and you can use it to append the Wiki pages properly.

            Monolithic and microkernel are very vague, general terms used to describe two aspects of current kernel design, and even the use of 'hybrid kernel', if you read Wiki, is very bad, as it will even call OS X a hybrid k…

  • That is not exactly what I'd call "average *nix user hardware".
    I bet that fewer than 5% of *nix users have that kind of machine on their desktops.
    Why not run the test on either a "normal" desktop, or even better, on a laptop?
    And, by the way, OpenSolaris 2008.11 is still a release candidate!
  • I installed OpenSolaris on my laptop about one week ago.

    Laptop: 1.7GHz Celeron, 512MB RAM.
    Previous OS: Xubuntu.
    IMO: OpenSolaris loses by a mile.

    OpenSolaris installation was easy but very slow; it hangs at 99% for hours, which is a well-known bug. OpenSolaris was noticeably slower than Xubuntu; bootup and shutdown are especially slow. The worst problem is applications: I cannot find a CHM reader or an MP3 player that works, while on Xubuntu (or any Linux) this is a cinch. The applications that come with OpenSolaris…

  • I don't fucking get these benchmarks lately, I mean how credible is this crock of shit? You're benchmarking one STABLE system (Ubuntu) against an almost stable system (OpenSolaris) and against a Beta release (FreeBSD)... WOW that sure seems like a bad benchmark to me! In a typical benchmark, FreeBSD would've simply owned Ubuntu, and OpenSolaris for that matter! So, don't pay attention to these bullshit benchmarks, because they're worth shit, in my opinion. Puh-lease, bitch.
