Benchmarks For Ubuntu vs. OpenSolaris vs. FreeBSD
Ashmash writes "After their Mac OS X versus Ubuntu benchmarks earlier this month, Phoronix.com has now carried out a performance comparison between Ubuntu 8.10, OpenSolaris 2008.11 and FreeBSD 7.1. They used a dual quad-core workstation with the Phoronix Test Suite to run primarily Java, disk, and computational benchmarks. The 64-bit build of Ubuntu 8.10 was the fastest overall, but FreeBSD and OpenSolaris were first in other areas."
What about some combined loads? (Score:5, Interesting)
Interesting results, and great if you're planning a server, but what about desktop use?
How well does each OS do at something like playing back audio/video while handling background processing loads? What about performance and system response as the load climbs (load averages of 5/10/20)?
I ask because I've seen Linux systems start to crumble around a load of 5 (on a uniprocessor machine) and easily become unusable, but I've heard reports of BSD machines still being able to play MP3s without skipping/stuttering even around 20 or so...
(And yes, I'll allow tweaking system priorities; it only gets you so far, and it impacts the other background processing tasks, whose completion times we're also interested in. So renicing the media player to -20 works, but not if it makes all the other tasks take 10x as long to finish...)
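For anyone who wants to try that kind of test, here's a minimal sketch, assuming a Unix-like system with Python available; the media player PID and the -20 nice value are placeholders for illustration, not a recommendation:

    import os

    # Current 1-, 5- and 15-minute load averages (Unix only).
    one, five, fifteen = os.getloadavg()
    print("load averages: %.2f %.2f %.2f" % (one, five, fifteen))

    # Renice a single process (e.g. the media player) so it keeps getting CPU.
    # Negative nice values need root; -20 is the most favourable level.
    media_player_pid = 12345  # placeholder PID, look it up with ps first
    try:
        os.setpriority(os.PRIO_PROCESS, media_player_pid, -20)
    except PermissionError:
        print("need root to lower the nice value")
    except ProcessLookupError:
        print("no such process")

Run it while the background jobs are going and watch whether the 1-minute load average tracks the point where playback starts to skip.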
Re:Kernel Architecture would offer same prediction (Score:4, Interesting)
Linux is a microkernel? Mach is monolithic? Since when?
Should read, "Linux with its non-microkernel heritage"
The point was that Linux has no traditional microkernel alignment, in contrast with OS X, which keeps the traditions of a microkernel; pairing that with a monolithic BSD interface kills a lot of what the Mach kernel was intended to do.
Originally Mach was a microkernel concept, but in its current incarnations, such as OS X's kernel, it is no longer a microkernel by any definition, other than being another abstraction layer for the upper-level kernel API sets.
When Mach is paired with BSD, a monolithic kernel API, you lose a lot of the direct-to-hardware, single-request concept of a microkernel, especially on today's architectures.
Linux was true to itself in that it never attempted to abstract the hardware; instead it set its own rules for what was expected of the hardware, and when the hardware cannot meet those needs, the functionality Linux requires must be simulated on it.
So you have Linux, which will outperform OS X because of its all-in-one nature that doesn't have to cross-call API layers for kernel processes. On the other hand you have a BSD/Mach design like OS X that can do well on simple, hard-crunching tasks that funnel all the way down to the Mach kernel, but when it gets to handling multiple requests, inter-process communication gets sticky and multitasking can kill the elegant low-level performance it once offered.
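To make the message-passing-overhead point concrete, here's a toy sketch in plain Python; it isn't OS X or Linux code and all the names are made up. It compares a direct in-process call with the same trivial request routed through a separate "server" process, the extra hop a microkernel-style design pays on every request:

    import time
    from multiprocessing import Process, Pipe

    def do_work(x):
        # Stand-in for a trivial kernel service.
        return x + 1

    def server(conn):
        # "Microkernel service": handle requests arriving as messages.
        while True:
            msg = conn.recv()
            if msg is None:
                break
            conn.send(do_work(msg))

    if __name__ == "__main__":
        N = 10_000

        # Monolithic-style path: direct function call, no message hop.
        t0 = time.perf_counter()
        for i in range(N):
            do_work(i)
        direct = time.perf_counter() - t0

        # Message-passing path: every request crosses a process boundary.
        parent, child = Pipe()
        p = Process(target=server, args=(child,))
        p.start()
        t0 = time.perf_counter()
        for i in range(N):
            parent.send(i)
            parent.recv()
        ipc = time.perf_counter() - t0
        parent.send(None)
        p.join()

        print("direct: %.4fs  via IPC: %.4fs" % (direct, ipc))

On most systems the message-passing loop will come out far slower per request, which is the kind of cost being described above, though a real microkernel's IPC is of course much cheaper than Python pipes.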
NT has neither of these pitfalls. It has a very fast process creation system, a low level HAL, and multi-layered kernel API sets. Not only do you get the near speed of a microkernel, but you also get the robust API sets that STILL reside in true kernel layers.
On Windows this is taken to such an extreme that even Win32, which is an OS subsystem running on NT, has its own kernel32, which is technically a 'kernel'-level API yet sits all the way up in an agnostic subsystem.
There is a reason the kernel designer of Mach let it go, moved on to Microsoft, and put their knowledge and work behind NT: they believe in the architecture, even over their own creation.
As for NT being a copy or rip-off of VMS, there is some truth to that: the VMS team didn't forget what they had learned when they went to Microsoft. But also remember that they wanted to replace VMS even while at DEC, and many of the concepts they seriously wanted to explore were thrown out by corporate politics, preventing any massive innovation to the platform. This is what moved so many of them to go to MS so they could build the next-generation OS.
NT wasn't just an overnight bastard creation; it was the work of the best and brightest from MS, VMS, and even the UNIX developers of the time...
Cutler is brilliant, but in today's kernel world even he admits he is getting dated. (Even Windows 7 brought in a few new people to optimize in different directions, reworking old, standard Cutler-era code in the kernel.)
So if we can say that what Cutler's team did in the 1990s was more revolutionary than evolutionary, since NT really doesn't conform to VMS concepts, especially theoretical kernel concepts, then why can't we ask the OSS world today to revisit kernel architecture on a larger scale?
Instead I see articles flying around about BSD vs. Linux, Linus writing about why monolithic kernel designs will always be better, and other experts arguing that moving back to a more inclusive microkernel, with modern hardware in mind, would be better.
Where are the movers in the OSS world who are outside this box, and why isn't it actively working on even a basic hybrid kernel technology of its own that, with what kernel engineers know today, would leapfrog kernel design?
Instead, the big work you see on actual new kernel concepts is yet again coming from places like Microsoft Research, where they are playing with Singularity and other kernel concepts that range from managed-code kernel designs to even Frankenstein...