Microsoft Reports OSS Unix Beats Windows XP
Mortimer.CA writes "In a weblog entry, Paul Murphy mentions a Microsoft report (40 page PDF) that in many instances FreeBSD 5.3 and Linux perform better than Windows XP SP2. The report is about MS' Singularity kernel (which does perform better than the OSS kernels by many of the metrics they use), and some future directions in OS design (as well as examination of the way things have been done in the past)." From the post: "What's noteworthy about it is that Microsoft compared Singularity to FreeBSD and Linux as well as Windows/XP - and almost every result shows Windows losing to the two Unix variants. For example, they show the number of CPU cycles needed to "create and start a process" as 1,032,000 for FreeBSD, 719,000 for Linux, and 5,376,000 for Windows/XP."
44 pages and the main question is still unanswered (Score:4, Insightful)
Here's an interesting snippet I found while perusing the PDF... thought I'd share. Singularity is ostensibly supposed to be about stability, but the 44-page paper has no data on this. Kinda like saying, "Our new bulletproof vest is 40% lighter than our leading competitors', and twice as flexible. How well does it stop bullets, you ask? Sorry... we do not yet have results for that benchmark."
Wake me when a paper comes out about Microsoft's new stability-oriented OS that actually addresses that particular aspect of the product.
Re:44 pages and the main question is still unanswe (Score:5, Insightful)
Re:44 pages and the main question is still unanswe (Score:3, Insightful)
My job is in QA. Your statement says that my job is impossible. Here are a few ways you can test stability:
1) See if the OS comes back online after a power cycle
2) Insert and remove device drivers
3) Send mangled data across the various data busses
4) Run programs that try to allocate all the memory
5) Run programs that try to hog all the CPU
6) Run a program that fills the hard disk/erases the hard disk/refills the hard disk
7) Do all of the above at the same time
The OS is
Re:44 pages and the main question is still unanswe (Score:4, Informative)
What you're testing is simple stuff, stuff that's easy to identify. There's a whole other class of reliability testing that's far more long term.
Re:44 pages and the main question is still unanswe (Score:2)
Re:44 pages and the main question is still unanswe (Score:2)
Re:44 pages and the main question is still unanswe (Score:5, Informative)
Re:44 pages and the main question is still unanswe (Score:2)
really... how exactly do you replace a running libc?
Re:44 pages and the main question is still unanswe (Score:4, Informative)
Typical distros that support pervasive no-reboot updating (like Debian) don't exactly replace a "running" libc (or any other library), they simply update the on-disk copy. So any programs run after that will get the new libc, but any programs that were started before the update will of course be using the old libc.
Usually this works very well; I suppose for a mega serious security update you might want to restart all your daemons too or something.
Re:44 pages and the main question is still unanswe (Score:3, Informative)
init 3
No reboot required
Re:44 pages and the main question is still unanswe (Score:4, Insightful)
Re:44 pages and the main question is still unanswe (Score:5, Informative)
For example, apache and sshd, and various FTPds, can be restarted without anyone possibly noticing, because they simply leave any running children open. You connected before a certain time, you got the old copy, you connected after it, you got the new one.
And, of course, many protocols work fine if you go away for five seconds, like SMB. The client program will just say 'oops, connection hiccup' and reconnect silently, and the end user never notices. Same with IMAP clients. They go 'Hey, the server closed my connection, I better open it again'.
Restarting services on a Linux box is 99% transparent to end users, even ones that are currently directly doing something with the server.
Rebooting is not transparent, even if all the connections are reacquired automatically, simply because work stopped for the two-minute reboot.
Re:44 pages and the main question is still unanswe (Score:5, Informative)
mmm... I can see that in a few specific cases, like if you have a lot of users who log on over ssh. Less so for webservers and remote filesystems: if you bounce the runlevels fast enough, the interruption will probably never be noticed.
Of course, the context where the Curse Of A Thousand Reboots really bites is for the home computer. I mean, I only have one user on this machine. Rarely I'll have two, never any more than that. So if I cycle runlevels, no-one is going to be put out bar me - and I'm the one doing it.
In general, I find that the people inconvenienced by a compulsory reboot are not networked users.
Of course, even if you have remote users, your downtime is going to be a lot less if you don't have to go through POST, BIOS initialisation, device scanning and all the rest of it. And of course you only have to do it once, because you're controlling the process, so you don't get fifteen reboots in a row because Windows brute-forces everything.
So, I think "all but name" is overstating the case. By rather a lot, actually.
Re:44 pages and the main question is still unanswe (Score:2)
But that impacts "stability". (Score:2)
If your system is not "up" long enough to trigger a bug, is it "stable"?
If "yes", then at what point does a system become "unstable"? If I have to reboot every year? Month? Week? Day? Hour? Minute?
Or is "stable" defined in terms of "unexpected crash" and discounts any "crash" that is avoided by rebooting the system?
This is one of the reasons I like Linux. Because the system doesn't require reboots except to replace the kernel, it is easier t
Re:44 pages and the main question is still unanswe (Score:2)
"Ev'ry OS has oopdates, but not ev'ry OS reeequires a reboot!"
Seriously though, very few Linux updates, for example, require a reboot. Most updates occur in user space and can be adequately applied by restarting the applicable services (if any). You just have to be aware of exactly what is being updated and what it affects.
In the (non-Windows) server world, rebooting is a big no-no.
-matthew
Re:44 pages and the main question is still unanswe (Score:5, Funny)
Amazing.
not caffeine... (Score:5, Funny)
How do you bow without a brain to weigh your head? (Score:2)
Re:44 pages and the main question is still unanswe (Score:5, Interesting)
But singularity isn't all new, it just implements old ideas: Occam and QNX!
But in my opinion, Singularity just might be the most interesting OS to emerge in recent years. It will be interesting to see how long it takes the free software world to come up with something similar.
Re:44 pages and the main question is still unanswe (Score:2)
Re:44 pages and the main question is still unanswe (Score:2)
And in reality it just makes things like disk I/O extremely slow, a la OS X. Personally, I am pretty disappointed by OS X as a server, both in stability and speed. If that is a good example of what a microkernel can do in the real world...
-matthew
dependance or dependability? (Score:2)
sheesh.
Re:dependance or dependability? (Score:4, Interesting)
Did you actually read it? (Score:5, Insightful)
You didn't really read it, did you? From TFA(bstract).
The point of the paper is NOT to demonstrate a fully working uber-dependable system, but to validate the practicality of the architecture that is under development, and the new technologies being included. That's why they have the section on performance, with the preface (right above your quote, btw):
That's the point of the paper. I understand, however, that you might have been in too much of a rush to get first post to understand the point of the paper...
Too Telling (Score:5, Funny)
Re:Too Telling (Score:4, Interesting)
Re:Too Telling (Score:3, Interesting)
A great product is a great product regardless of who makes it. I thought OSS was a big deal because it emphasised great engineering with openness. So, if you can't handle the heat, then stay out of the kitchen.
Re:Too Telling (Score:2, Insightful)
I'm happy though that MS may be taking Singularity seriously. Maybe we will see their OS in 2011-2015 based on it? Unless some sort of major shift in its purpose occurs, I would definitely jump ship from whatever I am on then to that, and I will definitely port/develop my software for the OS.
Re:Too Telling (Score:2)
Re:Too Telling (Score:5, Interesting)
I don't know what those 5M vs 1M cycles are doing. But what I do know is that, fundamentally, Windows was designed around high-performance threading/wait operations and high-performance asynchronous operations, whereas Unix and its derivatives rely on high-performance process creation and blocking I/O for server applications.
I.e. the Apache 1.3x series performs poorly on Windows because it was a straight copy of the Unix edition - using processes rather than threads.
Re:Too Telling (Score:4, Informative)
It just doesn't beat it by the absurd margin by which its process creation beats MS's process creation.
Think of it this way:
Linux threads: great
Linux processes: great
Windows threads: good
Windows processes: horrible
Re:Too Telling (Score:5, Informative)
"One test mentioned in Ulrich's email - running 100,000 concurrent threads on an IA-32 - generated some interesting discussion. Ingo Molnar explained that with the current stock 2.5 kernel such a test requires roughly 1GB RAM, and the act of starting and stopping all 100,000 threads in parallel takes only 2 seconds. In comparison, with the 2.5.31 kernel (prior to Ingo's recent threading work), such a test would have taken around 15 minutes."
http://kerneltrap.org/node/422 [kerneltrap.org]
As you can see, the increase in thread performance has been stellar. Keep in mind that prior to this effort, Linux's thread creation was nowhere near this fast. Ergo, one can easily deduce that Linux's thread-creation latencies now far exceed (i.e. take less time than) Windows'.
every Win32 process gets GUI crap at start-up (Score:3, Insightful)
UNIX creates a process with fork, which takes no arguments. UNIX runs a new executable with execve, which takes 3 arguments. So in just two system calls with 3 arguments, you launch an app.
Windows has a CreateProcess() [microsoft.com] function with 10 arguments, many of which are pointers to structs. I call your attention to the absurd "LPSTARTUPINFO lpStartupInfo " argument, which supplies info about the windows style and current desktop.
Re:Too Telling (Score:3, Insightful)
Win32 vs Posix processes (Score:5, Informative)
Some of the things that Win32 processes do that SFU and native processes don't:
5 Steps of Grieving (Score:5, Funny)
Re:5 Steps of Grieving (Score:2)
And all those times billg or steveb visited some country would be called what? Extortion?
Singularity is truly an intriguing system. (Score:5, Insightful)
In twenty or so years we may look back at Microsoft Research with the same admiration we have for Bell Labs.
Re:Singularity is truly an intriguing system. (Score:5, Funny)
I just shot soda out of my nose. You owe me a keyboard.
Re:Singularity is truly an intriguing system. (Score:5, Interesting)
Re:Singularity is truly an intriguing system. (Score:5, Interesting)
Okay, but how many of their innovations (Christ, Microsoft loves that word!) actually make it to the outside world?
I think your comparison to Bell Labs is good, however, in that much of what Bell Labs created required others to make into real products. AT&T/Ma Bell sat on every innovation until it nearly suffocated due to lack of capital investment.
Re:Singularity is truly an intriguing system. (Score:2)
Re:Singularity is truly an intriguing system. (Score:2)
Re:Singularity is truly an intriguing system. (Score:2)
Re:Singularity is truly an intriguing system. (Score:5, Insightful)
One other big problem from MSR - on the occasional project that's actually good, they somehow manage to kill it, or at least never tech transfer it into products. I cry when I think of some of the awesome dev technologies MSR was working on a few years ago that never made it out.
Where is your fantastic research? (Score:2)
Re:Where is your fantastic research? (Score:3, Informative)
Re:Singularity is truly an intriguing system. (Score:2)
No, but if you work for Microsoft Research it is likely that the results of your research may never see the light of day as products. Unless there is a way for Microsoft to make hoards of cash from your idea, it will be stillborn.
Re:Singularity is truly an intriguing system. (Score:2)
They will probably patent all of the stuff they can.
Hence, most of the research will see the light of day.
You missed my qualifier: "as products". In that context I stand by my statement.
Of course, if you believe that patenting is a bad idea and equivalent to withholding research, you are right.
I don't consider patenting a bad idea, per se, but I do believe defensive and method patents are stupid and should be removed from US patent code.
And no, I do
Re:Singularity is truly an intriguing system. (Score:5, Insightful)
Re:Singularity is truly an intriguing system. (Score:2)
Well, they caught up with our hatred for Ma Bell in NO TIME. That ought to say something.
Re:Singularity is truly an intriguing system. (Score:4, Insightful)
Re:Singularity is truly an intriguing system. (Score:2)
Re:Singularity is truly an intriguing system. (Score:2)
That explains a lot (Score:2)
Re:That explains a lot (Score:2)
I always found it amusing that Windows and Mac users used to think that multitasking meant just having Word and IE open at the same time.
Re:That explains a lot (Score:2)
Even the blog author makes the same comment: "So why is this interesting? Because their test methods reflect Windows internals, not Unix kernel design." Yet he still draws out
Re:That explains a lot (Score:5, Insightful)
As a result, you get tons of unstable Windows applications because to get any reasonable concurrency you have to throw out the years of hard work that OS designers put into having protected memory.
Threads vs. processes isn't "two different ways of doing the same thing". Barring a massive implementation boondoggle, you make that choice based on whether you want memory protection or not. These numbers highlight a massive boondoggle, which takes the correct choice away from the application author in many cases.
Process concurrency is hardly the panacea (Score:3, Interesting)
Throw out years of hard work? Give me a break! It almost seems that you are blaming the poor quality of modern threaded applications on Windows! That's rich!
Concurrency is difficult to use correctly no matter what technology you use. Inter-process shared/mapped memory is just as susceptible to race condi
Re:But how...? (Score:3, Insightful)
Having done a fair amount of GUI programming myself, I find a multiprocess solution is often correct (e.g. in something like Photoshop image filters, where you want shared access to one memory segment but do
Microsoft Research is not Microsoft. (Score:4, Insightful)
You're SO fired! (Score:5, Funny)
Give me a fucking break (Score:5, Insightful)
For one thing, Windows is not slower than Unix in most of the tests. It's slower than Unix in some of the tests and faster in others. For another, these benchmark results are for low-level things like spawning processes and threads. Any programmer who knows anything about Unix and Windows will tell you that threads are cheaper in Windows and processes are cheaper in Unix, because that's how they were designed. So of course Windows is going to be slower than Unix at creating processes, and of course Unix is going to be slower than Windows at creating threads.
The only thing worth reporting about this thing is the performance of Singularity, which looks like it's shaping up to be an excellent modern kernel.
Re:Give me a fucking break (Score:3, Informative)
(I'm not being sarcastic: I haven't yet had time to read the full report, and would genuinely like to know.)
Re:Give me a fucking break (Score:5, Insightful)
Depends (Score:3, Insightful)
Re:Give me a fucking break (Score:2)
I agree that the report is meaningless for the purposes suggested in this slashdot write-up. If anything, it tells us that something coming out from MS Research has the potential to kick the asses of both Windows and Linux.
Memory Usage? (Score:3, Funny)
The future: Longhorn will suck far more memory than XP.
They must be in cahoots with the memory makers, alert Rambus!
TFA (Score:2)
"So why is this interesting? Because their test methods reflect Windows internals, not Unix kernel design. There are better, faster, ways of doing these things in Unix, but these guys - among the best and brightest programmers working at Microsoft- either didn't know or didn't care."
So, Windows still loses at times when using what seems to be a biased (or simply uninformed) testing method? Loelz.
Processes v. threads (Score:3, Insightful)
No I didn't RTFA.
Re:Processes v. threads (Score:5, Insightful)
Exactly. NT got its process model from VMS, and process creation was a very heavyweight operation. Unix, by contrast, had a very lightweight process creation operation. Hence NT needed threads to provide a faster alternative to processes, while Unix (whose processes were almost as cheap to create as NT threads) didn't really need threads for anything other than a marketing checklist (about the only thing Unix threads get you that processes don't is fully-shared address space, and I'd argue that's often more a problem than an advantage).
Re:Processes v. threads (Score:3, Interesting)
You're misreading. It's not 90% of the problems out there, it's 90% of the code in a given program that's synchronous.
I really doubt that.
Take, for example, the process of reading data from a single input source and processing it. With no other input sources to look at, and no processing that doesn't require the data you're trying to read, exactly what can the code do while the read's completing?
That's not a typical modern server or end user application.
You incur the overhead of creating a thread, and then
Strangely enough, 5,376,000 (Score:3, Funny)
Wohoo! (Score:3, Funny)
Come on, who cares about statistics? I'm glad they're actually doing something useful: CS research!
Oh wait, this is
Re:Wohoo! (Score:2)
No, the equivalent of 'R&D' on the Microsoft campus is R&P, or 'Research and Patent'. Like most of their would-be innovations, they are born into formaldehyde, destined to serve as courtroom exhibits; the last thing Microsoft can afford is a lively, competitive software industry spurred by brilliant implementations and ideas.
To satisfy your inevitable curiosity, peruse their fine patent collection [uspto.gov]. Plenty of 'research' and 'innovatio
Typical (Score:5, Interesting)
This is pretty typical. Microsoft's biggest competitor is their old software, so their new offerings have to look good against it.
Remember Windows 95's marketing? "32-bit memory protection makes it uncrashable!" Remember Windows 98's marketing? "Even more stable than 95!" Remember Windows 2000's marketing? "Based on an NT core, it's more stable than the crash-prone Windows 9x!"
It's revisionist history. The only way to get a somewhat accurate picture is to compare their current claims with what they've said about new technology in the past.
Article misses the point (Score:4, Informative)
This article takes a very interesting report on a reference implementation of some innovative ideas in OS design and reduces it to a couple of entirely peripheral, seat-of-the-pants benchmarks that support the "OSS rulez!" thesis.
Even people like me, who have only a basic knowledge of OS architecture, can tell you that processes are lightweight in Unix and heavyweight in Windows. The lightweight objects in Windows are threads, which is why Windows makes so much use of threads, while Unix spawns processes left and right.
Typical slashdot post exaggerations (Score:5, Insightful)
This isn't Microsoft (Score:5, Interesting)
supporting quote (Score:5, Interesting)
Waking up? (Score:3, Interesting)
Ballmer's right, it is all about developers. OSS developers can introduce OSS values into the Windows "ecosystem" for lack of a better word and see what happens.
Win/XP, MacOS/X, WhatThe/Heck? (Score:5, Funny)
Entirely OT, I know, but...
Why is it that some people seem to think that all OS names, when they have a qualifier of some kind attached to the generic term, need a slash to separate them? Just because GNU/Linux is written that way does not mean it's some kind of law, people...
It's Windows XP. That's WINDOWS {SPACE} XP. And Mac OS X. Spaces. No slashes.
...
I don't know why I even bother...
Dan Aris
Wow is this ever misleading (Score:5, Insightful)
Here's the table from the paper, ranked best-worst, W=windows, F=freebsd, L=linux, S=singularity:
Read Cycle Counter: W: 2, F: 6, L: 6, S: 8
ABI Call: S: 87, L: 437, W: 627, F: 878
Thread Yield: S: 394, W: 753, L: 906, F: 911
2-Thread Ping-Pong: S: 1207, W: 1658, L: 4041, F: 4707
2-Message Ping-Pong: S: 1452, L: 5797, W: 6244, F: 13304
Process Creation: S: 300000, L: 719000, F: 1032000, W: 5376000
The only stat in this table that Windows trails on is process creation. And anybody who has ever ported Unix code to Win32 knows exactly why: Windows is thread-oriented, and Windows systems don't tend to use helper programs or demand-forking to get work done. Which might be why Windows beats Unix in the thread benchmarks, but not in the IPC benchmarks. On the more general benchmarks, like cycles to issue a system call, Windows falls smack in the middle --- and, again, Windows has a slightly different take on what is and isn't a system call.
Drawing comparisons between Singularity and normal operating systems here is silly. Singularity doesn't have processes in the conventional sense; since there's no hardware dependencies on "process" creation in Singularity, IPC and forking are much faster.
Which is why this benchmark is reasonable inside the Singularity tech report (they're trying to demonstrate that there's a major performance benefit in rethinking boundaries between programs), but totally unreasonable outside that context: these are micro-benchmarks, like the ones CISC and RISC people throw at each other, and they don't describe the amount of time it takes to complete a high-level task. Time to execute a system call is meaningful only in the context of how many system calls it takes to complete the task you're measuring.
Talk about a misleading summary (Score:2)
Oh well, this is slashdot, can't expect editors to, you know, edit anything or even bother to read what they post.
Umm... yeah, sure... but... (Score:3, Funny)
This is not true (Score:5, Informative)
According to the benchmarks published there
- at most OS jobs like threading/process creation, Singularity is at least twice as fast as Linux; Linux is very fast at process creation, while XP is good at threads
- in File Operations, FreeBSD and Linux beat XP and Singularity at random reads
- in File Operations, XP beats Linux and Singularity at sequential reads, with the exception of FreeBSD, which is fastest if the block size is high (and very bad for small block sizes)
- Linux executable sizes are larger than those of the other OSes (whatever that means - more good coding, or less bad code, SCNR)
Please bear in mind that a benchmark does not tell whether the "slower" OS actually invested more time in doing some smart stuff that pays off in some other way. In particular, I would not be surprised if an experimental OS like Singularity did less.
partial repost from http://slashdot.org/comments.pl?sid=167223&cid=13
Windows is faster in Ubuntu (Score:3, Interesting)
Shame to have to set up like this just to run unreal editor, though. Oh, for you gamers out there, UT runs so much smoother and faster in Ubuntu, it's not funny. UT2k4 (has linux installer on the 1st cd) runs way better in Ubuntu also. You might want to check it out if you have a spare hard drive you can play around with.
Re:What's the point of CreateProcess benchmarks? (Score:5, Insightful)
Re:What's the point of CreateProcess benchmarks? (Score:3, Informative)
Re:What's the point of CreateProcess benchmarks? (Score:3, Informative)
There are 2 separate issues here
1. Are threads faster than processes? Yes, on both Unix and Windows.
2. Are processes so slow as to be essentially unusable for concurrency? On Windows, yes, for a relatively large problem domain.
(2) i
Cache (Score:2)
Re:What's the point of CreateProcess benchmarks? (Score:2)
Re:premature optimization (Score:2, Funny)
Is "searching the manpages" included in the benchmark time?
*Ducks*
Re:That's why it's called "development" (Score:2)
How much of that advantage is performance based and how much is due to monopoly control?
Re: (Score:2, Informative)
Re:Colonel Realtime (Score:2)
Re:Linux is NOT a Unix Variant (Score:3, Informative)
Re:5,376,000 cycles for Windows/XP (Score:3, Informative)
Re:OS in C# ??? (Score:3, Interesting)