Microsoft Reports OSS Unix Beats Windows XP
Mortimer.CA writes "In a weblog entry, Paul Murphy mentions a Microsoft report (40 page PDF) that in many instances FreeBSD 5.3 and Linux perform better than Windows XP SP2. The report is about MS' Singularity kernel (which does perform better than the OSS kernels by many of the metrics they use), and some future directions in OS design (as well as examination of the way things have been done in the past)." From the post: "What's noteworthy about it is that Microsoft compared Singularity to FreeBSD and Linux as well as Windows/XP - and almost every result shows Windows losing to the two Unix variants. For example, they show the number of CPU cycles needed to "create and start a process" as 1,032,000 for FreeBSD, 719,000 for Linux, and 5,376,000 for Windows/XP."
Re:Singularity is truly an intriguing system. (Score:5, Interesting)
Okay, but how many of their innovations (Christ Microsoft loves that word!) actually make it to the outside world?
I think your comparison to Bell Labs is apt, though, in that much of what Bell Labs created required others to turn into real products. AT&T/Ma Bell sat on every innovation until it nearly suffocated for lack of capital investment.
Typical (Score:5, Interesting)
This is pretty typical. Microsoft's biggest competitor is their old software, so their new offerings have to look good against it.
Remember Windows 95's marketing? "32-bit memory protection makes it uncrashable!" Remember Windows 98's marketing? "Even more stable than 95!" Remember Windows 2000's marketing? "Based on an NT core, it's more stable than the crash-prone Windows 9x!"
It's revisionist history. The only way to get a somewhat accurate picture is to compare their current claims with what they said about the same technology back when it was new.
Re:Too Telling (Score:4, Interesting)
This isn't Microsoft (Score:5, Interesting)
Re:44 pages and the main question is still unanswe (Score:5, Interesting)
But Singularity isn't all new; it just implements old ideas: Occam and QNX!
But in my opinion, Singularity just might be the most interesting OS to emerge in recent years. It will be interesting to see how long it takes the free software world to come up with something similar.
Waking up? (Score:3, Interesting)
Ballmer's right, it is all about developers. OSS developers can introduce OSS values into the Windows "ecosystem" for lack of a better word and see what happens.
Re:Too Telling (Score:5, Interesting)
I don't know what those 5M vs. 1M cycles are measuring. But what I do know is that, fundamentally, Windows was designed around high-performance threading/wait operations and high-performance asynchronous I/O, whereas Unix and its derivatives rely on high-performance process creation and blocking I/O for server applications.
E.g., the Apache 1.3.x series performs poorly on Windows because it was a straight port of the Unix edition, using processes rather than threads.
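To make the distinction concrete, here's a toy sketch of the two worker models (Python, purely illustrative; `handle_request` is a made-up stand-in for a real handler, and the process version assumes Unix-style fork):

```python
import multiprocessing
import threading

def handle_request(req):
    # Hypothetical stand-in for a real request handler.
    return req.upper()

def process_worker(req, q):
    # Prefork style (Apache 1.3.x): one OS process per worker,
    # results passed back through an IPC queue.
    q.put(handle_request(req))

results = []

def thread_worker(req):
    # Threaded style (what Windows favors): workers share one
    # address space, so results land in a shared list directly.
    results.append(handle_request(req))

q = multiprocessing.Queue()
p = multiprocessing.Process(target=process_worker, args=("ping", q))
p.start()
proc_result = q.get()
p.join()

t = threading.Thread(target=thread_worker, args=("ping",))
t.start()
t.join()

print(proc_result, results[0])  # PING PING
```

Same work either way; the difference is whether each worker pays for a whole process (and IPC) or just a thread in a shared address space.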
Re:dependance or dependability? (Score:4, Interesting)
supporting quote (Score:5, Interesting)
Re:Too Telling (Score:3, Interesting)
A great product is a great product regardless of who makes it. I thought OSS was a big deal because it emphasised great engineering with openness. So, if you can't handle the heat, then stay out of the kitchen.
Re:OS in C# ??? (Score:3, Interesting)
Modern CPUs could be quite a bit faster if they didn't have to support C. Take a look sometime at all the die space an Athlon 64 spends on things like the TLB. Also look at how it had to increase L1 cache latency by 50% (from 2 cycles to 3) just to support the TLB lookup. All of this would be unnecessary if C programs couldn't overwrite whatever memory they wanted.
Process concurrency is hardly the panacea (Score:3, Interesting)
Throw out years of hard work? Give me a break! It almost seems that you are blaming the poor quality of modern threaded applications on Windows! That's rich!
Concurrency is difficult to use correctly no matter what technology you use. Inter-process shared/mapped memory is just as susceptible to race conditions as cross-thread shared memory, and inter-process synchronization logic can deadlock just as easily as thread synchronization logic. And the results are the same: once a process is deadlocked, or corrupts its data due to a race condition, what difference does it make if it's running in its own address space? The software has failed catastrophically either way!
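For illustration, here's a minimal sketch (Python; the names are mine, and threads stand in for either model) of the read-modify-write race that bites threads and shared-memory processes alike, plus the lock that fixes it in both worlds:

```python
import threading

counter = 0
lock = threading.Lock()

def racy_increment(n):
    # Unsynchronized read-modify-write: two workers can both read the
    # same value and one update is lost. The hazard is identical with
    # inter-process shared/mapped memory.
    global counter
    for _ in range(n):
        tmp = counter
        counter = tmp + 1

def safe_increment(n):
    # The fix is the same in both worlds: mutual exclusion.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 with the lock; the racy version may lose updates
```

Swap the shared global for a mapped segment and `racy_increment` is just as broken, which is the point: the address-space boundary doesn't save you from the race.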
We are ALL well aware that poorly written multi-threaded software is unreliable and that threads can easily trash other threads' data if not written correctly. And yet, for performance-critical applications, programmers still prefer to use threads. Why? It's simple: Because, for MANY applications, the benefits in performance outweigh the risks.
Finally, I'd like to point out one more thing. You claim that to get "reasonable concurrency" on Windows you are FORCED to use threads. I completely disagree. While process startup latency is relatively high on Windows, Windows offers a rich set of interprocess communication mechanisms, and context switching is quite fast. And if your program is so performance-critical that process startup latency is your biggest bottleneck, then switching to threads seems perfectly reasonable.
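As a rough illustration of request/reply over IPC (a Python `multiprocessing.Pipe` standing in here for mechanisms like Windows named pipes or shared sections; this is not the Win32 API itself, and it assumes Unix-style fork):

```python
import multiprocessing

def worker(conn):
    # Child process: receive a request over the pipe, send a reply.
    msg = conn.recv()
    conn.send(msg[::-1])
    conn.close()

parent_conn, child_conn = multiprocessing.Pipe()
p = multiprocessing.Process(target=worker, args=(child_conn,))
p.start()
parent_conn.send("hello")
reply = parent_conn.recv()
p.join()
print(reply)  # olleh
```

Once the processes exist, a round trip like this is cheap; it's only the startup that's expensive, which is why long-lived cooperating processes are a perfectly workable design.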
Windows is faster in Ubuntu (Score:3, Interesting)
Shame to have to set up like this just to run unreal editor, though. Oh, for you gamers out there, UT runs so much smoother and faster in Ubuntu, it's not funny. UT2k4 (has linux installer on the 1st cd) runs way better in Ubuntu also. You might want to check it out if you have a spare hard drive you can play around with.
Re:Processes v. threads (Score:3, Interesting)
You're misreading. It's not 90% of the problems out there, it's 90% of the code in a given program that's synchronous.
I really doubt that.
Take, for example, the process of reading data from a single input source and processing it. With no other input sources to look at, and no processing that doesn't require the data you're trying to read, exactly what can the code do while the read's completing?
That's not a typical modern server or end user application.
You incur the overhead of creating a thread, and then the parent simply blocks until the child thread completes. It's less overhead to simply do the read in the same thread, as a plain function call.
Who was suggesting that we should call functions in a separate thread when the operation is synchronous? My example showed how silly it was for you to state that a shared address space between async operations wasn't needed.
You incur the overhead of creating a thread, and then the parent simply blocks until the child thread completes.
I would think someone with your experience would know about something as basic as thread-pooling.
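In case it helps the thread: with a pool, threads are created once and reused, so each task pays only queueing cost, not thread-creation cost. A minimal sketch using Python's standard library (`read_and_checksum` is a made-up stand-in for real per-request work):

```python
from concurrent.futures import ThreadPoolExecutor

def read_and_checksum(block):
    # Hypothetical per-task work; in a real server this might be a
    # blocking read plus some processing.
    return sum(block) % 256

blocks = [bytes([i] * 64) for i in range(8)]

# Four threads are created once and reused across all eight tasks.
with ThreadPoolExecutor(max_workers=4) as pool:
    sums = list(pool.map(read_and_checksum, blocks))

print(sums)  # [0, 64, 128, 192, 0, 64, 128, 192]
```

The "overhead of creating a thread" argument mostly evaporates once the threads are amortized over many operations like this.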
Most of a program is like that: the caller can't proceed until the function it's called returns its results, so running the called function in a separate thread doesn't actually result in any parallelization.
So most programs today aren't GUI applications, where the UI should not be *blocked* whilst some request is processed? Even something as basic as I/O can easily be made asynchronous: there is almost always some post-processing to be done after a file (or part of a file) is loaded into memory, and it can be done whilst the next block is read.
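A quick sketch of that overlap (Python, illustrative only; `process` is a made-up post-processing step): kick off the read of the next block, then handle the current one while that read is in flight:

```python
from concurrent.futures import ThreadPoolExecutor
import io

def process(block):
    # Hypothetical post-processing done while the NEXT block is read.
    return block.count(b"x")

src = io.BytesIO(b"x" * 10 + b"y" * 6 + b"x" * 4)  # stand-in for a file
BLOCK = 8

total = 0
with ThreadPoolExecutor(max_workers=1) as pool:
    pending = pool.submit(src.read, BLOCK)      # kick off the first read
    while True:
        block = pending.result()
        if not block:
            break
        pending = pool.submit(src.read, BLOCK)  # next read is in flight...
        total += process(block)                 # ...while we process this one

print(total)  # 14
```

Classic double-buffering: the read of block N+1 overlaps the processing of block N, so neither the I/O nor the CPU sits idle.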
Most of a program is like that: the caller can't proceed until the function it's called returns its results, so running the called function in a separate thread doesn't actually result in any parallelization.
Sounds like you need to be more creative in how you design your applications. Most modern applications aren't top-down flow chart style processes.