Anticipatory Scheduler in Kernel 2.5+ Benchmarked 252
gmuslera points to this article at KernelTrap comparing available benchmarks for schedulers available for the 2.5 kernel, with the 2.4's scheduler as a reference poin. "In some cases, the new Anticipatory Scheduler performs several times better than the others, doing a task in a few seconds instead minutes like the others."
Anticipatory (Score:4, Funny)
Re:Anticipatory (Score:5, Funny)
Re:Anticipatory (Score:2, Funny)
Re:Anticipatory (Score:2, Funny)
Heh. More like "because making user-friendly Unix is easier than making unix-sturdy MacOS".
Re:Anticipatory (Score:5, Funny)
C'mon, try grounding your trolling in reality next time. Scheduling on OSX is handled by Mach [apple.com], which was developed at CMU by Avie Tevanian [apple.com], further developed at NeXT, and brought up to 3.0 at Apple.
Apple uses BSD for its UNIX compatibility layer, but that doesn't handle scheduling, which is what this article is about.
Now, if you want to say Apple was dumb for chucking A/UX [faqs.org] in the early nineties, then that'd make a much better troll.
Not exactly (Score:2, Informative)
Re:Anticipatory (Score:2, Funny)
Somehow, I just *knew* this was coming. ;)
Ah, grasshopper, but did you know it will be coming to Slashdot yet again in a day or so? Anyone can predict the future; it takes a sage to predict a Slashdot duplicate.
Ohh the agony... (Score:4, Funny)
Obvious joke... (Score:4, Funny)
Oops (Score:3, Funny)
unfortunately (Score:5, Funny)
The task in question was anticipating things, so the test might not be all that fair.
Re:unfortunately (Score:3, Funny)
Actually I was making a joke (that the benchmarking task itself was the act of anticipating things, which this tool would obviously have a one-up on...). But someone modded me up Insightful, so maybe I'm on to something.
It'd be like testing rice painted white in a glass of milk during a snowstorm in a whiteness contest.
I've tried it and it rocks (Score:5, Interesting)
Switching to a new tabbed terminal in fluxbox it takes ages to redraw and switching between virtual desktops is an act of futility.
With 2.5 I get good interactive performance and don't see this effect much at all. For sure this is also a bit due to the new VM code.
Of course I would probably get the best interactivity with the SFQ scheduler, but that's secondary in this case. At least xmms doesn't skip with this during very heavy I/O. I do not use the new NPTL code, which I suppose would help further.
Re:I've tried it and it rocks (Score:4, Funny)
Fer sure.
Re:I've tried it and it rocks (Score:2)
Preemption patches (Score:3, Informative)
Re:I've tried it and it rocks (Score:3, Informative)
It's far easier to maintain any multiplicity of state machines using threads rather than queues. Think of multi-user servers using stateful protocols. You might have hundreds or thousands of threads.
Re:I've tried it and it rocks (Score:3, Interesting)
Wow, 80 threads to build a state machine? What a nasty solution.
You should chuck whatever language you're working in and replace it with one that supports full co-routines.
Re:I've tried it and it rocks (Score:2)
Ok, you got my attention. I've done co-routines in Basic Assembly and seem to recall some in PL/I (PL/I-F with the PL/I runtime "owned" by some BAL code). Seems like most "modern" languages have zilch support (except for local static variables). Oh, can they ever simplify some very messy logic.
Re:I've tried it and it rocks (Score:2)
Threads, coroutines - what's the difference?
Now before you bite my head off, I know they are different. But both require individual stacks, coroutines can be emulated with threads - and threads support higher throughput (one thread can do compute while another is blocking on I/O) than single-threaded coroutine implementations.
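For anyone who hasn't met them, the coroutine-as-state-machine idea from this subthread is easy to sketch with Python generators. The protocol, prompts, and password below are all made up for illustration:

```python
def session():
    """A per-connection state machine written as a coroutine.

    Each `yield` suspends until the next input line arrives, so the
    "state" is just the coroutine's instruction pointer and locals --
    no explicit state enum, no per-thread OS stack.
    """
    name = yield "login:"          # state 1: wait for a username
    password = yield "password:"   # state 2: wait for a password
    if password == "sesame":
        yield f"welcome {name}"
    else:
        yield "denied"

def run(inputs):
    """Drive one session from a plain single-threaded loop."""
    s = session()
    out = [s.send(None)]           # prime the coroutine to its first yield
    for line in inputs:
        try:
            out.append(s.send(line))
        except StopIteration:
            break
    return out
```

A single thread can drive thousands of these where a thread-per-connection design needs a stack apiece; the parent's point still stands that real threads win when you want one client computing while another blocks on I/O.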
Re:I've tried it and it rocks (Score:2, Funny)
Re:I've tried it and it rocks (Score:2)
Ketchup scheduler? (Score:3, Funny)
"Anticipation is making me wait"...
Poin..... (Score:5, Funny)
I'm still anticipating the "t" in "point" myself...
Re:Poin..... (Score:5, Funny)
I think i need help...
Re:Poin..... (Score:2)
So any true fan of this game would have caught the pun.
Article text in case of /. effect: (Score:5, Funny)
Re:Article text in case of /. effect: (updated) (Score:2)
<html><body></body></html>
doing one task? (Score:2)
A task? I thought schedulers were for multitasking...
Anti-slashdotting? (Score:5, Funny)
You know, you can ALMOST feel an admin over there just itching to type in "Fuck you Taco! And your site!" instead of the connection stuff...
Re:Anti-slashdotting? (Score:3, Interesting)
Perhaps one better
Microsoft version: (Score:5, Funny)
Computer: I have anticipated you would like to open IE and have already opened it for you.
Me: Ok, then I would like to go to the game review site to see what I want to buy.
Computer: I have already begun the download of the new Age of Empires game, your account has been charged.
Me: Can I at least go to the bathroom?
Computer: No.
Re:Microsoft version: (Score:4, Funny)
Computer: I have already run auto update. all your warez are belong to us.
Re:Microsoft version: (Score:3, Funny)
Either way... (loading double-barreled shotgun)
(credit to Dilbert/Scott Adams)
Don't click that link (Score:3, Funny)
Re:Don't click that link (Score:5, Funny)
httpd: "Intensify the firewall, I don't want anything to get through."
iptables: "Httpd, look!"
httpd: "INTENSIFY THE FIREWALL!"
iptables: "Too late!"
Re:Don't click that link (Score:2)
Everyone knows it doesn't matter how much you intensify the firewall. You obviously should have remodulated it with a hyperspanner.
-
ObComicBookGuy (Score:2)
It's a HYDROSPANNER. Your inaccurate Star Wars references make me laugh.
Re:Don't click that link (Score:2)
This one goes here, that one goes there!
It's about time... (Score:5, Funny)
Re:It's about time... (Score:2, Funny)
Oh, come on now... (Score:2)
Re:It's about time... (Score:2)
That'll fix the problem. Honestly. Promise! =)
*looks serious*
Re:It's about time... (Score:5, Interesting)
If a site is down, Mozilla could pretty easily grab the google cache instead. Or, if there's no google cache or if the cache matches the current page, check archive.org. Mozilla could auto-generate a page offering the user some options. Think about it - it would be the end of 404 errors. Instead of
404 The requested page could not be found.
you could get
The site you requested is currently down. Would you like to use Google's cache [216.239.51.100] instead? I also have a snapshot of the page you requested from August 12, 2002 [archive.org] but older ones are available here [archive.org].
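A rough sketch of how a browser could build those fallback links; the Google-cache query form and the Wayback Machine URL layout used here are assumptions based on their publicly visible URLs, not a documented API:

```python
from urllib.parse import quote

def fallback_links(url, snapshot_date="20020812"):
    """Build candidate fallback URLs for a dead page.

    snapshot_date is a YYYYMMDD string; the URL formats themselves
    are assumptions, not a stable interface.
    """
    return {
        # Google's cache is reachable via a cache: search query.
        "google_cache": "http://www.google.com/search?q=cache:" + quote(url, safe=""),
        # A dated Wayback Machine snapshot...
        "wayback_snapshot": f"http://web.archive.org/web/{snapshot_date}/{url}",
        # ...and the index of all snapshots for the page.
        "wayback_all": f"http://web.archive.org/web/*/{url}",
    }
```

A "smart 404" page would just render these three links instead of the bare error.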
Re:It's about time... (Score:2)
Re:It's about time... (Score:5, Insightful)
Re:It's about time... (Score:2)
I wonder if anyone bothered to ask GOOGLE if it was ok. They did just threaten to sue some guy for using the term GOOGLE as a verb on his website. Had an article on it here just 2 days ago.
I'm too lazy to look up the original link from 2 days ago. Just wait a day or two and read it then, I'm sure it will get duped
Re:It's about time... (Score:2, Informative)
No, they didn't. And it wasn't a cease-and-desist letter. All they said was, "Hi, I'm a google lawyer (TM). We like our trademark, which for obvious reasons you'll understand. Could you please indicate that google is a trademark in your definition, or, if it's easier, just remove it? Thanks.".
Unlike you, I'm not too lazy to type "google" into the Slashdot search box you probably see at the top-right of this page, click search, and click through to the article [slashdot.org] and from there the (NON-cease-and-desist) letter [linguistlist.org].
It reads, in full:
Now quit spreading FUD. I love Google.
Re:It's about time... (Score:2)
The letter was polite and reasonable. I suppose you'd prefer the usual, I am a lawyer god, you are unworthy of even licking the dirt from my boots, obey me or else letters?
They didn't even ask that he remove the definition, just to note that it is TM. If only other lawyers would read that and learn!
Re:It's about time... (Score:2)
Where do you think the mozilla hackers live?
Re:It's about time... (Score:2)
Why not instead use Gnutella or one of the lovely peer to peer swarming protocols to download it instead? That's what they're for. I don't understand why Mozilla doesn't already do this when several of these protocols already work explicitly on URLs.
Re:It's about time... (Score:3, Funny)
Re:It's about time... (Score:2)
how it works (Score:5, Informative)
where it explains what anticipatory scheduling does.
(btw, it seems that freebsd had it for ages)
Rocky Horror Linux Show (Score:5, Funny)
Ok.. I'll bite... what's a Scheduler? (Score:4, Interesting)
Re:Ok.. I'll bite... what's a Scheduler? (Score:5, Informative)
The Anticipatory Scheduler is designed to optimize your disk I/O based on the assumption that reads of related material tend to happen in short succession, while writes tend to be singular and larger. As a result, when the scheduler encounters a read, it anticipates that there will be other reads in short succession, so it waits and then checks for them and, if they're there, they move to the front of the line. The name comes from the tiny waiting period when it anticipates future reads.
This is, of course, a condensed version of what I think I've learned from reading KernelTrap for the last few months. Someone's bound to tell you I'm talking arse.
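The decision the parent describes can be caricatured in a few lines of Python. The 6 ms window, the "nearby" threshold, and the request tuples are all invented for the sketch; the real scheduler tunes these dynamically:

```python
def next_request(queue, last_pos, last_pid, incoming=None, window_ms=6):
    """Pick the next disk request, anticipatory-style (toy model).

    queue    -- list of (pid, sector) pending requests
    last_pos -- sector where the head finished the previous read
    last_pid -- process whose read just completed
    incoming -- (arrival_ms, pid, sector) request that shows up while
                we deliberately idle, or None
    """
    # Anticipation: if the process we just served issues another read
    # within the window, and it's near the head, take it first even
    # though other requests were already queued.
    if incoming is not None:
        arrival, pid, sector = incoming
        if arrival <= window_ms and pid == last_pid and abs(sector - last_pos) < 100:
            return (pid, sector)
    # Otherwise fall back to shortest-seek-first over the queue.
    return min(queue, key=lambda r: abs(r[1] - last_pos))
```

The whole trick is that brief, deliberate idleness: a work-conserving scheduler would seek away immediately and pay the long seek back.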
Re:Ok.. I'll bite... what's a Scheduler? (Score:5, Informative)
FCFS (first come, first served) - the easy, stupid way. Take requests as they come. If the requests arrive as front, end, front, end, performance suffers, because obviously you'd rather service them as front, front, end, end.
SSTF (shortest seek time first) - do the request that is shortest to the head first. The problem with this is if you keep asking for stuff at the front of the disk and have a lone request for the end of the disk, the lone request could get ignored for a long time (starved) since the scheduler does the stuff at the front since seek times are much lower.
SCAN - the head starts at one end of the disk and goes to the end, servicing requests along the way, but never going back so that that lone request from the previous method does get serviced. When it gets to the end the head moves back toward the front, servicing requests along the way. It keeps going back and forth.
C-SCAN - Variant of SCAN where it doesn't go back and forth. It goes from front to back servicing requests, then goes all the way back to the front before it starts servicing again. It gives more uniform times because if you're using SCAN and your request at the beginning is just missed by the head, you have to wait until it goes all the way to the other end and comes all the way back. In this method it goes to the end and then you're the next request to be serviced if you are at the beginning.
LOOK - The same as C-SCAN and SCAN except it doesn't blindly go to the end of the disk; it stops and turns around when there aren't any more requests in that direction. Of course, if you show up right after the head changes direction, sucks to be you.
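A few of the policies above are easy to compare on the classic textbook request queue (head starting at cylinder 53). This toy model counts head movement only and ignores the starvation problem that makes SSTF dangerous in practice:

```python
def total_seek(order, start):
    """Total head movement to service `order` starting at cylinder `start`."""
    pos, total = start, 0
    for cyl in order:
        total += abs(cyl - pos)
        pos = cyl
    return total

def fcfs(reqs, start):
    """First come, first served: take requests in arrival order."""
    return list(reqs)

def sstf(reqs, start):
    """Shortest seek time first: always grab the closest pending request."""
    pending, pos, order = list(reqs), start, []
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

def look(reqs, start):
    """LOOK: sweep toward higher cylinders, turn around at the last request."""
    up = sorted(c for c in reqs if c >= start)
    down = sorted((c for c in reqs if c < start), reverse=True)
    return up + down

# The classic example queue, head at 53.
reqs = [98, 183, 37, 122, 14, 124, 65, 67]
```

On that queue FCFS moves the head 640 cylinders, SSTF 236, and LOOK 299: SSTF wins here, but it's the one that can starve a lone far-away request.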
kernel 2.5 (Score:2)
doing a task in a few seconds instead minutes like the others
Excellent point the article! I'm glad I saw this Slashdot first. I can't wait the new kernel to be released.
Disk I/O, not CPU schedulers (Score:5, Informative)
Re:Disk I/O, not CPU schedulers (Score:4, Funny)
isn't this "news" quite old? (Score:2, Interesting)
I remember Ingo Molnar introducing this scheduler running in O(1) time months ago, sometime in late 2002... AFAIK it has been part of the 2.5 kernel for quite a long time, and at the time it was first tested there were some benchmarks. I vaguely remember something about "we tried to launch several hundred processes; without the scheduler: 15 minutes, with the scheduler: 2 seconds." So what is so new about some benchmarks being available?
Or am I completely off-topic? ;)
Re:isn't this "news" quite old? (Score:2)
Re:isn't this "news" quite old? (Score:3)
Good stuff, but... (Score:5, Informative)
Re:Good stuff, but... (Score:2)
So did Kerneltrap-readers:
http://www.kerneltrap.org/node.php?id=574
Ouch! Touché!
Check the mailing list for more info (Score:5, Informative)
Parallel streaming reads:
Here we see how well the scheduler can cope with multiple processes reading
multiple large files. We read ten well laid out 100 megabyte files in
parallel (ten readers):
for i in $(seq 0 9)
do
time cat 100-meg-file-$i >
done
2.4.21-pre4:
0.00s user 0.18s system 2% cpu 6.115 total
0.02s user 0.22s system 1% cpu 14.312 total
0.01s user 0.16s system 0% cpu 37.007 total
2.5.61+hacks:
0.01s user 0.16s system 0% cpu 2:12.00 total
0.01s user 0.15s system 0% cpu 2:12.12 total
0.01s user 0.19s system 0% cpu 2:13.51 total
2.5.61+CFQ:
0.01s user 0.16s system 0% cpu 50.778 total
0.01s user 0.16s system 0% cpu 51.067 total
0.01s user 0.18s system 0% cpu 1:32.34 total
2.5.61+AS
0.01s user 0.17s system 0% cpu 27.995 total
0.01s user 0.18s system 0% cpu 30.550 total
0.01s user 0.16s system 0% cpu 34.832 total
streaming write and interactivity:
It peeves me that if a machine is writing heavily, it takes *ages* to get a
login prompt.
Here we start a large streaming write, wait for that to reach steady state
and then see how long it takes to pop up an xterm from the machine under
test with
time ssh testbox xterm -e true
there is quite a lot of variability here.
2.4.21-4: 62 seconds
2.5.61+hacks: 14 seconds
2.5.61+CFQ: 11 seconds
2.5.61+AS: 12 seconds
Streaming reads and interactivity:
Similarly, start a large streaming read on the test box and see how long it
then takes to pop up an x client running on that box with
time ssh testbox xterm -e true
2.4.21-4: 45 seconds
2.5.61+hacks: 5 seconds
2.5.61+CFQ: 8 seconds
2.5.61+AS: 9 seconds
copy many small files:
This test is very approximately the "busy web server" workload. We set up a
number of processes each of which are reading many small files from different
parts of the disk.
Set up six separate copies of the 2.4.19 kernel tree, and then run, in
parallel, six processes which are reading them:
for i in 1 2 3 4 5 6
do
time (find kernel-tree-$i -type f | xargs cat >
done
With this test we have six read requests in the queue all the time. It's
what the anticipatory scheduler was designed for.
2.4.21-pre4:
6m57.537s
6m57.916s
2.5.61+hacks:
3m40.188s
3m56.791s
2.5.61+CFQ:
5m15.932s
5m50.602s
2.5.61+AS:
0m44.573s
0m53.087s
This was a little unfair to 2.4 because three of the trees were laid out by
the pre-Orlov ext2. So I reran the test with 2.4.21-pre4 when all six trees
were laid out by 2.5's Orlov allocator:
6m12.767s
6m13.085s
Not much difference there, although Orlov is worth a 4x speedup in this test
when there is only a single reader (or multiple readers + anticipatory
scheduler)
Re:Check the mailing list for more info (Score:2)
Just wondering...
Re:Check the mailing list for more info (Score:2)
2.5 is the bomb so far, in so many ways.. (Score:5, Insightful)
When I first started hacking on Linux, I was working with a seasoned Linux kernel hacker whom my company hired as a consultant. He helped us with some I/O issues and such, did some other tweaks, and gave us a ton of inspiration to go get after it ourselves. (You'd be amazed at how many people are afraid to just start making changes to kernel code.) He is a wickedly cool individual, and as someone who's had a lot of schooling and experience, it was one of the best learning experiences I can remember.
The first thing I started dorking with after that experience was the scheduler because I, like all other hackers, know how to schedule stuff. At the time (early 2.x) the scheduler was also a fairly easy-to-digest piece of code that could impact the system in great ways.
Well, all my stuff got bit-bucketed. I called up our consultant guy, who was my friend by now: "What's the deal? Linus doesn't like my stuff. How do you mail him stuff?" And his answer was that pretty much everybody wants to tweak the scheduler; everybody sends stuff in. Linus is sage in his wisdom; schedulers are freaking hard because there is always a pedantic worst case that sucks and actually shows up in the real world. Linus has always done fairly simple things that aren't best but certainly aren't worst. So 2.0 had pretty straight round robin. In 2.2 and 2.4 they started to add queuing schedulers with niceness. In 2.5 we're going to get a pretty killer scheduler that has taken a ton of effort to tweak, and there are still discussions about exposing parameters to the user via /proc or something because you can find cases where it doesn't perform as well.
Now this I/O scheduler is opening up a whole new can of worms: it's a new chunk of code called "scheduler", and all hackers know scheduling. In the past it has been fairly simple. It should be fun to watch, and the kernel is going to kick mucho ass in the end. There will be a lot of talk and debate about this stuff. It's also distilled down to the trusted set that Linus will let play with things called "scheduler".
WOW CS 162 Lecture (Score:3, Interesting)
File I/O primitives (Score:4, Informative)
The problem is that such I/O layers need to be implemented at least partially outside user space, to allow interprocess coordination when a file is being accessed simultaneously. Also, to get the most benefit, everything should use it.
Re:File I/O primitives (Score:4, Informative)
Re:File I/O primitives (Score:3, Interesting)
At the same time I'm not quite sure where a solution like adaptive I/O scheduling would help on a real system because, in our example of an Apache server, whilst it is stalled writing a web page to you over TCP, it can be reading another page off disk for me. In a true server load, there is little think time because the next request comes in.
Re:File I/O primitives (Score:2)
Yes, but it doesn't seem to be getting it right hence the problem requiring adaptive I/O scheduling. Certainly ext2 should automatically determine that, for example, our html web page read by Apache is sequential and the data should be read-ahead buffered. However, this doesn't seem to be helping.
You're misunderstanding the issue here. Linux does read-ahead on long linear reads just fine. That's not the issue. The issue is primarily not a server issue, in fact. It's primarily a desktop/workstation issue.
There are often multiple processes on any machine wanting to do I/O simultaneously. Since the hard drive can only process one request at a time, the processes are competing for a limited resource (hard drive throughput). Thus, their access has to be scheduled properly to ensure the most efficient use of the hard drive.
The reason this is a big win is because of a system's apparent speed, or interactiveness. If you've ever been processing audio at a workstation and tried to open a calculator or an xterm or something like that while streaming media is being written to disk, you would know that it sometimes appears as if the computer has practically locked up, except for the sound of the hard drive. If reads can be pushed to the head of the queue, this helps immensely, for several reasons.
There's a couple more examples I could give, but I'm sure you get the picture. This doesn't have anything to do with read-ahead. That works just fine :)
Re:File I/O primitives (Score:3, Informative)
There we go again... (Score:2)
Seriously though, Morton's a great chap and one of the few who have really worked to make the kernel usable on the desktop as well (low latency patches, lock breaking patches, and the list would go on forever).
I can barely wait for 61-AS to compile...
multiple write tasks, why not queue it with shell? (Score:2, Interesting)
Why not use the download manager programs... for all file transferring?
My priorities:
1. user interface responds effectively in realtime.
2. CD writes don't fail
3. Video doesn't skip
4. files transfer quickly.
I would actually like the ability to switch the mode of the file scheduler.
If I am not doing 2. or 3. then why not switch to something that makes 4 happen?
I saw something ridiculous, like a 10 second wait for a login prompt??!?!
The system should have that all ready ahead of time, and it should take no more than
--I don't care about spelling enough to spend the vast quantity of time to get this to the spell checker.
Separating devices? (Score:2)
Anticipatory scheduling is: (Score:3, Informative)
Here [rice.edu] is the explanation of what anticipatory scheduling is. From what I have understood (please correct me if I am wrong, I am not a kernel hacker), 'anticipatory scheduling' means the following:
The I/O subsystem (the part of the operating system that reads/writes to/from the hard disk) waits a little longer before servicing an I/O request from an application other than the current one; if the current application issues another I/O request while the I/O subsystem is waiting, the overall system throughput is higher because the hard disk's head moves less.
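The head-movement saving is easy to put numbers on. In this sketch, two processes stream sequential reads from opposite ends of the disk (the sector numbers are invented); a work-conserving scheduler ping-pongs between them, while an anticipatory one batches each process's reads:

```python
def head_travel(requests, start=0):
    """Total head movement for servicing `requests` (sector numbers) in order."""
    pos, total = start, 0
    for s in requests:
        total += abs(s - pos)
        pos = s
    return total

# Two processes stream eight sequential reads each, from opposite
# ends of the disk (made-up sector numbers).
a = [1000 + i for i in range(8)]       # process A's reads
b = [900000 + i for i in range(8)]     # process B's reads

# Work-conserving scheduler: serve whoever is waiting -> ping-pong.
pingpong = [s for pair in zip(a, b) for s in pair]
# Anticipatory scheduler: idle briefly, collect each process's run.
batched = a + b
```

With these made-up numbers, ping-ponging costs well over ten times the head travel of batching, which is roughly the effect the benchmark figures elsewhere in this thread are showing.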
Re:Great! (Score:2)
On a side note, I've been running 2.5s for about 3 months now and I've definitely noticed improvements in response time and such. It's a great kernel.
Re:Great! (Score:2, Interesting)
Oh, and while ur at it u might want to take a look at devfs too (I run it without devfsd and I didn't have any problems, u just need to change inittab from ttyX to vc/X, fstab and lilo/grub).
The only kink I had was with xine, which doesn't like v4l, but mplayer works fine... Hmm, oh, and same FPS for Q3; I doubt that any of u will feel much difference anywhere (unless u didn't use the preempt patch for 2.4). But I haven't played with NPTL yet, does anybody have any experience with that?
Re:Great! (Score:4, Interesting)
Although I _suspect_ they will run fine on 2.5, I don't want to risk it. It's still a little too bleeding edge for me. They call it bleeding edge for a reason, because you _will_ bleed and get hurt from time to time.
I guess I am a big fat ninny when it comes to bleeding edge stuff (although I do lust for all the new toys, the waiting just increases my contentment when such cool stuff becomes part of the stable stuff) :-)
Speaking of avoiding the bleeding edge, it would be sooo cool if this IO scheduler was backported to 2.4.
Re:Great! (Score:5, Funny)
* These chips are basically fucked by design, and getting this driver
* to work on every motherboard design that uses this screwed chip seems
* bloody well impossible. However, we're still trying.
Re:Great! (Score:5, Informative)
Re:Great! (Score:2)
Re:It would be neat... (Score:2)
$ vi foo.c
$ make
$ gdb foo
$ vi foo.c
$ make
$ gdb foo
etc.
This shell could pick up on the pattern and anticipate what you were about to do.
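A toy version of such a pattern-spotting shell, guessing the next command from bigram counts over history (entirely hypothetical, of course):

```python
from collections import Counter, defaultdict

def predict_next(history):
    """Guess the next command: whatever most often followed the last one.

    history is a list of command strings; returns a command string,
    or None if the last command has never been followed by anything.
    """
    follows = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        follows[prev][nxt] += 1
    last = history[-1]
    if not follows[last]:
        return None
    return follows[last].most_common(1)[0][0]
```

Feed it the edit/compile/debug loop above and after one full cycle it would start prefetching `gdb foo` the moment `make` finishes.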
Re:It would be neat... (Score:2)
What about scripting?
The cynic in me figures that most of these chrome-and-tailfin features are about driving you to buy more hardware as much as they are about improving the computing experience.
Re:It would be neat... (Score:2)
Re:It would be neat... (Score:4, Informative)
I have never done it, but it is supposed to work. Unfortunately, it is pretty much limited to static analysis -- it doesn't allow for programs whose usage patterns change with time. For that you need some kind of dynamic recompilation, such as provided by HP's Dynamo, Transmeta's code morphing, or perhaps some Java JITs (I don't know if any of them implement this).
Personally, I think profile directed optimization done by a static compiler is a waste of time. All optimizations should be done at the best place, and for many optimizations, that is the static compiler, but many others can be better done by run time optimizers, or the CPU, and this is one of them.
Re:It would be neat... (Score:2)
I'm not sure if it's made it into version 1.1, but this kind of runtime profiling is a planned feature for the .NET framework.
On a related note, I've given up trying to install the copy of Visual Studio .NET MS gave me, after the installation crashed for the 5th time. Out of patience error.
Profile Guided Optimisation (Score:2)
Re:cache / copy / mirror of page is here (mod up!) (Score:5, Informative)
mirror of story to be found here [stuwo.net]
http://www.stuwo.net/download/ktrap.html
Re:cache / copy / mirror of page is here (mod up!) (Score:2)
It's exempted from the slashdotting message.
Re:What about other OSen? (*BSD, Windows) (Score:3, Informative)
Also, I doubt that one could alter the I/O scheduler (let alone install an alternative) in the win* operating systems.
The AS I/O scheduler is very very interesting. I hope some kind soul would backport it to 2.4.
Re:Anyone notice 2.4.18 is sticky? (Score:2)
Re:Finally (Score:5, Interesting)
Nah, it handles certain start-up costs for complex applications better. This may or may not have anything to do with multithreading per se.
I don't run KDE, but I understand that it has had speed issues in the past because it uses a lot of interconnected C++ shared libraries, which really tax the dynamic loader. The Windows link scheme, by the way, is much more primitive (read: fast at runtime). Microsoft also uses a hack (disk layout profiling) to speed up load time further. (Not that "hack" is necessarily a bad thing - after all it does get the job done.)
A couple of years ago, Jakub Jelinek came up with a utility similar to IRIX Quickstart for ELF binaries / libraries, which does "prelinking" to dramatically reduce relocation overhead at runtime in the common cases (without sacrificing flexibility, for the uncommon cases). A side effect is reducing memory usage due to COW. I never heard what happened to that project - anyone know if it is considered production-quality yet, or if binutils / glibc will be shipping it any time soon? Apparently it helped KDE quite a bit.
Re:Finally (Score:2, Informative)
KDE has gained quite some speed with the last version changes. The gap is not as large as you remember.
Linux != (KDE || GNOME) (Score:2)
You don't know what you are talking about. I've run plenty of Linux and M$ configurations, and every time Linux has been faster. My 500Mhz K6-2 Linux system takes less than 30 seconds to boot, including the 10-15 seconds the BIOS takes.
Yeah whatever. Linux handles forking many times better.
What are you comparing against? Mozilla on Linux vs IE on Windows? You do realize MS preloads the IE binaries on boot? If it's Mozilla on each platform, I'd want to know why it takes longer. Does your binary have GNOME/KDE dependencies? The developers may also have put in MS Windows specific stuff--Netscape's primary focus was and is MS Windows for the most part, and their developers suck--which is why Mozilla/Netscape Navigator are so bloated and crappy in the first place.
The reason it takes so long to start up is that it is a bloated piece of crap, and the other apps that take minutes to come up must be bloated too. (You mentioned KDE; it is not essential, and it is very bloated. It also requires a daemon called DCOP, which probably accounts for half the startup time.) This is why your "linux" apps take a long time to start.
I don't run KDE or GNOME. Most of my programs don't take more than 5 seconds to load, and start instantly if their files are already in my cache. The only two that don't are GIMP and Mozilla. The GIMP takes less than 15 seconds, and only so because it has tonnes of plugins to load. Mozilla is bloated like I said.
Yes, KDE's performance. Not the performance of Linux. Not XFree86's performance. KDE's performance. You do not need KDE or GNOME to run a Linux workstation. I don't get why people do this.
A bogus statement. If you have a good compiler[1] and use optimizing flags, then cross-"platform"[2] programs don't have a significant performance hit, and not any more than MS's designs. There are some types of applications where assembly language (meaning processor specific instructions) will make the program perform better, but if you look closely, many of the open source projects which will benefit from this already do it--for at least the IA32 (80386+) processors, if not others.
[1] gcc 2.95.3 is good enough for me, but I hear Intel's Linux compiler works even better for processors made by Intel. The assumption that cross-platform applications don't perform well is FUD spread by Microsoft.
[2] Here I think you mean different processor designs, but it can apply to different operating systems too.
Re:Finally (Score:2, Interesting)