High Performance Linux Kernel Project — LinuxDNA
Thaidog submits word of a high-performance Linux kernel project called "LinuxDNA," writing "I am heading up a project to get a current kernel version to compile with the Intel ICC compiler and we have finally had success in creating a kernel! All the instructions to compile the kernel are there (geared towards Gentoo, but obviously it can work on any Linux) and it is relatively easy for anyone with the skills to compile a kernel to get it working. We see this as a great project for high performance clusters, gaming and scientific computing. The hopes are to maintain a kernel source alongside the current kernel ... the mirror has 2.6.22 on it currently, because there are a few changes after .22 that make compiling a little harder for the average Joe (but not impossible). Here is our first story in Linux Journal."
GCC compatibility (Score:2, Interesting)
Re:GCC compatibility (Score:5, Insightful)
Compilers shouldn't need to be compatible with each other; code should be written to standards (C99 or so), and Makefiles and configure scripts should weed out the compiler-specific options automatically.
Yes! (Score:3, Insightful)
Re:GCC compatibility - Time to move to Java? (Score:5, Funny)
They should think about moving to a Java kernel. They could just bootstrap one of the new, clever "Just-In-Time" Virtual Machines at powerup.
These JVMs are able to dynamically optimize the running code in real-time, far beyond what could be achieved by C or C++ compilers, without any performance degradation.
A Java kernel would likely run at least 50 times faster than the very best hand-coded assembler, and since the language is completely type-safe and doesn't implement dangerous legacy language features such as pointers or multiple inheritance, it would be unlikely to ever crash.
Re:GCC compatibility - Time to move to Java? (Score:4, Funny)
A Java kernel would likely run at least 50 times faster than the very best hand-coded assembler
I'm going to have to agree with you here. However, with all the major browser producers concentrating on JavaScript speed recently, I'd say it's much better to use JavaScript instead of plain Java. Think about it: JavaScript is where the speediness is. Also, since almost every browser supports it, you could just boot the kernel using any browser. This could potentially get the kernel out of the hands of that bunch of self-righteous, elitist Linux hackers who are currently totally disconnected from users like you and me.
Re: (Score:3, Informative)
Re:GCC compatibility - Time to move to Java? (Score:4, Informative)
> Java is not a "systems language", meaning you don't write operating systems and systems-level code in it, for very good reasons.
Funny, 'cause Sun already did that like 13 years ago.
> One of them being: name me a processor that can run Java bytecode natively.
The ARM9E.
Re: (Score:2)
I understand those words but not their meaning together.
Re:GCC compatibility (Score:4, Informative)
(Hint: there is no standard way)
Re:GCC compatibility (Score:5, Informative)
There isn't one, so what you do is use pragmas (I remember #pragma pack(1)) or attributes (__attribute__((packed)) or something similar).
Of course they're compiler-specific, but there's no reason code can't be wrapped in defines or typedefs to stop compiler-specific stuff from leaking into real production code nested 10 directories down in a codebase with 40,000,000 lines.
Linux does an okay job of this. But since coders usually reference the compiler manual when using these esoteric pragmas and types, they are usually told "this is specific to GCC" (GCC's documentation is good about flagging this), so they should be wrapping them by default to keep their application portable and maintainable across future compilers (especially since attribute names and behavior have changed between GCC releases, let alone between compilers).
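For illustration, a minimal sketch of the kind of wrapper being described (the macro name and header are made up for the example; GCC and ICC really do both accept the attribute shown):

    /* packed.h - hypothetical wrapper: compiler-specific syntax lives in ONE place */
    #if defined(__GNUC__) || defined(__INTEL_COMPILER)
    #  define PACKED __attribute__((packed))  /* GCC syntax; ICC accepts it too */
    #else
    #  define PACKED                          /* fallback: add your compiler's spelling here */
    #endif

    /* the rest of the codebase uses the wrapper, never the raw attribute */
    struct wire_header {
        unsigned char type;
        unsigned int  length;
    } PACKED;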
What usually nukes it (and why linux-dna has a compiler wrapper) is that they hardcode options and do other weird GCC-specific things. This is not because they are lazy but because the Linux kernel has a "we use GCC, so support that; who gives a crap about other compilers?" development policy, and it usually takes some convincing - or a fork, as linux-dna is - to get these patches into mainline.
Re: (Score:2)
Like the kernel authors, I could do everything you say and still have my code break in a compiler that does things differently.
The only real solution is for compilers to all start doing things in a fairly standard way. Which leads us back to the great-grandparent's suggestion...
Re:GCC compatibility (Score:4, Interesting)
I find it hard to believe that the Linux kernel developers have never heard of ICC. Or, to take other examples, never used CodeWarrior, or XL C (IBM's PPC compiler, especially good for POWER5 and Cell), or DIAB (the Wind River Compiler, or whatever they call it now). Or even Visual C++. Personally, I've had the pleasure of using them all. They all do things differently, but it's manageable even when a development team uses more than one: I once worked on a team where most of the developers had DIAB, but they didn't want to pay for licenses for EVERYONE, so it went to the team leaders and release-engineering guys, and the rest of us got GCC instead. We had to be mindful not to break the release builds, and with that work ethic everything went pretty much fine all round.
All of them at one time produced (and some still produce) much better code, with much better profiling, than GCC, and they are used a lot in industry. If the commercial compiler doesn't do what you want or is too expensive, GCC is your fallback. Linux turns this on its head because it "wants" to use as much free GNU software as possible, but I don't think the development process should be so inhibited as to ignore other compilers, especially considering they are generally far better optimized for a given architecture.
As a side note, it's well known that gcc 2.95.3 generated much better code on a lot of platforms, yet some apps out there refuse to compile with gcc 2.x (I'm looking at rtorrent here, mainly because it's C++ and gcc 2.x C++ support sucks; another reason why commercial compilers are still popular :) and some only build with other versions of gcc, with patches flying around to make sure things build with the vast majority. Significant development time is already "wasted" on differences within the SAME compiler, so putting ICC or XL C support in there shouldn't be too much of a chore, especially since they are broadly GCC-compatible anyway.
Like the article said, most of the problem (and the reason they have the wrapper) is nuking certain gcc-specific and arch-specific arguments to the compiler, with the internal code mostly making sure Linux has those differences implemented. There is a decent white paper on it here [intel.com]. The notes about ICC being stricter in syntax checking are enlightening: if you write some really slack code, ICC will balk, while GCC will happily chug along generating whatever code it likes. It would probably be better all round (and might even improve the code GCC generates; note the quote about GCC only "occasionally" doing the "right" thing when certain keywords are missing) if Linux developers were mindful of these warnings. But as I've said elsewhere in this thread, Linux developers need some serious convincing to move away from GCC (I've even heard a few say "well, you should fix GCC instead" rather than take a patch to fix their code to work in ICC).
Re: (Score:2)
> I've even heard a few say "well, you should fix GCC instead"
Well, what's wrong with that? If GCC is parsing "bad" code without giving warnings, then GCC should be fixed. The bad code can then be fixed to avoid those warnings.
Re: (Score:2)
Why they don't do it:
1) Historic reasons. The project started small with GCC and the code has been building on that ever since. Through this historic code it became increasingly difficult to use non-GCC compilers.
2) Compatibility. As in no. 1, the project started with GCC, and supporting anything else requires a bit of rewriting in core functions that people with non-gcc compilers won't always understand. It can also be difficult to find alternatives for stuff that isn't in the C/C++ standard but can be done with compiler-specific extensions. It would bec...
Re:GCC compatibility (Score:5, Interesting)
Unfortunately, writing an OS inherently requires functionality not addressed in the C standards. If you stick only to behavior well defined by the ISO C standards you *can* *not* write a full kernel. Doing stuff that low-level requires occasional asm, and certainly some stuff dependent on a particular hardware platform. I think that being as compiler-portable as it is hardware-portable should certainly be a goal; the ability to build on as many platforms as possible helps shake out bugs and bad assumptions. But just saying "clean it up to full C99 compliance, and don't do anything that causes undefined behavior" would ignore the actual reality of the situation, and makes about as much sense as porting the whole kernel to Java or Bash scripts.
Re: (Score:2)
See my other reply on the topic.
I fully understand the limitations of the C99 standard, but there are also ways to stop your code being tied to one compiler, which it seems a lot of coders simply do not bother to use because supporting GCC is their only goal.
Re: (Score:2, Informative)
Amazing. You have no idea what you're talking about :D
C99 doesn't stop you writing interrupt code OR threaded code.
Re:GCC compatibility (Score:5, Informative)
On the contrary - there is no support for threads or interrupts whatsoever in C99. Sure, there's pthreads and the like - but those are not part of C99, nor can you implement them in pure C99.
C itself (all versions) tries very hard to avoid tying itself to any specific hardware or OS. It even supports weird things like platforms with more than 8 bits in a char, or with reserved bits in their integers. But as a result, it has only the bare minimum featureset common across all platforms imaginable, and this is why it's very hard to write anything useful with only pure C. (No networking, no listing the contents of a directory, no executing any other programs except via system()...)
For most userland applications, C plus some OS-dependent libraries are good enough, of course. Things like the POSIX API can't be implemented in regular C (at some level you have an assembly call to the OS's syscall interface), but if you treat it as opaque, no problem.
But for an OS kernel, things aren't that easy. In the quest for high performance, Linux does all kinds of neat hacks, including things like inlining assembly code into C functions - and later rewriting that code on the fly (google for 'smp alternatives' for more information). It also makes use of CPU-level atomic operations - and exactly which ones are available depend on the architecture. Because of these kinds of hacks, which produce noticeable speed improvements, it is utterly impossible to stick purely to standards like C99.
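To make that concrete, here is a minimal sketch in the style of the kernel's x86 atomic ops (not the actual kernel source); nothing in C99 can express the lock prefix or the memory constraint:

    /* an atomic increment via GCC-style inline assembly (x86) */
    typedef struct { volatile int counter; } atomic_t;

    static inline void atomic_inc(atomic_t *v)
    {
        /* "lock" makes the read-modify-write atomic across CPUs;
           "+m" tells the compiler the memory operand is both read and written */
        __asm__ __volatile__("lock; incl %0" : "+m" (v->counter));
    }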
Re: (Score:2)
Re: (Score:2)
There definitely are standards for executables; otherwise, how would a processor know what the program wanted it to do?
As in, the folks at gcc don't reverse-engineer a Pentium chip to figure out how to get it to add two numbers together. They look up the binary standard for "x86" or the like.
Re:GCC compatibility (Score:5, Insightful)
:)
I think the point is that ICC has been made "gcc compatible" in certain areas by defining a lot of pre-baked defines, and accepting a lot of gcc arguments.
In the end, though, autoconf/automake and cmake, and even a hand-coded Makefile, can easily abstract the differences between compilers, so that -mno-sse2 is used on gcc and --no-simd-instructions=sse2 on some esoteric (non-existent, I made it up) compiler. I used to have a couple of projects which happily ran on a BSD or GNU userland (BSD make, GNU make, jot vs. seq, gcc vs. icc vs. Amiga SAS/C :) and all built fairly usable code from the same script automatically, depending on the target platform.
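A configure-style sketch of that abstraction (the second compiler and its flag are, as I said, invented for the example):

    # choose per-compiler flags once, in one place
    case "$CC" in
        gcc)      SIMD_FLAGS="-mno-sse2" ;;                    # real GCC flag
        esoteric) SIMD_FLAGS="--no-simd-instructions=sse2" ;;  # made up
        *)        SIMD_FLAGS="" ;;
    esac
    $CC $SIMD_FLAGS -c foo.c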
The Linux kernel's over-reliance on GCC and its hardcoded options means you have to port GCC to your platform first, before you can use a compiler that may already have been written by/for your CPU vendor (CodeWarrior was always a good example, but that's defunct now).
Of course there is always configure-script abuse; just like how you can't build MPlayer for a system with fewer features than the one you're on without hand-adding 30-40 options to force everything back down.
A lot of it comes down to laziness: using what you have and not considering that other people may have different tools. And of course the usual Unix philosophy that while you may never need something, it should be installed anyway just because an app CAN use it (I can imagine using a photo application for JPEGs alone, but it will still pull in every image library via the dynamic linker at load time, and all those plugins will be spread across my disk).
Re: (Score:2)
GCC itself is rather prolific... Is there any noteworthy platform that it doesn't already support?
Re: (Score:3, Funny)
GCC itself is rather prolific... Is there any noteworthy platform that it doesn't already support?
Commodore 64?
Re: (Score:2)
None, but you should think about the hurdles of porting it to a non-POSIX operating system like AmigaOS (yes, they did...) or MorphOS (which is like AmigaOS, but the GCC port supports a bunch of craaazy extra options), and OMG think of the children!!!!!!!
Both of those had to rely on a special portability library (a newlib port in the first instance, and the ancient "ixemul" library in the second) to get it to work, notwithstanding the actual platform features and ABI support.
Maybe they're not noteworthy.
Re: (Score:2)
AVR chips are brilliant for low level embedded stuff.
Full GCC support of course.
Re: (Score:2)
GCC versus ICC (Score:2)
ICC is hands-down the best C++ compiler for x86 and x64 from a performance perspective. GCC isn't even in the running on that front. All GCC has going for it is that it's "free".
Given that we get only a few percent benefit from using ICC over GCC after heavy tuning, I'd say that GCC gets the job done pretty well for a general-purpose, multiplatform compiler. And one thing ICC totally sucks at is compiling speed: ICC takes two to three times longer than GCC to compile and link code. When I need to test a feature today, you can guess which compiler I reach for, especially when GCC can already take over an hour to complete some of the code bases here.
Cheers,
Toby Haynes
Re: (Score:2)
A lot of it comes down to laziness: using what you have
No, it's called using your tools to their fullest capacity.
Re: (Score:2)
There's no reason you can't write your code to support all the tools you could possibly use, to their fullest capacity. No reason at all, except when one tool doesn't do something the other does that you find important.
I very much doubt any C compiler shipping these days misses the features required to build the kernel, but the kernel developers only care about adding GCC options and GCC pragmas and attributes, in spite of those who would prefer to use some other compiler.
Not only icc (Score:2)
Last year Rob Landley was working on getting the Tiny C Compiler [bellard.org] to build the kernel unmodified (again, by adding gccisms to tcc); here's an OLS video of Landley talking about changing tcc to compile the kernel [free-electrons.com]. Alas, from what I gather this effort has stalled for now.
It is unlikely that you will see the kernel adopting anything that makes the build process much more complicated. Operating system glue layers (e.g. abstractions in code for drivers that are supposed to run on other platforms) are already...
Re: (Score:2)
yeah, on a comments thread, to some wanker who won't even get a Slashdot account...
in the grand scheme of things, not very important, wouldn't you say?
Re: (Score:3, Insightful)
Re:GCC compatibility (Score:4, Informative)
To a large extent, they have. ICC really no longer has the performance lead over gcc that it once did. There was absolutely a time when the difference was consistent and significant, but a lot has changed since gcc 2.95, back when egcs existed. The 4.x branch in particular has been about improving the optimisation capabilities of the compiler. These days, I generally recommend just going with gcc to anybody who asks me.
Re:GCC compatibility (Score:5, Informative)
Depends on the CPU. gcc has reasonable performance on x86, but on ia64 or ppc the vendor-supplied compilers have a big advantage. Even on x86, icc leads by a considerable margin in some areas, especially on very new processors.
Re: (Score:2)
Re: (Score:3, Funny)
... on their own hardware (Score:2)
And that's a problem, and the reason this really isn't all that exciting.
Re: (Score:2)
The performance gap is even bigger on IA64 too...
Portability.. (Score:5, Insightful)
IMHO This is a great development, for one important reason.
Portability of the kernel.
GCC is a great compiler, but relying on it excessively is a bad thing for the quality of kernel code: the wider the range of compilers used, the more portable and robust the code should become.
I know there will be the usual torrent of its-just-not-open-enough rants, but my reasoning has nothing to do with that, it is simply healthy for the kernel to be compilable across more compilers.
It also could have interesting implications with respect to the current GCC licensing 'changes' enforcing the GPL on the new plugin structures, etc.
GCC is a wonderful compiler, but it has in the past had problems with political motivations rather than technical ones, and moves like this could help protect against those in the future (some of us still remember the gcc -> pgcc -> egcs -> gcc debacle).
Of course no discussion of compilers should happen without also mentioning LLVM, another valuable project.
Re:Portability.. (Score:5, Insightful)
Prove it.
The opposite (relying on GCC is a good thing for code quality) seems obvious to me. The intersection of GCC's and ICC's features is smaller than GCC's alone, so I would assume that targeting the bigger set affords greater flexibility in expression; as a result, the code would be cleaner and easier to read.
Targeting only the intersection of ICC and GCC may result in compromises that confuse or complicate certain algorithms.
Some examples from the linked application include: [...]
I cannot fathom why anyone would think these things are "good" or "healthy", and I hope you can defend this non-obvious and unsubstantiated claim.
When pgcc showed up, it caused lots of stability problems, and there were major distribution releases that made operating a stable Linux system very difficult: 2.96 sucked badly.
The fact that gcc2 still outperforms gcc4 in a wide variety of scenarios is evidence that this wasn't good for technical reasons, and llvm may yet prove RMS's 'political' hesitations right after all.
I'm not saying gcc4 isn't better overall, and I'm not saying we're not better for being here. I'm saying it's not as clear as you suggest.
Re: (Score:2)
Try compiling your real-mode C code in GCC and get back to me.
Re: (Score:2)
How is this relevant?
Re: (Score:3, Insightful)
Oh, wait a second, I see the problem here.
You are a moron.
What exactly do you think happens when GCC changes behavior (as it has done in the past, many times) within the C spec?
Perhaps we had better freeze on version x.y.z of GCC?
The same would apply to, for example, assumptions about branch prediction: gcc can, and quite probably one day will, change behavior. Do you really want major features of the kernel to change behavior when that happens?
The good effect this will have when addressed properly (and remember w...
Re: (Score:3, Informative)
Oh, wait a second, I see the problem here.
You are a moron.
First up, personal attacks on the parent do not an argument prove; all they do is lessen your credibility.
By supporting a range of compilers we make the kernel MORE robust to such changes, and these are both highly competent compilers, so the 'intersection' of features is actually most of the C/C++ specs.
Of course the intersection of features is the specs: they are the only standardized thing that makes it C. But as has been said, C leaves a LOT to the implementer in order to stay flexible; the standard does not specify everything, and operating systems run at such a low level that much of what they deal with is NOT covered in said specs.
Furthermore, as for being 'more robust' to breakage whe...
dunno exactly (Score:2)
I might be completely wrong but:
RMS felt that making it easy to produce plugins for GCC would be a very bad idea, since closed-source software could exploit this. We really want GCC improvements to be free software, so his hesitation has some merit.
Exactly how this relates to LLVM, I dunno...
Re: (Score:2)
LLVM can be (and is) used to subvert GCC's GPL by making it possible to "compile" C code into closed-source proprietary bytecodes. See "Alchemy" for an example of Adobe being an immoral slimeball.
I'd like to add a slimeball exception to software I've written, preventing Adobe from benefitting, and yet I can't bring myself to be immoral just to combat immorality.
Re: (Score:2)
Then work on your reading comprehension. I said no such thing.
I said it isn't obvious that supporting other compilers is a good thing, and that it seems obvious that actively supporting other compilers (i.e. "more work") has some serious costs that were being underrepresented.
Re-read my post. Nowhere did I suggest anyone stop doing what they were doing.
40% faster kernel, but what overall performance? (Score:4, Interesting)
Ingo A. Kubblin is quoted as saying: [...]
Is that 8-9% overall speedup of applications, or just of kernel tasks?
Re: (Score:2)
I would imagine that it means for the kernel. We would then need to factor in how much time user applications spend in the kernel. Anything that is I/O-intensive is kernel-intensive. Anything that is malloc-intensive may be kernel-intensive if you're using a VM-based memory pool rather than a pre-allocated one.
I'm also wondering how this would compare to using Cilk++ and #defining the few keywords it has to the standard keywords when using vanilla GCC or ICC.
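The Cilk++ trick mentioned here is the "serial elision": #define the keywords away and you get a plain serial program. A sketch (the guard macro is hypothetical; check your compiler's docs for the real predefined symbol):

    /* build serially under vanilla GCC/ICC, in parallel under Cilk++ */
    #ifndef BUILDING_WITH_CILK
    #  define cilk_spawn        /* spawn becomes an ordinary function call */
    #  define cilk_sync         /* sync becomes a no-op */
    #  define cilk_for   for    /* parallel loop becomes a plain loop */
    #endif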
Perhaps there should be a table showing the relat...
Re:40% faster kernel, but what overall performance (Score:4, Interesting)
If your program is malloc-intensive and you care about performance, you may as well just use a memory pool in userland. It is very bad practice to depend upon specific platform optimisations when deciding which optimisations not to perform on your code. Then you move to another operating system like FreeBSD or Solaris and find your assumptions were wrong and you must now implement that optimisation anyway.
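A minimal sketch of such a userland pool (illustrative only: fixed-size blocks, no locking, no growth, no double-free checks):

    #include <stddef.h>

    /* carve fixed-size blocks out of one static arena and recycle them
       through a free list; after warm-up, no kernel call per allocation */
    #define POOL_BLOCKS 1024
    #define BLOCK_SIZE  64

    static char  arena[POOL_BLOCKS * BLOCK_SIZE];
    static void *free_list[POOL_BLOCKS];
    static int   free_top;

    static void pool_init(void)
    {
        for (free_top = 0; free_top < POOL_BLOCKS; free_top++)
            free_list[free_top] = arena + free_top * BLOCK_SIZE;
    }

    static void *pool_alloc(void)
    {
        return free_top > 0 ? free_list[--free_top] : NULL;
    }

    static void pool_free(void *p)
    {
        free_list[free_top++] = p;
    }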
Re: (Score:2)
We would then need to factor in how much time user applications spend in the kernel. Anything that is I/O-intensive is kernel-intensive.
What do you mean? I don't think icc will speed up my hard drive.
Re: (Score:2)
It won't speed up the hard drive, but it should reduce the latency of a context switch (something like 21 microseconds, isn't it?) and it should also reduce the latency involved in going through the various layers of the kernel.
Yes, this isn't much in comparison to the speed of the drive, but that's not the point. I didn't say it would speed it up by a lot, merely that it would speed up.
I don't know what the latency is within the kernel in the VFS layer or within the different filesystems (ignoring mechanic...
My post is 5-9% faster to read overall... (Score:2, Interesting)
Looking at Amdahl's law (golden oldie here): how much time does a PC spend on kernel tasks these days?
It's a Bad Idea. (Score:4, Funny)
You see, I'm a consultant and am paid by the hour.
Re:It's a Bad Idea. (Score:4, Funny)
Re: (Score:2)
compilers? (Score:2, Insightful)
Re: (Score:2)
Does GCC run faster if compiled with ICC?
That would take the biscuit.
Final gcc should be no faster with icc (Score:2)
So the general answer is no, it will not be faster. This is because, as a final step (the so-called stage3), GCC compiles itself with itself [google.co.uk]. This assumes icc isn't malicious (yes, I know: Trusting Trust [bell-labs.com] and Countering Trusting Trust [dwheeler.com], etc.).
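For reference, that three-stage dance is what GCC's standard bootstrap target does (a sketch; paths and options are placeholders):

    ../gcc-src/configure --prefix=/opt/gcc
    make bootstrap    # stage1 is built with $CC (icc, say),
                      # stage2 with stage1, stage3 with stage2;
                      # stage2 and stage3 objects are compared for identity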
Re: (Score:2)
Well, in that case: is the GCC created in stage 1 of compiling (the one compiled using another compiler, in this case ICC) faster than the stage 2 and stage 3 compilers (created by the ICC-compiled GCC and the GCC-compiled GCC, respectively)?
Re: (Score:2)
It's quite possible, but it may also produce unexpected/unwanted results (remember, it only has to be good enough to compile a compiler).
Re: (Score:3, Interesting)
I can't judge, because my experience with ICC is minimal. GCC is constantly improving, but I feel it concentrates more on platform support than on performance: the GCC team has to work on ARM/MIPS/SPARC/whatever, while ICC only needs to work on x86.
So I'm not surprised to see GCC falling behind Intel in x86 performance. In fact, only recently did GCC begin to support local variable alignment on the stack, which I think is a basic optimization technique. (See the 4.4 pre-release notes http://gcc.gnu.org/gcc-4.4/ch [gnu.org]...)
Re: (Score:3, Informative)
The GCC team has to work on ARM/MIPS/SPARC/whatever while ICC only need to work on x86.
ICC supports IA-32, Itanium 1 & 2, x86-64, and XScale. Not that it kicks too much of a leg out from under your argument, but if you are going to argue the point you should at least make it accurate. Ah yeah, almost forgot to mention all the extended instruction sets too: SSE, SSE2, SSE3, MMX, MMX2, etc.
Re: (Score:2)
ICC is better at optimization than GCC.
Will this kernel run fast on AMD processors? (Score:5, Interesting)
A few years ago someone figured out that Intel's compiler was engaged in dirty tricks: it inserted code to cause poor performance on hardware that did not have an Intel CPUID.
http://techreport.com/discussions.x/8547 [techreport.com]
But perhaps they have cleaned this up before the 10.0 release:
http://blogs.zdnet.com/Ou/?p=518 [zdnet.com]
steveha
Re:Will this kernel run fast on AMD processors? (Score:5, Interesting)
A few years ago someone figured out that Intel's compiler was engaged in dirty tricks: it inserted code to cause poor performance on hardware that did not have an Intel CPUID.
It wasn't necessarily malicious; all the compiler did was default to a "slow but safe" mode on CPUIDs it did not recognize. Intel's reasoning was that they only tweaked the code for CPUs they had qualified the compiler against, and seeing as how they were Intel, they were not particularly interested in qualifying their compiler against non-Intel chips. In hindsight, what they should have done is add an "I know what I'm doing, dammit!" compilation flag that would enable the optimizations anyway.
Re: (Score:2, Insightful)
It was completely intentional. Intel's CPUID protocol defines how to determine the capabilities of a CPU, and AMD follows this protocol. Intel could have checked the CPUID feature bits for the level of SSEx support, etc. Instead, they checked for the "GenuineIntel" string before enabling support for the extra instructions that speed up many diverse activities (e.g. copying memory).
Perhaps your gullibility meter needs recalibration.
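For the curious, checking the feature bits instead of the vendor string takes only a few lines with GCC's <cpuid.h> helper (a sketch; it behaves the same on AMD and Intel parts):

    #include <cpuid.h>   /* GCC wrapper around the CPUID instruction */
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* leaf 1 returns the feature flags: EDX bit 25 = SSE, bit 26 = SSE2 */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;

        printf("SSE:  %s\n", (edx & (1u << 25)) ? "yes" : "no");
        printf("SSE2: %s\n", (edx & (1u << 26)) ? "yes" : "no");
        return 0;   /* no "GenuineIntel" check needed anywhere */
    }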
Re:Will this kernel run fast on AMD processors? (Score:4, Insightful)
Ok, I'll bite. By your logic, Intel should: [...]
While I agree that something like --optimize_anyway_i_am_not_stupid would have been a good idea, does it make more sense for Intel to spend money and time making their competition faster? You'd need to make a lot of assumptions to believe that optimizations for one CPU will work well for another, even one from the same manufacturer. Besides, doesn't AMD have their own compiler?
Re:Will this kernel run fast on AMD processors? (Score:5, Interesting)
It wasn't necessarily malicious
Like Hell it wasn't. Read this and see if you still believe it wasn't malicious.
http://yro.slashdot.org/comments.pl?sid=155593&cid=13042922 [slashdot.org]
Intel put in code to make all non-Intel parts run a byte-by-byte memcpy().
Intel failed to use Intel's own documented way to detect SSE, but rather enabled SSE only for Intel parts.
Intel's C compiler is the best you can get (at least if you can trust it); it produces faster code than other compilers, so clearly the people working on it know what they are doing. How do you explain these skilled experts writing a byte-by-byte memcpy() that was "around 4X slower than even a typical naive assembly memcpy"?
People hacked the binaries so that the Intel-only code paths would always be taken, and found that the code ran perfectly on AMD parts. How can you then believe Intel's claim that they were only working around problems?
I'm pissed at Intel about this. You should be too.
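For a sense of the scale involved, compare a byte-at-a-time copy with even the simplest word-at-a-time version (a toy illustration, not Intel's actual code):

    #include <stddef.h>

    /* one byte per iteration - what the non-Intel path reportedly got */
    static void copy_bytes(char *dst, const char *src, size_t n)
    {
        while (n--)
            *dst++ = *src++;
    }

    /* one machine word per iteration - several times fewer memory ops,
       before SSE even enters the picture; assumes n is a multiple of
       sizeof(long) and both pointers are aligned, to keep the toy short */
    static void copy_words(long *dst, const long *src, size_t n_words)
    {
        while (n_words--)
            *dst++ = *src++;
    }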
Re: (Score:2)
It's their compiler; they are damn well allowed to do what they want with it. Call me when AMD pours that kind of resource into having their own compiler.
Of course, developers are in turn free to ignore the compiler, and hence this situation righted itself pretty quickly and naturally.
I wonder, do you consider RMS's current moves on GCC to also be 'malicious', since they could in effect result in lower performance for end users than is possible, and are defined along political lines?
Re: (Score:3, Insightful)
It's their compiler; they are damn well allowed to do what they want with it. Call me when AMD pours that kind of resource into having their own compiler.
Sure, they can do what they want. But it's generally a bad idea to lie about what you've done once you're caught red-handed. You go from losing a lot of respect to losing nearly all of it in the minds of many customers.
Re:Will this kernel run fast on AMD processors? (Score:4, Informative)
> It's their compiler; they are damn well allowed to do what they want with it. Call me when AMD pours that kind of resource into having their own compiler.
ARM put money into GCC. That's far better than them trying to make their own compiler.
Re: (Score:2)
ARM may have put money into GCC, but ARM also sells their own compiler [arm.com]: the RealView Development Suite (RVDS).
As for why ARM would do both: RVDS is a high-performance ARM compiler, and it's ARM's own code, so they'll put a lot of optimizations into it. (Ignore the ARM/GNU compatibility...
Mod parent down (Score:2)
It's their compiler; they are damn well allowed to do what they want with it. Call me when AMD pours that kind of resource into having their own compiler.
This sort of "a company can do anything it wants with its own products" comment appears almost every time someone mentions anti-competitive behavior, and then people explain that no, a company should not be allowed to leverage a monopoly position to further entrench itself. Should the government allow practices like dumping, the market would come to be dominated by a very small number of mega-corporations, ruining the economy.
Seriously, are you a troll?
Re: (Score:2, Informative)
Nope, they have not changed that, and I think it is quite bad behavior from Intel.
However, they do _not_ insert _bad_ code. What they do is prevent code optimized for the newest Intel CPUs from running on non-Intel CPUs, even if all the instructions used are present. I think -xW (use SSE and SSE2, optimize for Pentium 4) is the highest level that will run on AMD.
However, in almost all cases the Intel compilers will still produce the fastest binaries on AMD, not only compared to GCC but also compared to other commerci...
SSE, SSE2, SSE3, SSE4, etc. (Score:2)
I've always wondered whether anyone has spent time developing kernel optimizations that kick in when specific instruction sets are detected.
letme google that for ya.
Not much:
http://www.google.com/codesearch?q=SSE2+package%3Akernel.org [google.com]
But do you really want to?
Re: (Score:2)
Both Intel and AMD have contributed code before. You'd figure that if anyone knows how to optimize code for specific processor instruction sets, it would be them. It would be a neat way for them to contribute.
This is a good point (Score:2)
There are a few spots in the kernel that do use hand-crafted SSE assembly (a quick glance says RAID calculation is one area [linux.no] (also here [linux.no]) and a particular crypto routine [linux.no] is another), but it is quite rare. Up until SSE4, SSE was really targeted at multimedia applications containing a lot of floating-point arithmetic. Floating point is generally avoided within the kernel, so the maintenance pain of crafting an SSE-optimised routine alongside a generic C version would not be worth it. Seemingly when you go to w...
Re: (Score:3, Informative)
It severely cripples maintenance. Any optimisation, especially one that forks you into multiple parallel implementations (raw C, x86 asm, amd64 asm, amd64 asm with SSE4, PPC, ...), has to be carefully weighed against its extra maintenance cost.
The parts that do benefit from optimisation, such as RAID parity calculation and symmetric encryption, are already optimised. At any rate, I think the kernel developers know a lot more about this than you or I do.
Re: (Score:2)
Unimpressed with ICC (Score:5, Interesting)
We were not impressed.
Re: (Score:2)
I call BS. There are cases where GCC can beat ICC, but there are many more where ICC is significantly better.
My bet: either you are full of BS, or you 'tried' a rather specific and limited codebase.
I also suspect your codebase was developed under gcc and then just thrown at icc? Hmmmm?
ICC is a VERY impressive compiler; GCC is a quite good compiler. We are lucky to have both (and then a few other options as well).
Re: (Score:2)
Instead of aggressively attacking and answering in generalities ('there are many cases where ICC is better'), care to explain how you formed your opinion?
Re: (Score:2)
--
2*7*68213 [mazes.com]
This is ancient (Score:2, Insightful)
This kernel is so ancient that any possible performance gains are outweighed by newer kernels' performance improvements, bug fixes, and improved driver support. Plus, why would someone want to toss away their freedom by using a non-free compiler? Also, does the Intel compiler work with AMD processors?
There is so much against this that it is useless. Until Intel open-sources the compiler, it works with up-to-date kernels, and it works on all x86- and x86_64-compatible hardware (I'm not sure if this is a problem), I'm not interested.
Re: (Score:2)
How can you throw away your freedom by compiling free software source code with a non-free compiler? That makes no sense at all.
Thank you, I look forward to trying this. (Score:2)
Re: (Score:2, Informative)
I'm afraid the boost in kernel code won't help you much. Since you're doing fluid physics, I'd guess the hotspots are in floating-point math, and your code doesn't context-switch often. In that case, kernel speed isn't that important.
Well, I'm just saying. I hope I'm wrong :)
Re: (Score:3, Insightful)
It depends. If the system is distributed, the hotspots (i.e. performance bottlenecks) could quite easily be in network latency and throughput, something that could reasonably be impacted here.
Of course, if it's not, you are 100% right. However, don't underestimate the proportion of CPU time spent in the kernel in some situations (databases and distributed apps, for example).
ROUTING (Score:2)
The HPC and gaming communities probably won't care much about this, aside from the tweakers who spend $500 to overclock a $200 CPU to perform like a $400 CPU. The vast majority of workloads spend very little time in the kernel. The glaring exception is the network stack, where you can have a lot of rather CPU-intensive packet mangling, routing, firewalling, IPsec tunneling, and header processing done entirely in kernelspace. Ever tried saturating a 10 Gbit Ethernet interface? If you don't do some...
Re: (Score:2)
hum... not that impressive ...
Re: (Score:2)
Other than every supercomputer on the planet worth talking about, that is...
Re: (Score:2)
Re: (Score:2)
Wow, one out of 500... awesome...
Re: (Score:3, Informative)
That's being done too. GCC 4.3 with Profile-Guided Optimisation is SWEET. I don't think plain PGO can be run on a kernel (though that would be an awesome project), but it would definitely close the gap between ICC and GCC. ICC's PGO is not as good; or rather, ICC itself is already better at making the kind of fuzzy predictions that PGO makes definite.
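For userland code, the GCC PGO dance is just two builds with a training run in between (these are real GCC flags; wiring this into a kernel build is the hard part):

    gcc -O2 -fprofile-generate -o app app.c    # instrumented build
    ./app < representative_workload            # run it; writes .gcda profile data
    gcc -O2 -fprofile-use -o app app.c         # rebuild guided by the profile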
Re: (Score:2)
Get back to us when the US missile defense system has actually destroyed a foreign missile (assuming that Slashdot is still around then).
Re: (Score:2)
It's not designed to do that. It's designed to suck up as much money as possible whilst simply threatening to down a missile.
Re: (Score:2)
As a certified and accredited software engineer, I think it's time for Linux to be re-written in Javascript. The competition between Chrome, Firefox, IE and Safari has resulted in incredibly fast Javascript interpreters, and if Axl Torvalds mandates a switch to JS, the kernel could automatically take advantage of these improvements. After all, the OS and the web are becoming one, and within 10 years all applications will be in the cloud, delivered via the raintubes.
That way Apple will never be able to block you from booting Linux on the iPhone.
-1 redundant? (Score:2)
Sometimes a thing needs to be said more than once or twice.
In this case, maybe it needs to be said several thousand times.