High Performance Linux Kernel Project — LinuxDNA

Thaidog submits word of a high-performance Linux kernel project called "LinuxDNA," writing "I am heading up a project to get a current kernel version to compile with the Intel ICC compiler and we have finally had success in creating a kernel! All the instructions to compile the kernel are there (geared towards Gentoo, but obviously it can work on any Linux) and it is relatively easy for anyone with the skills to compile a kernel to get it working. We see this as a great project for high performance clusters, gaming and scientific computing. The hopes are to maintain a kernel source alongside the current kernel ... the mirror has 2.6.22 on it currently, because there are a few changes after .22 that make compiling a little harder for the average Joe (but not impossible). Here is our first story in Linux Journal."
This discussion has been archived. No new comments can be posted.

  • GCC compatibility (Score:2, Interesting)

    by psergiu ( 67614 ) on Thursday February 26, 2009 @07:04PM (#27005367)
    Why don't they try to make ICC fully GCC compatible, so we can recompile EVERYTHING with ICC and get the 8-9% (up to 40%) performance gain?
  • by whoever57 ( 658626 ) on Thursday February 26, 2009 @07:08PM (#27005429) Journal
    Since all the userland code is still compiled with GCC, what overall performance improvement will this bring?

    Ingo A. Kubblin is quoted as saying:

    "... boost up to 40% for certain kernel parts and an average boost of 8-9% possible"

    Is that an 8-9% overall speedup of applications, or just of kernel tasks?

  • by mattaw ( 718560 ) on Thursday February 26, 2009 @07:18PM (#27005563) Homepage
    ...and 40% faster in parts. FACTS - give me some context to judge if this is good or bad.

    Looking at Amdahl's law (golden oldie here) how much time does a PC spend on kernel tasks these days?

  • by steveha ( 103154 ) on Thursday February 26, 2009 @07:38PM (#27005817) Homepage

    A few years ago someone figured out that Intel's compiler was engaged in dirty tricks: it inserted code to cause poor performance on hardware that did not have an Intel CPUID.

    But perhaps they cleaned this up before the 10.0 release.


  • by Jah-Wren Ryel ( 80510 ) on Thursday February 26, 2009 @08:08PM (#27006195)

    A few years ago someone figured out that Intel's compiler was engaged in dirty tricks: it inserted code to cause poor performance on hardware that did not have an Intel CPUID.

    It wasn't necessarily malicious: all the compiler did was default to a "slow but safe" mode on CPUIDs that it did not recognize. Intel's reasoning was that they only tweaked the code for CPUs that they had qualified the compiler against, and seeing as how they were Intel, they were not particularly interested in qualifying their compiler against non-Intel chips. In hindsight, what they should have done is add an "I know what I'm doing, dammit!" compilation flag that would enable the optimizations anyway.

  • Re:compilers? (Score:3, Interesting)

    by gzipped_tar ( 1151931 ) on Thursday February 26, 2009 @08:13PM (#27006253) Journal

    I can't judge because my experience with ICC is minimal. GCC is constantly improving, but I feel it concentrates more on platform support than performance. The GCC team has to work on ARM/MIPS/SPARC/whatever, while ICC only needs to work on x86.

    So I'm not surprised to see GCC falling behind Intel in x86 performance. In fact, only recently did GCC begin to support local variable alignment on the stack, which I think is a basic optimization technique. (See the 4.4 pre-release notes and search for the phrase "align the stack" on that page.)

  • by setagllib ( 753300 ) on Thursday February 26, 2009 @08:22PM (#27006363)

    If your program is malloc-intensive and you care about performance, you may as well just use a memory pool in userland. It is very bad practice to depend upon specific platform optimisations when deciding which optimisations not to perform on your code. Then you move to another operating system like FreeBSD or Solaris and find your assumptions were wrong and you must now implement that optimisation anyway.

  • by Anonymous Coward on Thursday February 26, 2009 @08:57PM (#27006783)

    It wasn't necessarily malicious

    Like Hell it wasn't. Read this and see if you still believe it wasn't malicious.

    Intel put in code to make all non-Intel parts run a byte-by-byte memcpy().

    Intel failed to use Intel's own documented way to detect SSE, but rather enabled SSE only for Intel parts.

    Intel's C compiler is the best you can get (at least if you can trust it). It produces faster code than other compilers. So, clearly the people working on it know what they are doing. How do you explain these skilled experts writing a byte-by-byte memcpy() that was "around 4X slower than even a typical naive assembly memcpy"?

    People hacked the binaries such that the Intel-only code paths would always be taken, and found that the code ran perfectly on AMD parts. How do you then believe Intel's claims that they were only working around problems?

    I'm pissed at Intel about this. You should be too.

  • Re:GCC compatibility (Score:5, Interesting)

    by forkazoo ( 138186 ) on Thursday February 26, 2009 @08:58PM (#27006793) Homepage

    Compilers shouldn't need to be compatible with each other; code should be written to standards (C99 or so) and Makefiles and configure scripts should weed out the options automatically.

    Unfortunately, writing an OS inherently requires making use of functionality not addressed in the C standards. If you stick only to behavior well defined by the ISO C standards you *can* *not* write a full kernel. Doing stuff that low level requires occasional ASM, and certainly some stuff dependent on a particular hardware platform. I think that being as compiler-portable as it is hardware-portable should certainly be a goal. The ability to build on as many platforms as possible certainly helps shake out bugs and bad assumptions. But just saying "clean it up to full C99 compliance, and don't do anything that causes undefined behavior" would be ignoring the actual reality of the situation, and makes about as much sense as porting the whole kernel to Java or Bash scripts.

  • Unimpressed with ICC (Score:5, Interesting)

    by Erich ( 151 ) on Thursday February 26, 2009 @09:19PM (#27007033) Homepage Journal
    We tried ICC on our simulator. The result: 8% slower than GCC. On Intel chips. Specifying the correct architecture to ICC.

    We were not impressed.

  • Re:GCC compatibility (Score:4, Interesting)

    by NekoXP ( 67564 ) on Friday February 27, 2009 @01:34AM (#27008553) Homepage

    I find it hard to believe that the Linux kernel developers have never heard of ICC. Or, to take other examples, never used CodeWarrior, or XL C (IBM's PPC compiler, especially good for POWER5 and Cell), or DIAB (now the Wind River Compiler, or whatever they call it these days), or even Visual C++. Personally I've had the pleasure of using them all. They all do things differently, but a development team can use more than one. I once worked on a team where the team leaders and release-engineering guys had DIAB (the company didn't want to pay for licenses for EVERYONE), while the rest of us got GCC. We had to be mindful not to break the release builds, and with that work ethic everything went pretty much fine all round.

    All of them at one time produced (and some still produce) much better code than GCC, have much better profiling, and are used a lot in industry. If the commercial compiler doesn't do what you want or is too expensive, GCC is your fallback. Linux turns this on its head because it "wants" to use as much free GNU software as possible, but I don't think the development process should be so inhibited as to ignore other compilers, especially considering they are generally far better optimized for a given architecture.

    As a side note, it's well known that gcc 2.95.3 generated much better code on a lot of platforms, yet some apps out there refuse to compile with gcc 2.x (I'm looking at rtorrent here, mainly because it's C++ and gcc 2.x C++ support sucks; this is another reason why commercial compilers are still popular) and some only build with other versions of gcc, with patches flying around to make sure things build with the vast majority. Significant development time is already "wasted" on compiler differences even with the SAME compiler, so putting ICC or XL C support in there shouldn't be too much of a chore, especially since they are broadly GCC compatible anyway.

    Like the article said, most of the problem, and the reason they have the wrapper, is to strip certain gcc-specific and arch-specific arguments to the compiler, and the internal code changes are mostly making sure Linux implements those differences. There is a decent white paper on it. The notes about ICC being stricter in syntax checking are enlightening: if you write some really slack code, ICC will balk, while GCC will happily chug along generating whatever code it likes. It would probably be better all round (and might even improve the code GCC generates; note the quote about GCC only "occasionally" doing the "right" thing when certain keywords are missing) if Linux developers were mindful of these warnings. But as I've said elsewhere in this thread, Linux developers need some serious convincing to move away from GCC (I've even heard a few say "well, you should fix GCC instead" rather than take a patch to fix their code to work with ICC).

  • by Anonymous Coward on Friday February 27, 2009 @09:18AM (#27010627)
    Hey, why doesn't anyone fix the notorious issues in the kernel first, before playing around with some fancy new compiler? Kernel performance has been broken for months, and nobody has fixed it yet. When was this reported? Last October! And last January! And it's still broken!
