
High Performance Linux Kernel Project — LinuxDNA

Thaidog submits word of a high-performance Linux kernel project called "LinuxDNA," writing "I am heading up a project to get a current kernel version to compile with the Intel ICC compiler, and we have finally had success in creating a kernel! All the instructions to compile the kernel are there (geared towards Gentoo, but obviously it can work on any Linux) and it is relatively easy for anyone with the skills to compile a kernel to get it working. We see this as a great project for high-performance clusters, gaming and scientific computing. The hopes are to maintain a kernel source alongside the current kernel ... the mirror has 2.6.22 on it currently, because there are a few changes after .22 that make compiling a little harder for the average Joe (but not impossible). Here is our first story in Linux Journal."
  • by NekoXP ( 67564 ) on Thursday February 26, 2009 @07:06PM (#27005395) Homepage

    Compilers shouldn't need to be compatible with each other; code should be written to standards (C99 or so) and Makefiles and configure scripts should weed out the options automatically.
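
        One common way to honor that (a hedged sketch in C; the COMPAT_NOINLINE name is invented for this example, not a real kernel macro) is to quarantine compiler-specific extensions in a single compatibility header, so everything else stays plain C99 and the build system only has to pick the right definitions:

        /* compat.h -- hedged sketch; COMPAT_NOINLINE is an invented name.
         * Idea: keep every non-standard extension behind one header so the
         * rest of the code stays plain C99 under any compiler. */
        #if defined(__GNUC__) || defined(__INTEL_COMPILER)
        #  define COMPAT_NOINLINE __attribute__((noinline))
        #else
        #  define COMPAT_NOINLINE /* no C99 equivalent; let the compiler decide */
        #endif

        /* callers never name a compiler, only the neutral macro */
        static COMPAT_NOINLINE int slow_path(int x)
        {
                return x * 2;
        }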

  • Portability.. (Score:5, Insightful)

    by thesupraman ( 179040 ) on Thursday February 26, 2009 @07:07PM (#27005407)

    IMHO This is a great development, for one important reason.

    Portability of the kernel.

    GCC is a great compiler, but relying on it excessively is a bad thing for the quality of kernel code: the wider the range of compilers used, the more portable and robust the code should become.

    I know there will be the usual torrent of it's-just-not-open-enough rants, but my reasoning has nothing to do with that; it is simply healthy for the kernel to be compilable across more compilers.

    It also could have interesting implications with respect to the current GCC licensing 'changes' enforcing GPL on the new plugin structures, etc.

    GCC is a wonderful compiler; however, it has in the past had problems with political motivations rather than technical ones, and moves like this could help protect against those in the future (some of us still remember the gcc->pgcc->egcs->gcc debacle).

    Of course no discussion of compilers should happen without also mentioning LLVM, another valuable project.

  • by Anonymous Coward on Thursday February 26, 2009 @07:11PM (#27005471)

    Too bad that C99 (et al.) isn't enough to write a high-performance kernel... not even close (no interrupts, no threads, etc., etc.).
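
        For illustration, a hedged sketch (not kernel source; the function name is made up): masking interrupts on x86 requires inline assembly, a compiler extension, because ISO C99 has no notion of interrupts at all:

        /* Hedged sketch: disabling interrupts needs the GCC/ICC extended-asm
         * extension; there is no way to express this in pure C99. */
        static inline void my_irq_disable(void)
        {
        #if defined(__GNUC__) || defined(__INTEL_COMPILER)
                __asm__ __volatile__("cli" ::: "memory");
        #else
        #error "no standard C way to mask interrupts"
        #endif
        }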

  • compilers? (Score:2, Insightful)

    by __aardcx5948 ( 913248 ) on Thursday February 26, 2009 @07:28PM (#27005697)
    So GCC is slow compared to the Intel compiler?
  • Yes! (Score:3, Insightful)

    by Arakageeta ( 671142 ) on Thursday February 26, 2009 @07:36PM (#27005791)
    I completely agree. I ran into this when I was working as a software architect on a project that had been around for a while. The contracts required compiler compatibility instead of standards compatibility, which made updates to the dev environment much more complicated. The contracts should have specified standards, but their writers didn't know any better; the customer had no need to stick to a particular compiler product/version. It also makes your code more dependent upon the compiler's quirks. I would mod you up if I had the points.
  • by NekoXP ( 67564 ) on Thursday February 26, 2009 @07:37PM (#27005805) Homepage

    :)

    I think the point is that ICC has been made "gcc compatible" in certain areas by defining a lot of pre-baked defines, and accepting a lot of gcc arguments.

    In the end, though, autoconf/automake and cmake and even a hand-coded Makefile could easily abstract the differences between compilers so that -mno-sse2 is used on gcc and --no-simd-instructions=sse2 on some esoteric (non-existent, I made that up) compiler. I used to have a couple of projects which happily ran on BSD or GNU userland (BSD make, GNU make, jot vs. seq, gcc vs. icc vs. amiga sas/c :) and all built fairly usable code from the same script automatically depending on the target platform.

    The Linux kernel's over-reliance on GCC and its hardcoded GCC-only options mean you have to port GCC to your platform first, before you can use a compiler that may already have been written by/for your CPU vendor (a good example was always CodeWarrior, but that's defunct now).

    Of course there is always configure-script abuse; for example, you can't build MPlayer for a system with fewer features than the one you're on without specifying 30-40 hand-added options to force everything back down.

    A lot of it comes down to laziness - using what you have and not considering that other people may have different tools. And of course there's the usual Unix philosophy that while you may never need something, it should be installed anyway just because an app CAN use it (I can imagine using a photo application for JPEGs alone, but it will still pull in every image library via the dynamic linker at load time, and all those plugins will be spread across my disk).
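
        On the "gcc compatible" defines mentioned a couple of paragraphs up, a hedged sketch (ordinary userspace C, nothing kernel-specific): ICC defines __GNUC__ and friends so gcc-targeted code builds unmodified, which means any check for "real" GCC has to rule out __INTEL_COMPILER first:

        #include <stdio.h>

        int main(void)
        {
        /* ICC intentionally defines __GNUC__, so test for ICC before GCC */
        #if defined(__INTEL_COMPILER)
                printf("Intel ICC, version macro %d\n", __INTEL_COMPILER);
        #elif defined(__GNUC__)
                printf("GCC %d.%d\n", __GNUC__, __GNUC_MINOR__);
        #else
                printf("some other compiler\n");
        #endif
                return 0;
        }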

  • Re:Portability.. (Score:5, Insightful)

    by mrsbrisby ( 60242 ) on Thursday February 26, 2009 @08:11PM (#27006227) Homepage

    GCC is a great compiler, but relying on it excessively is a bad thing for the quality of kernel code ... it is simply healthy for the kernel to be compilable across more compilers.

    Prove it.

    The opposite (relying on GCC is a good thing for code quality) seems obvious to me. The intersection of GCC's and ICC's feature sets is smaller than GCC's alone, so I would assume that targeting the bigger set would afford greater flexibility of expression. As a result, the code would be cleaner and easier to read.

    Targeting only the intersection of ICC and GCC may result in compromises that confuse or complicate certain algorithms.

    Some examples from the linked application include:

    • removing static from definitions
    • disabling a lot of branch prediction optimizations
    • statically linking closed-source code
    • tainting the kernel, making debugging harder

    I cannot fathom why anyone would think these things are "good" or "healthy", and hope you can defend this non-obvious and unsubstantiated claim.
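
        For context on the branch-prediction item in that list, a hedged sketch of the usual kernel-style pattern (not the actual LinuxDNA change; HAVE_BUILTIN_EXPECT is a hypothetical configuration switch): likely()/unlikely() hints boil down to __builtin_expect(), and disabling them just means defining them as plain expressions:

        /* Hedged sketch: branch hints degrade to no-ops when the builtin is
         * unavailable or deliberately switched off. */
        #ifdef HAVE_BUILTIN_EXPECT                 /* hypothetical switch */
        #  define likely(x)   __builtin_expect(!!(x), 1)
        #  define unlikely(x) __builtin_expect(!!(x), 0)
        #else
        #  define likely(x)   (x)   /* hint dropped; compiler guesses alone */
        #  define unlikely(x) (x)
        #endif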

    (some of us still remember the gcc->pgcc->egcs->gcc debacle).

    When pgcc showed up, it caused lots of stability problems, and there were major distribution releases that made operating a stable Linux system very difficult: GCC 2.96 sucked badly.

    The fact that gcc2 still outperforms gcc4 in a wide variety of scenarios is evidence that the change wasn't justified on technical grounds alone, and LLVM may prove RMS's "political" hesitations right after all.

    I'm not saying gcc4 isn't better overall, and I'm not saying we're not better for being here. I'm saying it's not as clear as you suggest.

  • by Punto ( 100573 ) <puntobNO@SPAMgmail.com> on Thursday February 26, 2009 @08:43PM (#27006597) Homepage
    Why don't they improve GCC to get that 8-9% to 40% performance gain? It's not like Intel has some kind of secret magical piece of code that lets them have a better compiler.
  • by Anonymous Coward on Thursday February 26, 2009 @08:45PM (#27006613)

    It was completely intentional. Intel's CPUID protocol defines how to determine the capabilities of a CPU. AMD follows this protocol. Intel could have checked the CPUID for the level of SSEx support, etc. Instead they checked for the "GenuineIntel" string before enabling support for extra instructions that speed up many diverse activities (e.g. copying memory).

    Perhaps your gullibility meter needs recalibration.
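
        To make that concrete, a hedged sketch (plain userspace C using the GCC/ICC <cpuid.h> helper, not anything from Intel's runtime): the capability check described above asks CPUID leaf 1 for the actual feature bits instead of matching the "GenuineIntel" vendor string:

        #include <cpuid.h>
        #include <stdio.h>

        int main(void)
        {
                unsigned int eax, ebx, ecx, edx;

                /* leaf 1: feature flags, valid on Intel and AMD alike */
                if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
                        printf("SSE2: %s\n", (edx & bit_SSE2) ? "yes" : "no");
                        printf("SSE3: %s\n", (ecx & bit_SSE3) ? "yes" : "no");
                }
                return 0;
        }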

  • This is ancient (Score:2, Insightful)

    by scientus ( 1357317 ) <instigatorircNO@SPAMgmail.com> on Thursday February 26, 2009 @09:51PM (#27007299)

    This kernel is so ancient that any possible performance gains are outweighed by newer kernels' performance, bug fixes, and improved driver support. Plus, why would someone want to toss away their freedom by using a non-free compiler? Also, does the Intel compiler work with AMD processors?

    There is so much against this that it is useless. Until Intel open-sources the compiler, it works with up-to-date kernels, and it works on all x86- and x86_64-compatible hardware (I'm not sure whether that last one is a problem), I'm not interested.

  • by thesupraman ( 179040 ) on Friday February 27, 2009 @01:47AM (#27008643)

    It depends. If the system is distributed, the hotspots (i.e. performance bottlenecks) could quite easily be in network latency and throughput, something that could reasonably be impacted here.

    Of course, if it's not, you are 100% right; however, don't underestimate the proportion of CPU time spent in the kernel in some situations (databases and distributed apps, for example).

  • It's their compiler; they are damn well allowed to do what they want - call me when AMD pours that kind of resource into having their own compiler.

    Sure, they can do what they want. But it's generally a bad idea to lie about what you've done once you're caught red-handed. You go from losing a lot of respect to losing nearly all respect in the minds of many customers.

  • by Tokerat ( 150341 ) on Friday February 27, 2009 @03:04AM (#27008993) Journal

    Ok I'll bite. By your logic, Intel should:

    • Spend the time and money to test competitors' current CPUs against their compiler.
    • Take the blame when their compiler causes unforeseen problems on current or newer models due to changes, or aspects they did not know to test for.

    While I agree that something like --optimize_anyway_i_am_not_stupid would have been a good idea, does it make more sense for Intel to spend money and time making their competition faster? You'd need to make a lot of assumptions to think that optimizations for one CPU will work well for another, even from the same manufacturer. Besides, doesn't AMD have their own compiler?

  • Re:Portability.. (Score:3, Insightful)

    by thesupraman ( 179040 ) on Friday February 27, 2009 @03:06AM (#27009001)

    Oh, wait a second, I see the problem here.

    You are a moron.

    What exactly do you think happens when GCC changes behavior (as it has done in the past, many times) within the C spec?

    Perhaps we better freeze on version x.y.z of GCC?

    The same would apply to, for example, assumptions about branch prediction - gcc can, and quite probably one day will, change behavior - do you really want major features of the kernel to change behavior when that happens?
    The good effect this will have, when addressed properly (and remember, what you are referencing above is a small group making a first attempt to achieve this outcome), is that anything worthwhile AND compiler-specific will become clearly marked and optional to the compiling process - therefore increasing the total quality of the kernel. Such assumptions should NEVER be simply spread through the code unmarked.

    By supporting a range of compilers we help make the kernel MORE robust to such changes, and these are both highly competent compilers, so the 'intersection' of features is actually most of the C/C++ specs.

    Of course you obviously have zero experience of such things. You seem to think 'better' means more highly tuned code - try maintaining a major project for more than 6 months, and you may well learn a thing or two.

    pgcc, and more importantly egcs, were the only things that broke the complete stagnation and navel-gazing of gcc that was threatening to cause its death. Without the hard work and risk taken by the developers of both, gcc would not be nearly as strong as it is now.

    Again, you don't seem to know what you are talking about. Do you perhaps measure compiler 'goodness' by Dhrystone MIPS?
