
High Performance Linux Kernel Project — LinuxDNA 173

Thaidog submits word of a high-performance Linux kernel project called "LinuxDNA," writing "I am heading up a project to get a current kernel version to compile with the Intel ICC compiler, and we have finally had success in creating a kernel! All the instructions to compile the kernel are there (geared towards Gentoo, but obviously it can work on any Linux) and it is relatively easy for anyone with the skills to compile a kernel to get it working. We see this as a great project for high-performance clusters, gaming and scientific computing. The hope is to maintain a kernel source alongside the current kernel ... the mirror has 2.6.22 on it currently, because there are a few changes after .22 that make compiling a little harder for the average Joe (but not impossible). Here is our first story in Linux Journal."
  • GCC compatibility (Score:2, Interesting)

    by psergiu ( 67614 )
    Why don't they try to make ICC fully GCC-compatible so we can recompile EVERYTHING with ICC and get an 8-9% to 40% performance gain?
    • by NekoXP ( 67564 ) on Thursday February 26, 2009 @06:06PM (#27005395) Homepage

      Compilers shouldn't need to be compatible with each other; code should be written to standards (C99 or so) and Makefiles and configure scripts should weed out the options automatically.

      • Yes! (Score:3, Insightful)

        by Arakageeta ( 671142 )
        I completely agree. I ran into this when I was working as a software architect on a project that had been around for a while. The contracts required compiler compatibility instead of standards compatibility, which made updates to the dev environment much more complicated. The contracts should have specified standards, but their writers didn't know any better -- the customer had no need to stick to a particular compiler product/version. It also makes your code more dependent upon the compiler's quirks. I would mod you
      • by Anonymous Coward on Thursday February 26, 2009 @06:46PM (#27005923)

        They should think about moving to a Java kernel. They could just bootstrap one of the new, clever "Just-In-Time" Virtual Machines at powerup.
        These JVMs are able to dynamically optimize the running code in real-time, far beyond what could be achieved by C or C++ compilers, without any performance degradation.
        A Java kernel would likely run at least 50 times faster than the very best hand-coded assembler - and since the language is completely type-safe and doesn't implement dangerous legacy language features such as pointers or multiple inheritance, it would be unlikely ever to crash.

        • by cerberusss ( 660701 ) on Friday February 27, 2009 @05:16AM (#27009773) Journal

          A Java kernel would likely run at least 50 times faster than the very best hand-coded assembler

          I'm going to have to agree with you here. However, with all the major browser producers concentrating on JavaScript speed recently, I'd say it's much better to use JavaScript instead of plain Java. Think about it: JavaScript is where the speediness is. Also, since almost every browser supports it, you could just boot the kernel using any browser. This could potentially get the kernel out of the hands of that bunch of self-righteous, elitist Linux hackers who are currently totally disconnected from users like you and me. ~

      • Re:GCC compatibility (Score:4, Informative)

        by SpazmodeusG ( 1334705 ) on Thursday February 26, 2009 @07:36PM (#27006501)
        And what is the C99 standard way to tell the compiler to pack structures with 1-byte alignment?

        (Hint: there is no standard way)
        • Re:GCC compatibility (Score:5, Informative)

          by NekoXP ( 67564 ) on Thursday February 26, 2009 @08:53PM (#27007317) Homepage

          There isn't one, so what you do is use pragmas (I remember #pragma pack(1)) or attributes (__attribute__((packed)) or something similar).

          Of course they're compiler-specific but there's no reason that code can't be written wrapped in defines or typedefs to stop compiler-specific stuff getting into real production code nested 10 directories down in a codebase with 40,000,000 lines.

          Linux does an okay job of this - but since coders usually reference the compiler manual to use these esoteric pragmas and types, they are usually told "this is specific to GCC" (the GCC manual does a good job of noting this), so they should be wrapping them by default to keep their application portable and maintainable for future compilers (especially if the attribute name or behavior changes - as has happened across many a GCC release, let alone other compilers).

          What usually nukes it (and why linux-dna has a compiler wrapper) is because they're hardcoding options and doing other weird GCC-specific crap. This is not because they are lazy but because the Linux kernel has a "we use GCC so support that, who gives a crap about other compilers?" development policy and it usually takes some convincing - or a fork, as linux-dna is - to get these patches into mainline.
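          As a minimal sketch of the wrapping described above (the PACKED macro name is made up for this example; ICC accepts the GCC-style attribute, so both compilers share a branch):

              /* Hide the compiler-specific spelling behind one define so
               * only this header knows about it. */
              #if defined(__GNUC__) || defined(__INTEL_COMPILER)
              #  define PACKED __attribute__((packed))
              #else
              #  define PACKED /* add a #pragma pack() wrapper for other compilers */
              #endif

              struct on_wire_header {
                  unsigned char type;
                  unsigned int  length;
              } PACKED;            /* 5 bytes under GCC/ICC instead of 8 */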

          • Sure, I could put all the compiler directives inside #if-#else blocks, but how do I handle possible new compilers with new directives that I don't even know about yet?
            Like the kernel authors, I could do everything you say and still have my code break in a compiler that does things differently.

            The only real solution is for compilers to all start doing things in a fairly standard way. Which leads us back to the great-grandparent's suggestion...
            • Re:GCC compatibility (Score:4, Interesting)

              by NekoXP ( 67564 ) on Friday February 27, 2009 @12:34AM (#27008553) Homepage

              I find it hard to believe that the Linux kernel developers have never heard of ICC. Or, to take other examples, never used CodeWarrior or XL C (IBM's PPC compiler, especially good for POWER5 and Cell) or DIAB (or Wind River Compiler, or whatever they call it now). Or even Visual C++. Personally I've had the pleasure of using them all.. they all do things differently, but you manage when a development team uses more than one. I once worked on a team where most of the developers had DIAB, but they didn't want to pay for licenses for EVERYONE, so it was just the team leaders and release engineering guys who had it; the rest of us got GCC instead. We had to be mindful not to break the release builds, and with that work ethic everything went pretty much fine all round.

              All of them have at one time produced - or still today produce - much better code and have much better profiling than GCC, and they are used a lot in industry. If the commercial compiler doesn't do what you want or is too expensive, GCC is your fallback. Linux turns this on its head because it "wants" to use as much free GNU software as possible, but I don't think the development process should be so inhibited as to ignore other compilers - especially considering they are generally far better optimized for a given architecture.

              As a side note, it's well known that gcc 2.95.3 generates much better code on a lot of platforms, yet some apps out there refuse to compile with gcc 2.x (I'm looking at rtorrent here.. mainly because it's C++ and gcc 2.x C++ support sucks; this is another reason why commercial compilers are still popular) and some only build with other versions of gcc, with patches flying around to make sure things build with the vast majority. Significant amounts of development time are already "wasted" on differences between versions of the SAME compiler, so putting ICC or XCC support in there shouldn't be too much of a chore, especially since they are broadly GCC-compatible anyway.

              Like the article said, most of the problem - and the reason they have the wrapper - is to nuke certain gcc-specific and arch-specific arguments to the compiler, and the internal code changes are mostly making sure Linux has those differences implemented. There is a decent white paper on it here [intel.com]. The notes about ICC being stricter in syntax checking are enlightening: if you write some really slack code, ICC will balk, while GCC will happily chug along generating whatever code it likes. It's probably better all round (and might even improve the code quality generated by GCC - note the quote about GCC only "occasionally" doing the "right" thing when certain keywords are missing) if Linux developers are mindful of these warnings. But as I've said elsewhere in this thread, Linux developers need some serious convincing to move away from GCC (I've even heard a few say "well, you should fix GCC instead" rather than take a patch to fix their code to work in ICC).

              • > I've even heard a few say "well, you should fix GCC instead"

                Well what's wrong with that? If GCC is parsing "bad" code without giving warnings, then GCC should be fixed. The bad code can be fixed to avoid those warnings.

              • by guruevi ( 827432 )

                Why they don't do it:

                1) Historic reasons. The project started small with GCC and the code has been built up from there. Through that historic code it became increasingly difficult to use non-GCC compilers.

                2) Compatibility: as with no. 1, the project started with GCC and would require a bit of rewriting in the core functions, which the people with non-GCC compilers won't always understand. It might also be difficult to find alternatives for stuff that isn't in the C/C++ standard but can be done compiler-specifically. It would bec

      • Re:GCC compatibility (Score:5, Interesting)

        by forkazoo ( 138186 ) <wrosecrans AT gmail DOT com> on Thursday February 26, 2009 @07:58PM (#27006793) Homepage

        Compilers shouldn't need to be compatible with each other; code should be written to standards (C99 or so) and Makefiles and configure scripts should weed out the options automatically.

        Unfortunately, writing an OS inherently requires making use of functionality not addressed in the C standards. If you stick only to behavior well defined by the ISO C standards you *can* *not* write a full kernel. Doing stuff that low-level requires occasional ASM, and certainly some stuff dependent on a particular hardware platform. I think that being as compiler-portable as it is hardware-portable should certainly be a goal; the ability to build on as many platforms as possible certainly helps shake out bugs and bad assumptions. But just saying "clean it up to full C99 compliance, and don't do anything that causes undefined behavior" would be ignoring the actual reality of the situation, and makes as much sense as porting the whole kernel to Java or Bash scripts.
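        To make that concrete, here is a minimal illustration (x86, GCC-style extended asm, which ICC also understands; the function names are just for the example) of the sort of thing no C standard covers:

            /* Privileged, hardware-specific operations with no standard-C spelling. */
            static inline unsigned long read_cr0(void)
            {
                unsigned long val;
                __asm__ __volatile__("mov %%cr0, %0" : "=r"(val));
                return val;
            }

            static inline void irqs_off(void)
            {
                __asm__ __volatile__("cli" ::: "memory");
            }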

        • by NekoXP ( 67564 )

          See my other reply on the topic.

          I fully understand the limitations of the C99 standard, but there are also ways to stop your code from being tied to a compiler, which it seems a lot of coders simply don't bother to use because supporting GCC is their only goal.

    • Re: (Score:3, Insightful)

      by Punto ( 100573 )
      Why don't they improve GCC to get an 8-9% to 40% performance gain? It's not like Intel has some kind of secret magical piece of code that lets them have a better compiler.
      • Re:GCC compatibility (Score:4, Informative)

        by forkazoo ( 138186 ) <wrosecrans AT gmail DOT com> on Thursday February 26, 2009 @08:02PM (#27006847) Homepage

        Why don't they improve GCC to get an 8-9% to 40% performance gain? It's not like Intel has some kind of secret magical piece of code that lets them have a better compiler.

        To a large extent, they have. ICC really no longer has the performance lead that it once did over gcc. There was absolutely a time when the difference was consistent and significant, but a lot has changed since gcc 2.95 and the egcs days. The 4.x branch in particular has been about improving the optimisation capabilities of the compiler. These days, I generally recommend just going with gcc to anybody who asks me.

      • Personally, I'm waiting for clang to reach feature / compatibility parity with gcc. It should be able to compile code faster than gcc and in many cases produce better optimised binaries. But there is still a lot of work to be done.
    • by Bert64 ( 520050 )

      The performance gap is even bigger on IA64 too...

  • Portability.. (Score:5, Insightful)

    by thesupraman ( 179040 ) on Thursday February 26, 2009 @06:07PM (#27005407)

    IMHO This is a great development, for one important reason.

    Portability of the kernel.

    GCC is a great compiler, but relying on it excessively is a bad thing for the quality of kernel code; the wider the range of compilers used, the more portable and robust the code should become.

    I know there will be the usual torrent of its-just-not-open-enough rants, but my reasoning has nothing to do with that, it is simply healthy for the kernel to be compilable across more compilers.

    It also could have interesting implications with respect to the current GCC licensing 'changes' enforcing GPL on the new plugin structures, etc.

    GCC is a wonderful compiler, however it has in the past had problems with political motivations rather than technical ones, and moves like this could help protect against those in the future (some of us still remember the gcc->pgcc->egcs->gcc debacle).

    Of course no discussion of compilers should happen without also mentioning LLVM, another valuable project.

    • Re:Portability.. (Score:5, Insightful)

      by mrsbrisby ( 60242 ) on Thursday February 26, 2009 @07:11PM (#27006227) Homepage

      GCC is a great compiler, but relying on it excessively is a bad thing for the quality of kernel code ... it is simply healthy for the kernel to be compilable across more compilers.

      Prove it.

      The opposite (relying on GCC is a good thing for code quality) seems obvious to me. The intersection of GCC and ICC is smaller than GCC alone, so I would assume that targeting the bigger set would afford greater flexibility of expression. As a result, the code would be cleaner and easier to read.

      Targeting only the intersection of ICC and GCC may result in compromises that confuse or complicate certain algorithms.

      Some examples from the linked application include:

      • removing static from definitions
      • disabling a lot of branch prediction optimizations
      • statically linking closed-source code
      • tainting the kernel, making debugging harder

      I cannot fathom why anyone would think these things are "good" or "healthy", and hope you can defend this non-obvious and unsubstantiated claim.

      (some of us still remember the gcc->pgcc->egcs->gcc debacle).

      When pgcc showed up, it caused lots of stability problems, and there were major distribution releases that made operating a stable Linux system very difficult: 2.96 sucked badly.

      The fact that gcc2 still outperforms gcc4 in a wide variety of scenarios is evidence this wasn't good for technical reasons, and llvm may prove RMS's "political" hesitations right after all.

      I'm not saying gcc4 isn't better overall, and I'm not saying we're not better for being here. I'm saying it's not as clear as you suggest.

      • Try compiling your C Real Mode code in GCC and get back to me.

      • Re: (Score:3, Insightful)

        by thesupraman ( 179040 )

        Oh, wait a second, I see the problem here.

        You are a moron.

        What exactly do you think happens when GCC changes behavior (as it has done in the past, many times) within the C spec?

        Perhaps we better freeze on version x.y.z of GCC?

        The same would apply to, for example, assumptions about branch prediction - gcc can, and quite probably one day will, change behavior - do you really want major features of the kernel to change behavior when that happens?
        The good effect this will have when addressed properly (and remember w

        • Re: (Score:3, Informative)

          by walshy007 ( 906710 )

          Oh, wait a second, I see the problem here.
          You are a moron.

          First up, personal attacks on the parent do not an argument make; all they do is lessen your credibility.

          By supporting a range of compilers we help make the kernel MORE robust to such changes, and these are both highly competent compilers, so the 'intersection' of features is actually most of the C/C++ specs..

          Of course the intersection of features is the specs... they are the only standardized thing that makes it C. But as has been said, C leaves a LOT to the implementer in order to be flexible; the standard does not specify everything, and operating systems run at such a low level that what they deal with is NOT covered in said specs.

          Furthermore, as for being 'more robust' to breakage whe

  • by whoever57 ( 658626 ) on Thursday February 26, 2009 @06:08PM (#27005429) Journal
    Since all the userland code is still compiled with GCC, what overall performance improvement will this bring?

    Ingo A. Kubblin is quoted as saying:

    "... boost up to 40% for certain kernel parts and an average boost of 8-9% possible"

    Is that an 8-9% overall speedup of applications, or just of kernel tasks?

    • by jd ( 1658 )

      I would imagine that it means for the kernel. We would then need to factor in how much time user applications spend in the kernel. Anything that is I/O-intensive is kernel-intensive. Anything that is malloc-intensive may be kernel-intensive if you're using a VM-based memory pool rather than a pre-allocated one.

      I'm also wondering how this would compare to using Cilk++ and #defining the few keywords it has to the standard keywords when using vanilla GCC or ICC.

      Perhaps there should be a table showing the relat

      • by setagllib ( 753300 ) on Thursday February 26, 2009 @07:22PM (#27006363)

        If your program is malloc-intensive and you care about performance, you may as well just use a memory pool in userland. It is very bad practice to depend upon specific platform optimisations when deciding which optimisations not to perform on your code. Then you move to another operating system like FreeBSD or Solaris and find your assumptions were wrong and you must now implement that optimisation anyway.
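        As a rough sketch of what such a userland pool can look like (the names are made up; it is just a bump allocator): after the single malloc() up front, allocations never enter the kernel.

            #include <stdlib.h>
            #include <stddef.h>

            struct pool {
                char  *base;
                size_t used;
                size_t size;
            };

            static int pool_init(struct pool *p, size_t size)
            {
                p->base = malloc(size);        /* the only call the kernel sees */
                p->used = 0;
                p->size = size;
                return p->base ? 0 : -1;
            }

            static void *pool_alloc(struct pool *p, size_t n)
            {
                n = (n + 15) & ~(size_t)15;    /* keep 16-byte alignment */
                if (p->used + n > p->size)
                    return NULL;               /* pool exhausted */
                void *ptr = p->base + p->used;
                p->used += n;
                return ptr;
            }

            static void pool_release(struct pool *p)
            {
                free(p->base);                 /* everything released at once */
            }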

      • by Jurily ( 900488 )

        We would then need to factor in how much time user applications spend in the kernel. Anything that is I/O-intensive is kernel-intensive.

        What do you mean? I don't think icc will speed up my hard drive.

        • by jd ( 1658 )

          It won't speed up the hard drive, but it should reduce the latency of a context switch (something like 21 microseconds, isn't it?) and it should also reduce the latency involved in going through the various layers of the kernel.

          Yes, this isn't much in comparison to the speed of the drive, but that's not the point. I didn't say it would speed it up by a lot, merely that it would speed up.

          I don't know what the latency is within the kernel in the VFS layer or within the different filesystems (ignoring mechanic

  • ...and 40% faster in parts. FACTS - give me some context to judge if this is good or bad.

    Looking at Amdahl's law (golden oldie here): how much time does a PC spend on kernel tasks these days?
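    For a back-of-the-envelope answer, Amdahl's law gives overall speedup = 1 / ((1 - f) + f/s); the 10% kernel share below is an assumption, not a measurement.

        #include <stdio.h>

        int main(void)
        {
            double f = 0.10;    /* assumed fraction of time spent in the kernel */
            double s = 1.09;    /* kernel parts ~9% faster */
            printf("overall speedup: %.2f%%\n",
                   (1.0 / ((1.0 - f) + f / s) - 1.0) * 100.0);
            return 0;           /* prints roughly 0.83% */
        }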

  • by Anonymous Coward on Thursday February 26, 2009 @06:26PM (#27005667)
    Personally, I am looking forward to the Low Performance Linux Kernel project.

    You see, I'm a consultant and am paid by the hour.

  • compilers? (Score:2, Insightful)

    So GCC is slow compared to the Intel compiler?
    • Does GCC run faster if compiled with ICC?

      That would take the biscuit.

    • Re: (Score:3, Interesting)

      I can't judge because my experience with ICC is minimal. GCC is constantly improving, but I feel it concentrates more on platform support than on performance. The GCC team has to work on ARM/MIPS/SPARC/whatever, while ICC only needs to work on x86.

      So I'm not surprised to see GCC falling behind Intel in x86 performance. In fact, only recently did GCC begin to support local variable alignment on the stack, which I think is a basic optimization technique. (See the 4.4 pre-release notes http://gcc.gnu.org/gcc-4.4/ch [gnu.org]

      • Re: (Score:3, Informative)

        by dfn_deux ( 535506 )

        The GCC team has to work on ARM/MIPS/SPARC/whatever, while ICC only needs to work on x86.

        ICC supports IA-32, Itanium 1 & 2, x86-64, and XScale. Not that it takes much of a leg out from under your argument, but if you are going to argue the point you should at least get it accurate. Ah yeah, almost forgot to mention all the extended instruction sets too... SSE, SSE2, SSE3, MMX, MMX2, etc.

    • by Cheapy ( 809643 )

      ICC is better at optimization than GCC.

  • by steveha ( 103154 ) on Thursday February 26, 2009 @06:38PM (#27005817) Homepage

    A few years ago someone figured out that Intel's compiler was engaged in dirty tricks: it inserted code to cause poor performance on hardware that did not have an Intel CPUID.

    http://techreport.com/discussions.x/8547 [techreport.com]

    But perhaps they have cleaned this up before the 10.0 release:

    http://blogs.zdnet.com/Ou/?p=518 [zdnet.com]

    steveha

    • by Jah-Wren Ryel ( 80510 ) on Thursday February 26, 2009 @07:08PM (#27006195)

      A few years ago someone figured out that Intel's compiler was engaged in dirty tricks: it inserted code to cause poor performance on hardware that did not have an Intel CPUID.

      It wasn't necessarily malicious; all the compiler did was default to a "slow but safe" mode on CPUIDs that it did not recognize. Intel's reasoning was that they only tweaked the code for CPUs that they had qual'd the compiler against. Seeing as how they were Intel, they were not particularly interested in qualing their compiler against non-Intel chips. In hindsight, what they should have done is add an "I know what I'm doing, dammit!" compilation flag that would enable the optimizations anyway.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        It was completely intentional. Intel's CPUID protocol defines how to determine the capabilities of a CPU. AMD follows this protocol. Intel could have checked the CPUID for the level of SSEx support, etc. Instead they checked for the "GenuineIntel" string before enabling support for extra instructions that speed up many diverse activities (e.g. copying memory).

        Perhaps your gullibility meter needs recalibration.
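        For what it's worth, the vendor-neutral check is only a few lines. A sketch, assuming GCC's <cpuid.h> helper (SSE2 is reported in CPUID leaf 1, EDX bit 26):

            #include <cpuid.h>
            #include <stdio.h>

            #define EDX_SSE2 (1u << 26)   /* CPUID leaf 1, EDX bit 26 */

            int main(void)
            {
                unsigned int eax, ebx, ecx, edx;

                if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (edx & EDX_SSE2))
                    puts("SSE2 present: the fast path is safe, whoever made the CPU");
                else
                    puts("no SSE2: fall back to the generic path");
                return 0;
            }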

        • by Tokerat ( 150341 ) on Friday February 27, 2009 @02:04AM (#27008993) Journal

          Ok I'll bite. By your logic, Intel should:

          • Spend the time and money to test competitors' current CPUs against their compiler.
          • Take the blame when their compiler causes unforeseen problems on current or newer models due to changes, or aspects they did not know to test for.

          While I agree that something like --optimize_anyway_i_am_not_stupid would have been a good idea, does it make more sense for Intel to spend money and time making their competition faster? You'd need to make a lot of assumptions to think that optimizations for one CPU will work well for another, even from the same manufacturer. Besides, doesn't AMD have their own compiler?

      • by Anonymous Coward on Thursday February 26, 2009 @07:57PM (#27006783)

        It wasn't necessarily malicious

        Like Hell it wasn't. Read this and see if you still believe it wasn't malicious.

        http://yro.slashdot.org/comments.pl?sid=155593&cid=13042922 [slashdot.org]

        Intel put in code to make all non-Intel parts run a byte-by-byte memcpy().

        Intel failed to use Intel's own documented way to detect SSE, but rather enabled SSE only for Intel parts.

        Intel's C compiler is the best you can get (at least if you can trust it). It produces faster code than other compilers. So, clearly the people working on it know what they are doing. How do you explain these skilled experts writing a byte-by-byte memcpy() that was "around 4X slower than even a typical naive assembly memcpy"?

        People hacked the binaries such that the Intel-only code paths would always be taken, and found that the code ran perfectly on AMD parts. How do you then believe Intel's claims that they were only working around problems?

        I'm pissed at Intel about this. You should be too.

        • It's their compiler; they are damn well allowed to do what they want - call me when AMD pours that kind of resource into having their own compiler.

          Of course, developers are also free to simply ignore the compiler, and hence this situation righted itself pretty quickly and naturally.

          I wonder, do you consider RMS's current moves on GCC to also be "malicious", since they could in effect result in lower performance for end users than is possible, and are defined along political lines?

          • Re: (Score:3, Insightful)

            It's their compiler; they are damn well allowed to do what they want - call me when AMD pours that kind of resource into having their own compiler.

            Sure, they can do what they want. But it's generally a bad idea to lie about what you've done once you're caught red-handed. You go from losing a lot of respect to nearly all respect in the minds of many customers.

          • by JohnFluxx ( 413620 ) on Friday February 27, 2009 @01:54AM (#27008939)

            > It's their compiler; they are damn well allowed to do what they want - call me when AMD pours that kind of resource into having their own compiler.

            ARM put money into GCC. That's far better than them trying to make their own compiler.

            • by tlhIngan ( 30335 )

              It's their compiler; they are damn well allowed to do what they want - call me when AMD pours that kind of resource into having their own compiler.

              ARM put money into GCC. That's far better than them trying to make their own compiler.

              ARM may have put money into GCC, but... ARM also sells their own compiler [arm.com] - RealView Development Suite (RVDS).

              As for why ARM would do both - RVDS is a high-performance ARM compiler and it's ARM's own code, so they'll put a lot of optimizations into it. (Ignore the ARM/GNU compatibility

          • It's their compiler; they are damn well allowed to do what they want - call me when AMD pours that kind of resource into having their own compiler.

            This sort of "a company can do anything it wants with its own products" comments appear almost every time someone mentions anti-competitive behavior, and then people explain that no, a company should not be allowed to leverage a monopoly position to further entrench itself. Should the government allow practices like, e.g., dumping, the market would be dominated by a very small number of mega-corporations, ruining the economy.

            Seriously, are you a troll?

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Nope, they have not changed that, and I think it is quite bad behavior for Intel.

      However, they do _not_ insert _bad_ code. What they do is prevent code optimized for the newest Intel CPUs from running on non-Intel CPUs, even if all the instructions used are present. I think -xW (use SSE, SSE2, optimize for Pentium 4) is the highest that will run on AMD.

      However, in almost all cases the Intel compilers will still produce the fastest binaries on AMD. Not only compared to GCC, but also compared to other commerci

  • I've always wondered if anyone has spent time trying to develop optimizations for the kernel when various specific instruction sets are detected.

      • Both Intel and AMD have contributed code before. You figure if anyone knows how to optimize code for specific processor instruction sets it would be them. It would be a neat way for them to contribute.

      • There are a few spots of the kernel that do use hand-crafted SSE assembly (a quick glance says RAID calculation is one area [linux.no] (also here [linux.no]) and a particular crypto routine [linux.no] is another), but it is quite rare. Up until SSE4, SSE was really targeted at multimedia applications that contained a lot of floating-point arithmetic. Generally floating point is avoided within the kernel, so the maintenance pain of crafting an SSE-optimised routine alongside a generic C version would not be worth it. Seemingly when you go to w

    • by joib ( 70841 )
      The kernel uses lazy context switching for floating point registers. A side-effect of this is that the kernel itself cannot use those registers.
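      Which is why the few places mentioned above that do touch SSE/FP state have to bracket it explicitly; a minimal sketch, assuming the in-kernel kernel_fpu_begin()/kernel_fpu_end() API (the helper name below is made up):

          #include <asm/i387.h>   /* kernel_fpu_begin()/kernel_fpu_end() on 2.6.x x86 */

          static void xor_block_sse(void *dst, const void *src, unsigned long len)
          {
              kernel_fpu_begin();   /* saves FP/SSE state and disables preemption */
              /* ... SSE-accelerated loop would go here ... */
              kernel_fpu_end();     /* restores state */
          }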
  • Unimpressed with ICC (Score:5, Interesting)

    by Erich ( 151 ) on Thursday February 26, 2009 @08:19PM (#27007033) Homepage Journal
    We tried ICC on our simulator. The result: 8% slower than GCC. On Intel chips. Specifying the correct architecture to ICC.

    We were not impressed.

    • I call BS. There are cases where GCC can beat ICC; however, there are many more where ICC is significantly better.

      My bet, either you are full of BS, or you 'tried' a rather specific and limited codebase.

      I also suspect your codebase was developed under gcc and then just thrown at icc? hmmmm?

      ICC is a VERY impressive compiler; GCC is a quite good compiler. We are lucky to have both (and then a few other options as well).

      • Instead of aggressively attacking and answering in generalities ("there are many cases where ICC is better"), care to explain how you formed your opinion?

      • I suspect you don't realize who you are messing with. 151 [numbergossip.com] is a palindromic prime (but then again so is one of the factors [numbergossip.com] of 179040). Never mind.

        --
        2*7*68213 [mazes.com]
  • This is ancient (Score:2, Insightful)

    by scientus ( 1357317 )

    This kernel is so ancient that any possible performance gains are outweighed by newer kernels' performance improvements, bug fixes, and better driver support. Plus, why would someone want to toss away their freedom by using a non-free compiler? Also, does the Intel compiler work with AMD processors?

    There is so much against this that it is useless. Until Intel open-sources the compiler, it works with up-to-date kernels, and it works on all x86- and x86_64-compatible hardware (I'm not sure if this is a problem), I'm not interested.

    • How can you throw away your freedom by compiling free software source code with a non-free compiler? That makes no sense at all.

  • This is very relevant to my interests. We tried a while back to compile a Linux kernel with ICC and ran into too many issues to list. We do a lot of work with fluid dynamics and it's ALL CPU-based - any increase in speed would be appreciated. With the economy the way it is, and a lot of companies shelving projects, budgeting for new clusters isn't on the list of priorities.
    • Re: (Score:2, Informative)

      I'm afraid the boost in kernel code won't help you much. Since you're doing fluid physics, I guess the hotspots are in the floating-point math, and your code doesn't context-switch often. In that case, kernel speed isn't that important.

      Well, I'm just saying it. I hope I'm wrong :)

      • Re: (Score:3, Insightful)

        by thesupraman ( 179040 )

        It depends: if the system is distributed, the hotspots (i.e. performance bottlenecks) could quite easily be in network latency and throughput, something that could reasonably be impacted here.

        Of course, if it's not, you are 100% right; however, don't underestimate the proportion of CPU time the kernel takes in some situations (databases and distributed apps, for example).

  • The HPC and gaming communities probably won't care much about this, aside from the tweakers who spend $500 to overclock a $200 CPU to perform like a $400 CPU. The vast majority of workloads spend very little time in the kernel. The glaring exception is the network stack, where a lot of rather CPU-intensive packet mangling, routing, firewalling, IPSEC tunneling, and header processing is done entirely in kernelspace. Ever try saturating a 10 Gbit Ethernet interface? If you don't do some
