Linux Software

Porting Linux Software to the IA64 Platform 160

axehind writes "In this Byte.com article, Dr Moshe Bar explains some of the differences between IA32 and IA64. He also explains some things to watch out for when porting applications to the IA64 architecture."
This discussion has been archived. No new comments can be posted.
  • Awesome! (Score:3, Funny)

    by Wakko Warner ( 324 ) on Thursday May 16, 2002 @02:43PM (#3532240) Homepage Journal
    Now I, and the other two IA64 users, will have some programs to run on our Linux-64 boxes!

    Can someone please port nethack for us?

    - A.P.
    • Re:Awesome! (Score:3, Informative)

      by rcw-home ( 122017 )
      Can someone please port nethack for us?

      Maybe you could try the patches here [debian.org]?

    • This might be a huge surprise to you, but a very large percentage of Linux apps (over 90%) port to Linux/IA-64 without any modifications.
      This is largely thanks to the fact that Linux already runs on 64-bit architectures -- Alpha, Sparc, etc. -- and most apps have been adapted to those already. There's not much conceptual difference in the high-level programmer's view between IA64 and any other 64-bit Linux platform.
      • by Anonymous Coward
        There's not any conceptual difference in the high-level programmer's view (nor the C programmer's view for that matter) between IA64 and any other POSIX platform, 64-bit or otherwise, either. The code that breaks, unless it's really CPU-specific stuff, breaks because it was coded poorly. Most of the unportable code out there is really unwarranted.
  • by Anonymous Coward
    The major difference between IA32 and IA64 is price.
  • by Anonymous Coward
    IA64 is twice as wide as IA32. Therefore, it will be necessary to remember to halve the size of all variables to compensate in your programs. Additionally, we now have to type twice as much for each command or function. It really sucks that we will no longer see Ms. Portman onscreen in Star Wars anymore. So, in conclusion, nuts to IA64: I'm sticking with my Athlon, thank you very much.
  • by Ed Avis ( 5917 ) <ed@membled.com> on Thursday May 16, 2002 @02:55PM (#3532302) Homepage
    Well obviously what we'll see next is a kernel extension that dynamically 'ports' all your applications to IA-64 and transparently migrates them to IA-64 machines elsewhere in the cluster. When Intel's next Great Leap Forward is released, you'll be able to transparently migrate to that as well. In fact it will be so transparent, you won't notice any difference and you can continue working at your 80286-based machine without any interruption.
    • Last summer at the London Linux Expo I asked this HP reseller (who had a big itanic display) "what about legacy code?"

      He replied,"16-bit code?"

      I sighed and moved on...
      • Karma 39 and still posting at 0.

        How do you do that? It'd be great for off-topic posts like this one (that should be modded to 0 anyway)
    • Unlikely.

      Migration of a running process, even when going between identical processors, is expensive. Going even to a similar processor would be more so. (And going from, say, a Sparc to an m68k is totally out of the question, not that you're suggesting that.)

      It's *really* hard to justify a policy of process migration in a cluster except with extremely long-running, massively-parallel jobs. For most stuff, you'll waste less time just letting it finish. (GLUnix [berkeley.edu] does do process migration. Note that when you come back to a workstation that's been horfed by GLUnix, you'll be waiting about two minutes before you get your UI back.)

      As for *starting* IA32 binaries on an IA64 processor, that's doable, but most cross-platform clustering systems function by keeping binaries for all their constituent processor types and having a hacked shell to convert PATH to the architecture-dependent path. (And by "most cross-platform clustering systems", I mean most that have been designed, since I know of none that work.)
    • I don't think automatic porting would be possible.

      You'd have to have a compiler that was smart enough to recognise when a pointer was cast to an int and then instead cast it to a long.

      But now your code has changed: instead of a variable being an int, it's now a long - this is bound to cause problems elsewhere in your code!

      I learnt this lesson long ago when somebody tried to compile a C program I'd written on an Alpha machine, and it complained about casting pointers to ints (I'd wrongly assumed pointers and ints would be the same size on every architecture).

      What I do now is to typedef a pointer_t which can be either int or long, and make sure to use that everywhere pointer arithmetic is required.
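
    For what it's worth, C99 later standardized exactly this idea: <stdint.h> defines intptr_t/uintptr_t, integer types guaranteed to round-trip a pointer, so no hand-rolled pointer_t typedef is needed. A minimal sketch, assuming a C99 compiler:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            int x = 42;
            int *p = &x;

            uintptr_t bits = (uintptr_t) p;   /* always wide enough for a pointer */
            int *q = (int *) bits;            /* round-trip back is well-defined */

            printf("%d\n", *q);               /* prints 42 on IA32 and IA64 alike */
            return 0;
        }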

      • What I do now is to typedef a pointer_t which can be either int or long, and make sure to use that everywhere pointer arithmetic is required.

        Ouch.

        First: Keep pointers in pointer variables. Try not to cast them back and forth to integer variables.

        If you have to, use longs: on every 64-bit Linux port a long is wide enough to hold a pointer (though strictly it's C99's intptr_t, not long, that the standard guarantees for this).

        Roger.
      • I learnt this lesson long ago when somebody tried to compile a C program I'd written on an Alpha machine, and it complained about casting pointers to ints (I'd wrongly assumed pointers and ints would be the same size on every architecture).
        This is a really easy problem to solve: follow the POSIX standards on type names. If you want a u_int32_t, say so. If you want a pointer-to-something, declare it that way, rather than trying to stuff it in some architecture-dependently-sized variable.

        (Note that NetBSD's code is primarily arch-independent--the dependent stuff is mostly hardware initialization--and it compiles just fine on quite a wide array of processor architectures.)
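
        To make the parent's point concrete, here is a small sketch of the "say what you mean" style, using the C99 <stdint.h> spellings (the BSD u_int32_t spelling is equivalent); the struct and its fields are invented for illustration:

            #include <stdint.h>
            #include <stddef.h>

            struct wire_header {
                uint32_t magic;     /* exactly 32 bits on every platform */
                uint32_t length;    /* not 'long', which varies between ports */
            };

            /* sizes and counts belong in size_t, not int */
            size_t payload_bytes(const struct wire_header *h)
            {
                return (size_t) h->length;
            }

        None of these types change meaning between IA32 and IA64, which is the whole point.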
  • I figured this would be coming. I just started my first assembly class as a CS undergrad: a whole new group of registers to memorise!

    • I sincerely hope that your assembly class won't be using x86 (or any derivative thereof)... a nice 6811/68332/PowerPC would be far more useful as a learning tool without the cruft... PowerPC assembly is actually fun...
      • The instructor actually brought that up the first day. He said in the past there had been demand for a PPC version of the class, but since each platform has its own unique instruction set there would be no overlap, and you would just have to learn the language all over again for Intel coding. They mostly go on the demand of the market (they just switched their main taught language from C/C++ to Java), so they pretty much just teach the x86 version now.

        I would have to agree with them that there would be a lot more demand for someone programming assembly on an Intel box than on a Mac.

        I would say what we have done so far is fun though. Any programming can be if you make it.
        • by Anonymous Coward
          So, you learn x86 assembly and Java. I guess you'll do XOR in the long run, eh?
        • True, it can all be fun. I think the register set of the PPC lends itself to some more creative solutions to some problems, and when you look at low level programming in assembly, much of the work is in the embedded space, where there are a *ton* of PowerPCs (and Motorola chips). I wasn't thinking as much about the PC/Mac situation.
        • When I was in college, the only assembly programming we did was for MIPS. For our compiler project, we originally put out MIPS assembly and then retargeted it for the Sparc. I never once had to do any x86 assembly in school.

          There's really not that much demand for any assembly in the industry at large. Even microcode is being done in high-level languages these days. I would wager that most of the people doing assembly coding now are in highly specialized fields, especially embedded programming. So, there isn't necessarily any more demand for x86 assembly programmers than for any other (possibly non-standard) architecture. In my opinion (and this is only opinion), while you should learn an assembly language in school to understand the basic building blocks, the choice of architecture isn't crucial. However, since it's not crucial to learn one or the other, I think they should stick with a simple one. x86 is kind of a mess; MIPS was easy to learn. As far as access to the hardware goes, there are simulators for most processors, which is sufficient for education.

          -J

          • I would wager that most of the people doing assembly coding now are in highly specialized fields, especially embedded programming.

            As an embedded systems designer I can tell you that even here in the embedded world, x86 assembly is nowhere to be found, except for maybe in the low-level init. Even there, though, it's used to get the environment ready for C and calls a C function to start all the real work, very much in the same manner as the Linux kernel source shows.

            Assembly programming is everywhere in the embedded world, just not x86 or anything powerful enough to be able to use a C compiler. I routinely do large Microchip PIC [microchip.com] systems entirely in assembler, but only for one of two reasons: they're not suited for C (the 18Cxxx is a different story now), or I need every last word of program and data space.

        • I'm taking an assembler class myself right now. The real point of taking an assembler class anymore is to help you understand how computers work at a lower level, so you make better decisions programming in a higher level language. Very few programs should need to have asm anymore. Even linux kernel drivers are mostly written in C.

          The x86 is an odd choice if that's the goal, because it's just kludge upon kludge trying to make an 8-bit processor be 16-bit, then 32, and now 64. I don't know any x86 asm, but it is rather wonky and makes you jump through some hoops, as I am told.

          At OSU we are learning SPARC asm. When Sun went from 32 to 64 bit, I think that for the most part they just had to change all the register sizes to 64 bit, because it was designed with the future a little more in mind than the x86. I'm just taking a really basic class (it's actually called "Introduction to Computer Systems"), so we aren't going to deal with things like the differences between a SPARC and UltraSPARC, but like I said it is apparently an easy transition. I'd imagine that the PPC is probably easy too. (Both are 32-bit big-endian with the possibility of 64-bit in the future designed in, I think.)
  • Isn't that the instruction set of the Itanium processor that isn't selling worth crap? I was under the impression that Intel was going to eventually drop (or push to a back burner) support for this and go with x86-64 (the AMD 64-bit architecture being rolled out with the Opteron).
    • Hardly. HP and Intel are pushing full speed ahead with these. Supposedly there will be commercial systems by the end of the year. Also, if IA64 were to be pushed back, Intel would likely switch to its own x86-64 architecture, currently codenamed Yamhill, if I recall.
    • The current generation of IA64 is not really meant for the general public. It is useful only for early adopters (that is, developers). We'll be able to tell whether IA64 succeeded or not a few years down the road, when it is somewhere in its third generation...
    • Intel hasn't made any announcements about their Yamhill, and HPQ still seems to think that IA64 is a go. The new(!) Itanium II is supposed to make this pathetic architecture up to 50% faster. Then it will have integer op performance comparable to today's fastest Celeron.

      Look for Sun and/or IBM to be selling 8-way Hammer machines by this time next year, according to my Spirit Guides.

  • size_t (Score:2, Informative)

    by $pacemold ( 248347 )
    Oh please.

    return (char *) ((((long) cp) + 15) & ~15);

    is not portable.

    return (char *) ((((size_t) cp) + 15) & ~15);

    is much better.
    • Re:size_t (Score:1, Informative)

      by morbid ( 4258 )
      Sad isn't it?
      What he doesn't mention is that most Linux people have gcc, and last time I looked, the object code produced by gcc on IA64 ran at about 20% of the speed of the Intel compiler's. This isn't a criticism of gcc; it's just that the IA64 arch is so different that you absolutely _must_ have the Intel compiler to get any performance out of it.
      • This is the case on every architecture: gcc massively underperforms compared to a vendor compiler. x86 is the architecture where the difference is the smallest, and it's still significant.
        • Not any longer; with icc 6 the margin is now the same on x86 as on most platforms.
          Of course the largest margin is on the Alpha platform, where cxx outperforms gcc by 4x to 5x on floating point and 2x-3x on integer. Ouch!
    • Actually, you probably want to use ptrdiff_t
      • Actually, you probably want to use ptrdiff_t

        No. ptrdiff_t is a signed type. cp is a pointer, and hence an unsigned type. size_t is the correct type to use for the typecast.
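
        Putting the subthread together: size_t works here because it happens to be pointer-sized on the Linux ports under discussion, but C99's uintptr_t is the type actually guaranteed to hold a pointer's bits. A hedged version of the alignment idiom above, assuming a C99 compiler:

            #include <stdint.h>

            /* Round cp up to the next 16-byte boundary. */
            static char *align16(char *cp)
            {
                return (char *) (((uintptr_t) cp + 15) & ~(uintptr_t) 15);
            }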

  • by binaryDigit ( 557647 ) on Thursday May 16, 2002 @03:04PM (#3532344)
    Ah, porting to a homogeneous ISA but with a bigger word size. Funny how it's the same old issues over and over again. Structs change in size, bad assumptions about the size of things such as size_t, sizeof(void *) != sizeof(int) (though sizeof(void *) == sizeof(long) seems to be pretty good at holding true here), etc. Of course now there are concerns about misaligned memory accesses, which on IA32 were just a performance hit. Most IA32 types are not used to being forced to care about this (of course many *NIX/RISC types are very used to it).

    When things were shifting from 16 to 32 bit (seems like just yesterday, oh wait, for M$ it was just yesterday), we had pretty much the same issues. Never had to do any 8 -> 16bit ports (since pretty much everything was either in BASIC, where it didn't matter, or assembler, which you couldn't "port" anyway).

    Speaking of assembler, I guess the days of hand-crafting assembler code are really going to take a hit if IA64 ever takes off. The assembler code would be so tied to a specific rev of EPIC that it would be hard to justify the future expense of doing so. It would be interesting to see what type of tools are available for the assembler developer. Does the chip provide any enhanced debugging capabilities (keeping writes straight at a particular point in execution; can you see speculative writes too)? It'd be cool if the assembler IDE could automagically group parallelizable (is that a word?) instructions together as you are coding.
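
    A tiny illustration of the struct-size point above, with the usual caveat that exact padding is compiler-dependent (the struct is invented for illustration):

        #include <stdio.h>

        struct example {
            char  tag;     /* 1 byte, then padding to align the pointer */
            void *ptr;     /* 4 bytes on IA32, 8 on IA64 */
            long  count;   /* 4 bytes on IA32, 8 on IA64 (LP64) */
        };

        int main(void)
        {
            /* typically prints 12 on IA32 and 24 on IA64/Alpha/Sparc64 */
            printf("sizeof(struct example) = %lu\n",
                   (unsigned long) sizeof(struct example));
            return 0;
        }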
    • by CFN ( 114345 ) on Thursday May 16, 2002 @03:38PM (#3532523)
      Well, the days of hand-crafted assembly, except for a few special purposes, have long since passed. And no one expects assembly writers to be competitive with the compiler's ability to discover and exploit ILP.

      But the example you mention won't actually cause assembly writers any problems: the code won't be tied to a specific version of EPIC.

      The IA-64 assembly contains so-called "stop bits", which specify that the instruction(s) following the bit cannot be run in parallel with those before the bit.
      Those bits have nothing to do with the actual number of instructions that the machine is capable of handling.
      For example, if a program consisted of 100 independent instructions, the assembly would not contain any stop bits. Now the actual machine implementation might only handle 2 or 4 or 8 instructions at a time, but that does not appear anywhere in the assembly. The only requirement is that the machine respect the stop bits.

      Now, you might question how it deals with load-value dependencies (ie. load a value into a register, use that register). Obviously, the load and use must be on different sides of a stop bit, but that would still not guarantee correctness. I'm not sure how IA64 actually works (and someone should reply with the real answer) but I imagine that either: a) loads have a fixed max latency, and the compiler is required to insert as many stop bits between the load and the use to ensure correctness, or b) the machine will stall (like current machines).

      Either way, the whole point of speculative loads is to avoid that being a problem.
      • Actually my point was that for anyone to code in assembler usually implies coding for max performance therefore you would maximize the number of parallel instructions for the particular version of EPIC you were targeting. That in turn would make your code either non portable (going down in # of EU's) or non optimized (going up in # of EU's).

        I too would be interested in hearing about how the cpu handles the dependencies. The only modern "general purpose" cpu that I know of that _doesn't_ stall is the MIPS.
        • It handles dependencies by stalling, if necessary. A true VLIW wouldn't do this. Intel took a whole bunch of good architecture ideas, and then tied themselves to a wall with requirements for compatibility and a desire to shove more features into the package.

          You'd think that starting out with a new architecture/ISA, they'd at least try to keep it simple and then let it grow hairy with age. :)
      • The compiler most certainly can be beaten, even more so today than in the past. I haven't done much asm programming on RISC machines, but on x86, the stuff the compiler puts out is generally garbage. If you're using GCC, it's trivial to beat, as it doesn't know how to deal with the extreme lack of registers on the x86. GCC is constantly going to memory when, with some rearrangement, it's possible to keep many more things in registers. The Intel compiler is much better, but it still isn't hard to beat.

        Furthermore, with the SIMD stuff in the newer x86 processors (MMX, SSE, SSE2), an asm programmer can get huge speedups which the compiler just doesn't know how to exploit. The Intel compiler will use these features in some instances, but far from optimally. Mind you, you have to know the processor well, and for the big wins, you have to optimize for a specific processor, but if you're doing computationally intensive stuff, the gains can be huge.
        • Umm, we are talking about IA64 here... Have a look at the manual, you might then understand.
    • Funny how it's the same old issues over and over again. Structs change in size, bad assumptions about the size of things such as size_t, sizeof(void *) != sizeof(int) (though sizeof(void *) == sizeof(long) seems to be pretty good at holding true here), etc.

      But did you notice that on Windows/IA64, even that won't work? They have a "strange P64 model", where ints and longs stay 32 bits and only pointers are 64 bits. So this kind of thing isn't even homogeneous within the architecture (the Windows guys will have to use long longs or __int64s explicitly, I guess).
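
      A quick probe of the data model makes the difference visible: under Linux/IA64 (LP64) long and pointers are both 64-bit, while under the Windows/IA64 model described above (LLP64) long stays 32-bit, so only a 64-bit integer type like long long can hold a pointer there:

          #include <stdio.h>

          int main(void)
          {
              printf("int=%lu long=%lu long long=%lu void*=%lu\n",
                     (unsigned long) sizeof(int),
                     (unsigned long) sizeof(long),
                     (unsigned long) sizeof(long long),
                     (unsigned long) sizeof(void *));
              /* LP64  (Linux/IA64):   int=4 long=8 long long=8 void*=8 */
              /* LLP64 (Windows/IA64): int=4 long=4 long long=8 void*=8 */
              return 0;
          }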

  • by Ed Avis ( 5917 ) <ed@membled.com> on Thursday May 16, 2002 @03:07PM (#3532355) Homepage
    From the article:
    Back in the early '80s, nobody at Intel thought their microprocessors would one day be used for servers; the inherent architecture of the i386 family shows that clearly.
    That's funny, I thought that the i386 was specifically designed to run MULTICS, which was the very definition of a 'server' operating system (computing power as a utility, like water and electricity). The early 80s was the time Intel designed the i386 wasn't it?
    • What's really funny is that I have an Intel propaganda book for the "brand new 80386." It spends two whole chapters talking about how the 386 is the perfect CPU for LAN servers. Of course, it also had to spend almost that much space describing what a LAN is and what a server might do, since very few people had ever heard of a LAN at that point, much less had one.

    • Yeah, and the early 80's was also when Honeywell stopped Multics development. FWIW, I'd describe MULTICS as a "timesharing" OS, rather than "server", which to me implies "client".
    • 386 designed for Multics? I doubt it. Running multics on a 386 would be like scoring Beethoven's ninth for a kazoo.


      Multics was pretty much tied to its unique mainframe hardware, with loads more weird addressing and virtual memory management features that would never have fit in the paltry 275,000 transistors of the 80386. Also, at the time (1985) Multics was a legacy system; Unix was seen as the operating system of the future, in particular because it was portable to microprocessors and didn't require much special hardware.

      • Don't forget about the 32 rings of protection on the Honeywell hardware instead of the 4 found on i386. As long as you have 2 rings, you can emulate as many rings as you want, but it's slow and a PITA. MULTICS was a beast of an OS that needed a beast of a machine. Thank G_d your classic arcade machines didn't run MULTICS, or it'd take decades to write a good efficient emulator to run them.
  • Debian on the IA64 (Score:5, Informative)

    by hereward_Cooper ( 520836 ) on Thursday May 16, 2002 @03:08PM (#3532364) Homepage
    Debian is already ported to the IA64 -- not sure about the number of packages ported yet, but I know they intend to release the new 3.0 (woody) with an IA64 port.

    See here [debian.org] for more details
    • by BacOs ( 33082 )
      From #debian-ia64 on irc.openprojects.net [openprojects.net]:

      Topic for #debian-ia64 is 95.70% up-to-date, 96.07% if also counting uploaded pkgs

      There are over 8000 packages for i386 (the most up to date architecture) - ia64 currently has about 7650 or so packages built

      More stats are available at buildd.debian.org/stats/ [debian.org]
      • Maybe I'm wrong here, but I would think that getting compiler optimizations right is going to matter a lot more than just getting everything under the sun to build on IA64. And the question on my mind is: will that happen BEFORE IA64 systems reach the market or not?
  • by morbid ( 4258 ) on Thursday May 16, 2002 @03:10PM (#3532373) Journal
    In the article he mentions that itanic can execute IA32 code _and_ PA-RISC code natively, as well as its own, but these features will be taken away sometime in the future.
    Does anyone remember the leaked benchmarks that showed the itanic executing IA32 code at roughly 10% of the speed of an equivalently-clocked PIII?
    I wonder how it shapes up on PA-RISC performance?
    It has to offer some sort of advantage over existing chips, or no one will buy it.
    On the other hand, maybe its tremendous heat dissipation will reduce drastically when they remove all that circuitry for running IA32 and PA-RISC code.
    Which leads me to think: why didn't they invest the time and money in software technology like dynamic recompilation, which Apple did very successfully when they made the transition from 68k to PPC?
    • In the current Itanium, only user-space IA-32 instructions are implemented with hardware assistance. Since this is essentially microcode, this is not too fast. The architecture specifies how the instructions work, which IA-64 registers they use to store IA-32 registers, etc. But the whole thing can be implemented in firmware or software in future revisions of the chip.

      IA-64 machines also offer firmware emulation of IA-32 system instructions. This allows you, in theory, to boot an unmodified IA-32 OS. I've never used it myself, however.

      Last, the PA-RISC support is a piece of software integrated in HP-UX. There's no help from the hardware, except numerous design similarities (IA-64 began its life as HP PA-Wide Word). So you won't be able to run PA-RISC Linux binaries on IA-64 Linux any time soon...

    • Actually, the IA-64 instruction set is based on PA-RISC, as it is the next generation of that architecture. Various projects designing processors with high levels of ILP were conducted at HP, blooming into the partnership between HP and Intel (who had been floating around an idea of a 64-bit x86 architecture, but received poor responses) that created IA-64. HP-UX developers have stated that only minor changes must occur to port an application, and have created what equates to a shell process that converts a PA-RISC instruction directly into its IA-64 counterpart.

      So, PA-RISC is native via design. The x86 instructions were tacked on, originally supposed to be an entire processor, but that proved to be too costly. You have to remember that x86 is hardly needed, as it's mostly important for developers porting and testing applications, and for Microsoft to run 'legacy' applications. McKinley has a newer design that should boost the x86 performance substantially. If extra is needed, I'm sure something similar to Sun's x86 PCI card will be devised.

      As to heat and the rest, taking out the x86 would help, of course. From what I've heard, the control logic on current IA-64 chips is actually smaller than that of the Pentium 4, which was the point of the architecture: simplify. Simplifying meant spending more time on higher-level logic rather than on OOO techniques, etc., that could be done via software. The chip is so large due to *lots* of cache.

      Anyways, a few good links are:
      here [209.67.253.150] and here [clemson.edu].
  • by Ed Avis ( 5917 ) <ed@membled.com> on Thursday May 16, 2002 @03:13PM (#3532385) Homepage
    From the article:
    Quite obviously, inline assembly must be rewritten from scratch.

    I don't see what is so obvious - isn't one of the selling points of Itanium its backward i386 compatibility? Even if running the 64-bit version of Linux it should still be possible to switch the processor into i386-compatible mode to execute some 386 opcodes and then back again. After all, the claim is that old Linux/i386 binaries will continue to work. Or is there some factor that means the choice of 32 bit vs 64 bit code must be made process-by-process?

    Interesting question: which would run faster, hand-optimized i386 code running under emulation on an Itanium, or native IA-64 code produced by gcc? They say that writing a decent IA-64 compiler is difficult, and I'm sure Intel has put a lot of work into making the backwards compatibility perform at a reasonable speed (if not quite as fast as a P4 at the same clock).

    • by NanoGator ( 522640 ) on Thursday May 16, 2002 @03:29PM (#3532471) Homepage Journal
      " isn't one of the selling points of Itanium its backward i386 compatibility?"

      If I remember clearly, the 386 instructions are interpreted instead of being on the chip. That means that those instructions will execute a lot slower. It would work, but it wouldn't work well. It's nice because you could transition to IA64 now and wait for the new software to arrive.

      Personally, I don't think that selling point is that worthwhile, but I'll let Intel do their marketing without me.

    • > I'm sure Intel has put a lot of work into making the backwards compatibility perform at a reasonable speed

      :)

      Look up what happened when:

      1. 80286 was emulating 8086 in protected mode
      2. Pentium Pro was running 16-bit code
      • The PPro sucked at running 16-bit code - because at the time it was designed, Intel didn't anticipate that people would _still_ be running 16-bit stuff in the mid-90s - but the next iteration, the Pentium II, was better. I wonder if McKinley is expected to give a boost to legacy code compared to Itanic.
    • It is an interesting comparison to look at what Digital did to get people from the VAX to the Alpha. They had a sophisticated binary translator, and for low-level code where you had the source, VAX assembler could be recompiled.

      The end result is that it wasn't too difficult to move architectures, even though the Alpha does not know the VAX instruction set and no interpreter was provided.

      The only gotcha is that Digital had to provide some special extra instructions to implement some primitives used by the OS, such as interlocked queues.

      Intel is primarily a hardware company so they would tend to ignore software solutions, but the one-architecture approach kept the Alpha from getting too complicated.

    • Changing modes for a single assembly block is not going to work. All of your data is in IA-64 registers, the processor pipeline is filled with IA-64 instructions, and so forth. Switching is a major slowdown (might as well be another process), and the point of having sections in assembly is to speed up critical sections.

      In any case, what makes it difficult to write an IA-64 compiler is taking advantage of the things that the new instruction set lets you tell the processor. It's not hard to write code for the IA64 that's as good as some code for the i386. It's just that you won't get the benefits of the new architecture until you write better code, and the processors aren't optimized for running code that doesn't take advantage of the architecture.
      • If the entire critical loop is in assembler (not just a small part of it) then it could be worth switching. Although based on what another poster wrote, it sounds like the emulation is so lousy that native code would win no matter how suboptimal gcc's code generation is.
    • by Chris Burke ( 6130 ) on Thursday May 16, 2002 @04:26PM (#3532838) Homepage
      I don't see what is so obvious - isn't one of the selling points of Itanium its backward i386 compatibility?

      Yes. Compatibility. Nothing more. Your old apps will run, but not fast. It's basically a bullet point to try to make the transition to Itanium sound more palatable.

      Or is there some factor that means the choice of 32 bit vs 64 bit code must be made process-by-process?

      It is highly likely that the procedure to change from 64 to 32 bit mode is a privileged operation, meaning you need operating system intervention. Which means the operating system would have to provide an interface for user code to switch modes, just so a small block of inline assembly can be executed. I highly doubt such an interface exists (ick... IA-64 specific syscalls).

      Interesting question: which would run faster, hand-optimized i386 code running under emulation on an Itanium, or native IA-64 code produced by gcc?

      An interesting question, but one for which the answer is clear: gcc will be faster, and by a lot. Itanium is horrible at 32-bit code. It isn't designed for it, it has to emulate it, and it stinks a lot at it.

      They say that writing a decent IA-64 compiler is difficult, and I'm sure Intel has put a lot of work into making the backwards compatibility perform at a reasonable speed (if not quite as fast as a P4 at the same clock).

      Writing the compiler is difficult, but a surmountable task. And your surety does not enhance IA-64 32-bit support in any way. It is quite poor, well behind a P4 at the same clock, and of course at a much lower clock. Even with a highly sub-optimal compiler and the top-notch x86 assembly, you're better off going native on Itanium.

    • isn't one of the selling points of Itanium its backward i386 compatibility

      The article was referring to inline assembly in the kernel code. The IA32 compatibility built into the IA64 CPU is strictly for user mode; all system functions are executed in IA64 mode. Although it would be technically possible to enter kernel mode, switch to the IA32 instruction set, exec some IA32 code and then switch back, in practice this is unfeasible. The IA32 code would be using different data structures, and it couldn't call any of the kernel-internal routines without somehow finding a way to switch from IA32 to IA64 mode and back on each subroutine call.


      The problems of mixing IA32 and IA64 code, especially inside the kernel, are just too difficult and provide little benefit. For these reasons the Linux/IA64 team decided not to support this.

    • Here's some benchmarks [tweakers.net] of a 666 MHz Itanium on x86 code. About as good as a Pentium 100. Not exactly a compelling reason to buy a thousand dollar chip...
  • NULL barfage (Score:3, Informative)

    by dark-nl ( 568618 ) <dark@xs4all.nl> on Thursday May 16, 2002 @03:36PM (#3532507)
    The examples he gives for usage of null pointers are both wrong. When a null pointer (whether written as 0 or NULL) is passed to a varargs function, it should be cast to a pointer of the appropriate type. See the comp.lang.c faq [eskimo.com] for details. The relevant questions are 5.4 and 5.6. But feel free to read them all!
    • The examples he gives for usage of null pointers are both wrong. When a null pointer (whether written as 0 or NULL) is passed to a varargs function, it should be cast to a pointer of the appropriate type.


      Indeed. In the particular case in question, passing a pointer to printf(), this should be (void *) 0 or (void *) NULL.

      At least he's right when he says "The following is coded wrong." :-)

      Bar is also mistaken on at least one other ANSI/ISO C-related point. He writes:

      values of type size_t should use the Z size modifier [to printf], like so:


      In fact, the Z modifier in the %Zu construction is non-standard. There was no portable way to print a size_t in the original ANSI/ISO C (C89). C99 (the 1999 revision of the ISO C standard) uses a lower-case z instead, so portable code should use %zu instead. Of course, the kernel is intended for compilation with gcc, not just any compiler, so Bar's example is correct for the kernel but is not (as he claims) standard.
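
      Both fixes from this subthread in one place, assuming a C99 compiler (gcc accepts %zu as well as its older %Zu extension):

          #include <stdio.h>

          int main(void)
          {
              size_t n = sizeof(long);

              printf("sizeof(long) = %zu\n", n);             /* C99 %zu, not gcc's %Zu */
              printf("a null pointer: %p\n", (void *) NULL); /* cast required in varargs */
              return 0;
          }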

  • by jmv ( 93421 ) on Thursday May 16, 2002 @03:39PM (#3532531) Homepage
    A while ago, I tried compiling and running my program (http://freespeech.sourceforge.net/overflow.html) on a Linux PPC machine and (to my surprise) everything went fine. Does that mean that it should work on ia64 too since (AFAIK) both are big-endian 64-bit architectures?
    • No, because as he says in the article, IA64 is little endian.
    • Whatchew talkin' 'bout, Willis?

      PowerPC is 32-bit and IA64 is little endian.

      Duh?
      • PowerPC is 32-bit and IA64 is little endian.

        (After a quick check) It does seem like the PowerPC is a 64-bit chip (though maybe linux uses it as a 32-bit for some operations). Also, both PPC and Itanium can act like big-endian or little-endian.
        • Actually, the poster was right - the regular PowerPC is 32-bit.

          From http://penguinppc.org/intro.shtml:
          There are actually two separate ports of Linux to PowerPC: 32-bit and 64-bit. Most PowerPC cpus are 32-bit processors and thus run the 32-bit PowerPC/Linux kernel. 64-bit PowerPC cpus are currently only found in IBM's eServer pSeries and iSeries machines. The smaller 64-bit pSeries and iSeries machines can run the 32-bit kernel, using the cpu in 32-bit mode. This web page concentrates primarily on the 32-bit kernel. See the ppc64 site for details of the 64-bit kernel port.
        • Extremely few PowerPC processors are 64-bit.

          Certainly none you're likely to be compiling software on with any kind of regularity. (By which I mean: Apple's never sold a 64-bit processor. ;^>)
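
          Since both PPC and IA64 are bi-endian parts whose byte order depends on how the OS configures them (Linux/IA64 runs little-endian; HP-UX on the same chip runs big-endian), a runtime check is the safe way to see what you actually have:

              #include <stdio.h>

              int main(void)
              {
                  unsigned int x = 1;
                  /* on a little-endian machine the first byte holds the low-order bits */
                  unsigned char first = *(unsigned char *) &x;

                  printf("%s-endian\n", first ? "little" : "big");
                  return 0;
              }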
  • When I was reading the article, the part about no floating point in the kernel stuck out for me. Is this an absolute, or a "don't do it, it's bad"? I looked at the Mosberger presentation on the IA-64 kernel, and it looked like they were using some of the FP registers for internal state, but it didn't look like all of them.

    Derek
    • It's for performance reasons, I guess - NetBSD does the same for quite a few of its ports (e.g. the m68k and powerpc ones). The kernel does nearly no floating point calculations, and if you do the few it needs with soft-float to avoid the floating point instructions, you manage to keep the contents of the FP registers unchanged. So there's no need to save and restore them when the CPU switches between user and kernel mode (syscalls etc.). Storing/loading n floating point registers in memory for every syscall is quite expensive, you know.

      This of course is not necessary if the CPU has two (at least partly) different sets of FP registers for kernel (supervisor, privileged, ...) and user (unprivileged) mode, or an instruction to quickly exchange (parts of) the FP register sets. (SPARCs do have this to some degree with their concept of register windows.)
    • Re:No FP in kernel? (Score:4, Informative)

      by descubes ( 35093 ) on Thursday May 16, 2002 @04:38PM (#3532917) Homepage
      There are two reasons:

      1/ The massive amount of FP state in IA-64 (128 FP registers). So the Linux kernel is compiled in such a way that only some FP registers can be used by the compiler. This means that on kernel entry and exit, only those FP registers need to be saved/restored. Also, by software convention, these FP registers are "scratch" (modified by a call), so the kernel need not save/restore them on a system call (which is seen as a call by the user code).

      2/ The "software assist" for some FP operations. For instance, the FP divide and square root are not completely implemented in hardware (it's actually dependent on the particular IA-64 implementation, so future chips may implement it). For corner cases such as overflow, underflow, infinites, etc, the processor traps ("floating-point software assist" or FPSWA trap). The IA-64 Linux kernel designers decided to not support FPSWA from the kernel itself, which means that you can't do a FP divide in the kernel. I suspect this is what is more problematic for the application in question (load balancer doing FP computations, probably has some divides in there...)

      XL: Programming in the large [sf.net]
    • The kernel interrupt handlers don't bother to save the state of the FP registers, mainly for performance reasons. That means if you use FP in the kernel you'll probably fubar any user-space process that's using the FPU.


      It's not specific to IA64 or Linux-- PPC and IA32 also work this way, and Windows does the same thing. You can get around it, possibly, by inlining some assembly which saves and restores the FP registers before and after you use them. You need to be careful that the kernel won't switch out of context or go back to userland while you're using FP registers--preemptive kernels make this much harder.


      However, there really aren't many reasons why you would want to use FP in the kernel in the first place. Real-time data acquisition and signal processing is the only example that comes to mind, but you'd be better off using something like RTLinux in that case.
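
      For the save-and-restore approach mentioned above, x86 Linux exposes a helper pair for exactly this pattern. The sketch below is hypothetical in its details (helper names and availability are architecture- and version-specific, and on IA64 an FP divide can still trap to FPSWA, which the kernel doesn't handle), so treat it as the shape of the pattern rather than portable kernel code:

          /* hypothetical sketch: kernel_fpu_begin()/kernel_fpu_end() are
           * the x86 Linux helpers for FP use in kernel mode; other ports
           * differ or lack an equivalent entirely */
          #include <asm/i387.h>

          static unsigned long percent(unsigned long part, unsigned long whole)
          {
              unsigned long result;

              kernel_fpu_begin();     /* save user FP state */
              result = (unsigned long) ((double) part * 100.0 / (double) whole);
              kernel_fpu_end();       /* restore user FP state */
              return result;
          }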

  • by 00_NOP ( 559413 ) on Thursday May 16, 2002 @04:13PM (#3532735) Homepage
    When I started messing about with computers, 8-bit chips were standard on the desktop and 4-bit in the embedded sphere.

    Within four years 16-bit was the emerging standard for the desktop, and four years after that 32-bit was emerging.

    In the 12 years since then, well...

    32-bit rules in both the desktop world and in the embedded world. Can someone tell me why we aren't on 128-bit chips or more by now? Why do 64-bit chips not make it - is this a problem of the physics of mobos, or what?
    • by Chris Burke ( 6130 ) on Thursday May 16, 2002 @05:12PM (#3533114) Homepage
      It's really not that complicated.

      While 4-bit and 8-bit chips were cool and all, no one really thought they were -sufficient-. The limitations of an 8-bit machine hit you in the face, even if you're coding fairly simple stuff. 16 bits was better but, despite an oft-quoted presumption suggesting otherwise, that as well was clearly not going to work for too long.

      Then, 32 bits came around. With 32-bit machines, it was natural to work with up to around 4 GB of memory without any crude hacks. Doing arithmetic on fairly large numbers wasn't difficult either. The limitations of the machine were suddenly a lot farther away. Thus it took longer for those limitations to become a problem. You'll notice that for those spaces where 4GB was a limiting factor the switch to 64 bits happened a long time ago. The reason we are hearing so much about 64 bits now is that the "low end" servers that run on the commodity x86 architecture are getting to the point where 4GB isn't enough anymore. Eventually I imagine desktops will want 64 bits as well. I've already got 1.5GB in the workstation I'm typing this on.

      When will 128 bit chips come about? I don't know, but I'm sure it will take longer than it will take for 64 bits to become mainstream. The reason is simple: Exponential growth. Super-exponential, in a way. 64 bits isn't twice as big as 32 bits, it's 2^32 times bigger. While 2^32 was quite a bit of ram, 2^64 is really, really huge. I won't say that we'll never need more than 2^64 bytes of memory, but I feel confident it won't be any time soon.

      An interesting end to this: At some point, there -is- a maximum bit size. For some generation n with a bit size 2^n and a maximum memory space of 2^2^n you have reached the point where you could use the quantum state of every particle in the universe to store your data, and still have more than enough bits to address it. Though this won't hold true if, say, we discover that there are an infinite number of universes (that we can use to store more data). Heh.
    • I have my doubts about the success of 64-bit chips, too (sounds like we both started messing around with 8080's and 4040's :-)

      The only thing that I see the 64-bit architecture getting you is more addressable memory (2^64 vs. 2^32). Most large scale systems these days are highly distributed; you throw lots of CPUs at the problem. You don't throw a large memory space at the problem.

      You don't need a 64-bit processor for the instruction size. RISC uses less, not more.

      Of course, there are advantages to parallelization of the instruction pipeline, but multi-processor systems or vector processing units (Altivec rocks) are better at this.

      I remember being involved in an early port on a 64-bit DEC Alpha. It was a pain in the butt and the performance gain wasn't enough to justify the expense.

      -ch
      • It is true that all 64 bits really brings to the table is more addressable memory. That's an entirely adequate reason for its adoption.

        Most large scale systems do have a large number of processing nodes, but each node needs to be able to access a large amount of data easily. 4GB isn't that much memory, even for one node. Besides, for inter-node communication, a unified memory space is the easiest method by far. For large multi-way servers (as opposed to Beowulf-style) this is also quite natural.

        64 bits is a good thing. It's already a success. It's only in the low-end server and desktop markets where it still hasn't taken over.
        • I hadn't thought about using the address space as a way of communicating between nodes. It certainly puts a new and interesting twist on larger systems...

          Still, I don't see 64-bit systems working their way into the lower end of the market anytime soon. I distribute my work across many machines. It's cheap and easy to have lots of servers...

          -ch
          • I hadn't thought about using the address space as a way of communicating between nodes. It certainly puts a new and interesting twist on larger systems...

            My reaction depends on how you define "node" or "larger system". I think of a "node" as a small processing unit of a few processors connected by a router to other nodes in some form of network. But the node need not be a standalone machine, and the network need not be cat-5. I think of a "large system" as more of the massively MP systems like mainframes and such that can have a thousand processors in one (big) box.

            Anyway, there are generally two methods of doing IPC in such a system: shared memory, and message passing. While message passing has its advantages, it is much more difficult to program for. Shared memory is simple.

            If you don't consider 1024-way mainframes "low end" (heh), the same argument is true in a 2-way server. You have two processors, each of which may need to access more than 4 GB of memory.

            Still, I don't see 64-bit systems working their way into the lower end of the market anytime soon. I distribute my work across many machines. It's cheap and easy to have lots of servers...

            As soon as your database gets larger than 4GB, you'll want 64 bits. Maybe your front end web server won't need it, but your back end will. And your back end may still be considered "low end".

            But you are right in that it won't be a fast transition. 4GB is still a lot to a lot of people. But that number will inevitably decrease, and perhaps as more 64-bit applications come online in the low-end world sometimes known as Wintel, that will drive the change faster.
      • Yes they will, if only because you can make Linux avoid workarounds like high-mem (instead of mapping in all physical memory) for only $200 on Pricewatch.
      • (sounds like we both started messing around with 8080's and 4040's :-)

        Z80s actually. Was amazed to see that they are still in use - in Gameboys (though that too has gone 32-bit now).
    • 32 bit rules in both the desktop world and in the embedded world. Can someone tell me why we aren't on 128 bit chips or more by now?

      It's Moore's law.

      With computer capacity doubling almost every year or so, you hit the addressing limits of a 4-bit processor pretty quickly; the 4 extra bits of an 8-bit processor will last you about 4 years (78-82). The "lifetime" of 16-bit processors is therefore about 8 years (80-88), and 32-bit should last some 16 years ('88-2004) before you regularly hit the addressing limit of the processor.

      Sure there are some "advanced" processors that are ahead of the curve. The 32-bit 68000 was launched in '80. The Alpha has been 64-bit for quite a while already. But the mainstream will have to move to 64-bit in a couple of years.

      Roger.
  • Surely porting applications written in C and other high-level languages shouldn't be difficult. In that respect Itanium is nothing new: a 64-bit little-endian architecture... Alpha, anyone?
    64-bit machines have been commercially available for at least 10 years; you'd think coders would have gotten used to writing 64-bit clean software by now.
  • nvidia [nvidia.com] already has drivers out [nvidia.com] for Linux/IA64 with some of their higher end cards (quadro line).
  • asked in sincerity: does this mean faster chips, or what?
    • The Itanium isn't really a "faster chip". If you count clock cycles, it's actually slower than the mainstream. It gets its speed from better instruction scheduling, so that each clock cycle does more work. The architecture provides for this in several ways:

      • The processor can assume that a sequence of instructions can all be executed simultaneously, unless it is told otherwise. This passes the burden of figuring this out from the processor (which has to do it on every run) to the compiler (which will have to do it only at compile time, and has a lot more information to work with too).
      • The architecture is designed so that conditional jumps are needed less often. Conditional jumps interfere with efficient instruction scheduling by making it harder to predict which instructions should be executed next, so reducing the need for them is a good thing.
      • There are lots of general-purpose registers, which means that fewer instructions are wasted on just shuffling data between the registers and main memory. This is an old trick, but the x86 architecture is even older... when I look at a compiled x86 program, about half of the instructions seem to be that kind of data shuffling. What makes it worse is that main memory is significantly slower than the processor.
      Note that the first point creates a problem mentioned in the article: it relies on the compiler to determine which instructions can be executed together. It is difficult for a compiler to take full advantage of this, particularly when compiling C code. It's an interesting area for experimentation, though.
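
      As a small illustration of the second bullet, this is the kind of conditional an IA-64 compiler can turn into a predicated instruction instead of a jump, so both arms can be scheduled in the same instruction group (a sketch of the principle, not a guarantee about any particular compiler):

          /* branch-free clamp: a predicating compiler can evaluate this
             without a conditional jump disturbing the instruction schedule */
          int clamp_to_zero(int x)
          {
              return (x < 0) ? 0 : x;
          }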
  • Use Java (Score:3, Funny)

    by matsh ( 30900 ) on Thursday May 16, 2002 @04:57PM (#3533025) Homepage
    And forget about the problem!

    Mats
    • One more dumb Java suggestion.
      He is talking about writing SYSTEM software.
      Have you done any driver development in Java recently?
    • Java might be a good cross-platform development language, but I haven't seen a JRE for IA-64 at this point.

      Is there a JRE for IA-64? How can Java bytecode be executed/interpreted on Itanium systems at this stage?

      Does the IA-32 emulation work with an IA-32 JRE? If so, wouldn't the dual layers of Java and IA-32 emulation make it too slow to be practical?
  • by leek ( 579908 )
    The article is inaccurate.

    First of all, IA-64 is now called IPF (Itanium Processor Family), although I've heard rumors that this is changing again, to a third name.

    Although the initial acceptance of Itanium-based servers and workstations has been slow, there is little doubt that it will eventually succeed in becoming the next-generation platform.

    Actually, as /. readers know, there have been some doubts. Itanium is 5 years late. Right now Itanium ranks lowest in SPEC numbers, and Itanium 2 (McKinley), while it addresses some of the problems, can't expect to compete with Hammer or Yamhill when it comes to integer code.

    For tight floating-point loops, Itanium 2 is great -- 4 FP loads + 2 FMAs per clock. But on integer code with lots of unpredictable branches, the entire IPF architecture leaves a lot to be desired. Speculation and predication were supposed to address that, but it is very hard for compilers to exploit speculation, and predication does not address issues such as the limitations of static scheduling.

    (Also, Itanium 2 removes any benefit that the SIMD instructions had on Itanium, because on Itanium 2, SIMD instructions such as FPMA are split and issued to both FPU ports, negating any performance benefit they had on Itanium. So while Itanium can perform 8 FP ops per clock with FPMA, Itanium 2 can only perform 4 FP ops per clock. This does not look good for the future of IPF implementations. But Itanium 2's bigger memory bandwidth is probably more important than SIMD instructions anyway. Itanium 2 is built more for servers, while Itanium is built more for workstations, which might benefit from SIMD MMU instructions, although the rest of Itanium, and its price/performance, make almost anything else better.)

    Superscalar processors with dynamic scheduling are improving much faster than was expected during IPF's design (witness the P4 and AMD chips). So Itanium's static instruction scheduling design may be more of a liability than an asset today. It puts considerable burden on the compiler.

    The x86 emulation and stacked register windows take up a lot of real estate on the chip, which could be better used for something else.

    The IA64 can be thought of as a traditional RISC CPU with an almost unlimited number of registers.

    Nonsense!!! No CPU has unlimited registers. When writing code by hand or with a compiler, registers are a limited resource which are used up quickly.

    And even though IPF has "stacked" general purpose registers which are windowed in a circular queue with a lazy backing store, these windows are of limited utility in real code. How many times does real code use subroutine calls which can take heavy advantage of register windows, before call branch penalties start to negate any benefit the windowing provides?

    It's a great idea in theory, but windowing just adds to the complexity of the implementation, taking up real estate that could be better used elsewhere.

    The IA64 has another very important property: It is both PA-RISC 8000 compatible and IA32 compatible. You can thus boot Linux/IA64, HP-UX 11.0, and Windows on an Itanium-powered box.

    Absolutely false: PA-RISC emulation was dropped years ago, before the first implementation, although it was originally planned. Also, HP-UX 11.0, which is PA-RISC only, is not supported on IPF. Only HP-UX 11.20 and later are supported. HP-UX 11.22 is the first customer-visible release of HP-UX on IPF.

    The endianism (bit ordering) is still "little," just like on the IA32, so you don't have to worry about that at all.

    Misleading -- the endianism is still a part of the processor state (i.e. context-dependent). This means it can be both big and little endian, and can switch when an OS switches context. HP-UX, for example, is big-endian on IPF.

    The rest of the article had generic ANSI C programming tips which everyone knows already -- nothing specific to IPF.

  • I thought all the fuss over the IA64 was because it was not backward compatible? The chip is used for new HP/UX 11.2 boxes and some big PC servers but there are special versions of Linux / Oracle for that. Is the issue purely performance based? When AMD announced the x86-64 chip named Hammer articles are saying things like:

    "Intel's Itanium processors handle 64 bits, but the Pentium family handles 32 bits."

    "The Hammer family of processors ... will be able to run conventional 32 bit applications ... as well as 64 bit applications"

    The press announcements also got Intel to change its mind and start developing a new 32/64-bit combo chip.

    • The IA64 cpus to my knowledge are backwards compatible - but through some form of hardware emulation.

      This is in contrast to the AMD-64 bit architecture in that the AMD cpus retain the full IA32 register/instruction set, and simply add new instructions and a few 64 bit registers.

      This means the AMD cpus run IA32 code much quicker, but the intel 64 bit cpus are quicker when running native code.

      At least that's how I understand it, in layman's terms.

      smash.
