Porting Linux Software to the IA64 Platform

axehind writes "In this Byte.com article, Dr. Moshe Bar explains some of the differences between IA32 and IA64, along with some things to watch out for when porting applications to the IA64 architecture."
  • by Anonymous Coward on Thursday May 16, 2002 @03:47PM (#3532258)
    AND IT FEELS GOOD!
  • by Ed Avis ( 5917 ) <ed@membled.com> on Thursday May 16, 2002 @04:13PM (#3532385) Homepage
    From the article:
    Quite obviously, inline assembly must be rewritten from scratch.

    I don't see what is so obvious - isn't one of the selling points of Itanium its backward i386 compatibility? Even when running the 64-bit version of Linux, it should still be possible to switch the processor into i386-compatible mode to execute some 386 opcodes and then switch back. After all, the claim is that old Linux/i386 binaries will continue to work. Or is there some factor that means the choice of 32-bit vs. 64-bit code must be made process-by-process? (The first sketch after the comments shows the usual build-time workaround.)

    Interesting question: which would run faster, hand-optimized i386 code running under emulation on an Itanium, or native IA-64 code produced by gcc? They say that writing a decent IA-64 compiler is difficult, and I'm sure Intel has put a lot of work into making the backwards compatibility perform at a reasonable speed (if not quite as fast as a P4 at the same clock).

  • by Wildcat J ( 552122 ) on Thursday May 16, 2002 @05:02PM (#3532666)
    When I was in college, the only assembly programming we did was for MIPS. For our compiler project, we originally emitted MIPS assembly and then retargeted it to SPARC. I never once had to do any x86 assembly in school.

    There's really not that much demand for assembly in the industry at large. Even microcode is being written in high-level languages these days. I would wager that most of the people doing assembly coding now are in highly specialized fields, especially embedded programming. So there isn't necessarily any more demand for x86 assembly programmers than for any other (possibly non-standard) architecture. In my opinion (and this is only opinion), while you should learn an assembly language in school to understand the basic building blocks, the choice of architecture isn't crucial. However, since it's not crucial to learn one or the other, I think schools should stick with a simple one. x86 is kind of a mess; MIPS was easy to learn. As for access to the hardware, there are simulators for most processors, and those are sufficient for education.

    -J

  • by Anonymous Coward on Thursday May 16, 2002 @05:35PM (#3532899)
    Look, IA64 and AMD's 64-bit instruction set are two very different things. One will succeed and one will fail; if the market doesn't dictate this, Microsoft will. The IA64 products may never reach the performance of the competing chips, and the price-to-performance ratio will NEVER touch that of the AMD 64-bit chips.

    Give me one reason anyone will care about the IA64 chips if cheaper, faster 64-bit chips are already out.

    IA64 is significantly more expensive than the problem it was trying to solve. Oops.
  • by Chris Burke ( 6130 ) on Thursday May 16, 2002 @06:12PM (#3533114) Homepage
    It's really not that complicated.

    While 4-bit and 8-bit chips were cool and all, no one really thought they were -sufficient-. The limitations of an 8-bit machine hit you in the face, even if you're coding fairly simple stuff. 16 bits was better, but despite an oft-quoted presumption suggesting otherwise, it clearly wasn't going to work for long either.

    Then 32 bits came around. With 32-bit machines, it was natural to work with up to around 4 GB of memory without any crude hacks. Doing arithmetic on fairly large numbers wasn't difficult either. The limitations of the machine were suddenly a lot farther away, so it took longer for them to become a problem. You'll notice that in those spaces where 4 GB was a limiting factor, the switch to 64 bits happened a long time ago. The reason we are hearing so much about 64 bits now is that the "low end" servers that run on the commodity x86 architecture are getting to the point where 4 GB isn't enough anymore. Eventually I imagine desktops will want 64 bits as well. I've already got 1.5 GB in the workstation I'm typing this on.

    When will 128-bit chips come about? I don't know, but I'm sure it will take longer than it took for 64 bits to become mainstream. The reason is simple: exponential growth. Super-exponential, in a way. A 64-bit address space isn't twice as big as a 32-bit one, it's 2^32 times bigger. While 2^32 bytes was quite a bit of RAM, 2^64 is really, really huge (the second sketch after the comments puts numbers on this). I won't say that we'll never need more than 2^64 bytes of memory, but I feel confident it won't be any time soon.

    An interesting end to this: at some point, there -is- a maximum useful bit size. For some generation n, with a bit size of 2^n and a maximum memory space of 2^(2^n) bytes, you reach the point where you could use the quantum state of every particle in the universe to store your data and still have more than enough bits to address it. Though this won't hold true if, say, we discover that there are an infinite number of universes (that we can use to store more data). Heh.
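
A minimal sketch of the inline-assembly point raised in the second comment above, assuming GCC; the byte-swap routine and the #ifdef guard are illustrative, not taken from the article. i386 inline assembly is simply not valid input when compiling natively for IA-64, so a port either supplies new IA-64 assembly or falls back to portable C behind an architecture guard:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: a 32-bit byte swap. The i386 inline assembly
     * below means nothing to an IA-64 compiler, so a port must supply
     * native assembly or drop to plain C behind a guard like this. */
    static uint32_t swap32(uint32_t x)
    {
    #if defined(__i386__) && defined(__GNUC__)
        __asm__("bswap %0" : "+r"(x));   /* BSWAP, available since the 486 */
        return x;
    #else
        /* Portable path: correct on IA-64 and everywhere else. */
        return (x >> 24) | ((x >> 8) & 0x0000ff00U)
             | ((x << 8) & 0x00ff0000U) | (x << 24);
    #endif
    }

    int main(void)
    {
        printf("%08x\n", swap32(0x12345678U));   /* prints 78563412 */
        return 0;
    }

The hardware's i386 compatibility mode doesn't rescue inline assembly: it runs whole i386 binaries under emulation, and, as the article implies, the compiler has no mechanism for switching modes around a single asm block inside an otherwise native IA-64 program.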

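And a small worked version of the address-space arithmetic in the last comment, as a sketch; the figures are just exact powers of two (2^64 = (2^32)^2, so doubling the pointer width squares the space):

    #include <stdio.h>

    /* Print the 32-bit and 64-bit address-space sizes and their ratio.
     * Values are held in doubles because 2^64 itself overflows uint64_t. */
    int main(void)
    {
        double space32 = 4294967296.0;           /* 2^32 bytes =  4 GiB */
        double space64 = 18446744073709551616.0; /* 2^64 bytes = 16 EiB */

        printf("32-bit space: %.0f bytes (%.0f GiB)\n",
               space32, space32 / (1024.0 * 1024.0 * 1024.0));
        printf("64-bit space: %.3e bytes (%.3e GiB)\n",
               space64, space64 / (1024.0 * 1024.0 * 1024.0));
        printf("ratio: %.3e, i.e. 2^32 times larger\n", space64 / space32);
        return 0;
    }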