Porting Linux Software to the IA64 Platform 160
axehind writes "In this Byte.com article, Dr Moshe Bar explains some of the differences between IA32 and IA64. He also explains some things to watch out for when porting applications to the IA64 architecture."
Awesome! (Score:3, Funny)
Can someone please port nethack for us?
- A.P.
Re:Awesome! (Score:3, Informative)
Maybe you could try the patches here [debian.org]?
Re:Awesome! (Score:1)
This is largely thanks to the fact that linux already runs on 64-bit architectures -- Alpha, Sparc, etc. and most apps have been adapted to that already. There's not much conceptual difference in the high-level programmer's view between IA64 and any other 64-bit linux platform.
although to be fair (Score:1)
Major difference (Score:1, Funny)
There are a number of adjustments to make. (Score:1, Funny)
MOSIX + porting (Score:3, Funny)
Re:MOSIX + porting (Score:2, Funny)
He replied,"16-bit code?"
I sighed and moved on...
OT: your sig (Score:1)
How do you do that? It'd be great for off-topic posts like this one (that should be modded to 0 anyway)
Re:MOSIX + porting (Score:2)
Migration of a running process, even when going between identical processors, is expensive. Going even to a similar processor would be more so. (And going from, say, a Sparc to a m68k is totally out of the question, not that you're suggesting that.)
It's *really* hard to justify a policy of process migration in a cluster except with extremely long-running, massively-parallel jobs. For most stuff, you'll waste less time just letting it finish. (GLUnix [berkeley.edu] does do process migration. Note that when you come back to a workstation that's been horfed by GLUnix, you'll be waiting about two minutes before you get your UI back.)
As for *starting* IA32 binaries on an IA64 processor, that's doable, but most cross-platform clustering systems function by keeping binaries for all their constituent processor types and having a hacked shell to convert PATH to the architecture-dependent path. (And by "most cross-platform clustering systems", I mean most that have been designed, since I know of none that work.)
Re:MOSIX + porting (Score:2)
You'd have to have a compiler that was smart enough to recognise when a pointer was cast to an int and then instead cast it to a long.
But now your code has changed: instead of a variable being an int, it's now a long - and this is bound to cause problems elsewhere in your code!
I learnt this lesson long ago when somebody tried to compile a C program I'd written on an Alpha machine, and it complained about casting pointers to ints (I'd wrongly assumed pointers and ints would be the same size on every architecture).
What I do now is to typedef a pointer_t which can be either int or long, and make sure to use that everywhere pointer arithmetic is required.
Re:MOSIX + porting (Score:2)
Ouch.
First: Keep pointers in pointer variables. Try not to cast them back and forth to integer variables.
If you have to, use longs on Unix-like systems - or better, C99's intptr_t/uintptr_t, which the standard actually guarantees can hold a pointer (a plain long is not guaranteed to).
Roger.
Re:MOSIX + porting (Score:2)
(Note that NetBSD's code is primarily arch-independent--the dependent stuff is mostly hardware initialization--and it compiles just fine on quite a wide array of processor architectures.)
Just learning assembly now (Score:1)
Re:Just learning assembly now (Score:1)
Re:Just learning assembly now (Score:1)
I would have to agree with them that there would be a lot more demand for someone programming assembly on an Intel box than on a Mac.
I would say what we have done so far is fun though. Any programming can be if you make it.
Re:Just learning assembly now (Score:1, Funny)
Re:Just learning assembly now (Score:1)
Re:Just learning assembly now (Score:2, Insightful)
There's really not that much demand for any assembly in the industry at large. Even microcode is being done in high-level languages these days. I would wager that most of the people doing assembly coding now are in highly specialized fields, especially embedded programming. So, there isn't necessarily any more demand for x86 assembly programmers than for any other (possibly non-standard) architecture. In my opinion (and this is only opinion), while you should learn an assembly language in school to understand the basic building blocks, the choice of architecture isn't crucial. However, since it's not crucial to learn one or the other, I think they should stick with a simple one. x86 is kind of a mess; MIPS was easy to learn. As far as access to the hardware goes, there are simulators for most processors, which is sufficient for education.
-J
Re:Just learning assembly now (Score:2)
I would wager that most of the people doing assembly coding now are in highly specialized fields, especially embedded programming.
As an embedded systems designer I can tell you that even here in the embedded world, x86 assembly is nowhere to be found, except maybe in the low-level init. Even there, though, it's used to get the environment ready for C and calls a C function to start all the real work, very much in the same manner as the Linux kernel source shows.
Assembly programming is everywhere in the embedded world, just not x86 or anything powerful enough to be able to use a C compiler. I routinely do large Microchip PIC [microchip.com] systems entirely in assembler, but that's only because of one of two reasons: they're not suited for C (the 18Cxxx is a different story now), or I need every last word of program and data space.
Re:Just learning assembly now (Score:1)
The x86 is an odd choice if that's the goal, because it's just kludge upon kludge trying to make an 8-bit processor be 16-bit, then 32, and now 64. I don't know any x86 asm, but it is rather wonky and makes you jump through some hoops, as I am told.
At OSU we are learning SPARC asm. When Sun went from 32 to 64 bit I think that for the most part they just had to change all the register sizes to 64 bit, because it was designed with the future a little bit more in mind than the x86. I'm just taking a really basic class (it's actually called "Introduction to Computer Systems"), so we aren't going to deal with things like the differences between a SPARC and UltraSPARC, but like I said it is apparently an easy transition. I'd imagine that the PPC is probably easy too. (Both are 32-bit big-endian with the possibility of 64-bit in the future designed in, I think)
What's the deal with IA64? (Score:1)
Re:What's the deal with IA64? (Score:1)
Re:What's the deal with IA64? (Score:1)
whether IA64 succeeded or not a few years down the road when it is somewhere in its third generation...
Re:What's the deal with IA64? (Score:2, Informative)
Look for Sun and/or IBM to be selling 8-way Hammer machines by this time next year, according to my Spirit Guides.
Re:What's the deal with IA64? (Score:3, Informative)
What SPEC needs to benchmark is SPECInt-per-$. Considering that commodity Athlons, Pentiums, Celerons and Durons handily beat the extremely expensive Itanic in a straight SPECInt benchmark, what's the advantage of the IA64 performing more efficiently per MHz?
It was very silly of Intel to graft a 386 unit onto the IA64 chip, that's for sure. Fast int ops are important for running databases. They are essential in supporting that 64-bit I/O.
That's been Intel's promise since they announced the chip project many, many, many years ago. They also promised that the chip would be inexpensive. It isn't very fast, it isn't a good value compared to todays 32-bit commodity CPUs.
From what I've read, the Itanic scales in a way very similar to the Hammer -- 8 CPUs at a time, and if you want more then you have to run a pipe between each group of eight. Hammer claims a Hypertransport link between each set with a one-cycle wait state (Intel simply calls theirs a pipe), but really, anything more than 8-way is still going to be the realm of POWER4, UltraSparc, etc., IMO. To tell the truth, the Itanic and the x86-64 will have very similar scalability, but the x86-64 is less than half the die size of the Itanic and better performing. Its NUMA setup gives greater throughput between multiple CPUs in an 8-way or less. It may be ugly on the inside, but both CPUs do about the same thing. And one will be faster and a whole lot cheaper. And don't forget AMD's 4-way chipset. The Taiwanese motherboard makers are going to be moving into that space with this chipset. Commoditization.
Well, just take a 32-bit commodity CPU and kludge it to 64 bits, gain about 25% speedup in doing so and SELL IT FOR AROUND $400 maximum and you will quickly see that the Itanic is sinking! Sure the x86 instruction set is lame, but that's the roll of the dice. If the Motorola 68000 had been chosen by IBM for the PC, we would be singing the same tune. I think the x86 instruction set will be around ad infinitum. Just like the accelerator pedal is on the right side, the clutch is on the left and the brake pedal is in the middle. Totally arbitrary, but it somehow stuck.
The Itanic wasn't a piece of crap 5 years ago, but it is obsolete today. Intel raves about its "266MHz" memory bus and its 66MHz, 64-bit PCI support. You can get this in a commodity motherboard and two Athlon CPUs for around $600. You can get the Pentium 4 with a 133MHz x4 quad-pumped memory bus nowadays. The Itanic's parallel execution method is nice, but why did they wait till the CPU was released before they began making compilers that took advantage of it? Completely useless without the right tools (assuming decent tools can be made).
Re:What's the deal with IA64? (Score:2, Interesting)
Yeah, I mean its not like Intel knows how to develop chips or stay in business or anything.
Re:What's the deal with IA64? (Score:1, Insightful)
Give me one reason anyone will care about the IA64 chips if cheaper faster 64bit chips will already be out.
IA64 is significantly more expensive than the problem it was trying to solve. Oops.
size_t (Score:2, Informative)
return (char *) ((((long) cp) + 15) & ~15);
is not portable.
return (char *) ((((size_t) cp) + 15) & ~15);
is much better.
Re:size_t (Score:1, Informative)
What he doesn't mention is that most Linux people have gcc, and last time I looked, the object code produced by gcc on IA64 ran at about 20% of the speed of the Intel compiler's. This isn't a criticism of gcc; it's just that the IA64 arch. is so different that you absolutely _must_ have the Intel compiler to get any performance out of it.
Re:size_t (Score:1)
Re:size_t (Score:1)
Of course the largest margin is on the Alpha platform, where cxx outperforms gcc by 4x to 5x on floating point and 2x-3x on integer. Ouch!
Re:size_t (Score:1)
Re:size_t (Score:1)
No. ptrdiff_t is a signed type. cp is a pointer, and hence an unsigned type. size_t is the correct type to use for the typecast.
The more things change ..... (Score:3, Informative)
When things were shifting from 16 to 32 bit (seems like just yesterday, oh wait, for M$ it was just yesterday), we had pretty much the same issues. Never had to do any 8 -> 16bit ports (since pretty much everything was either in BASIC, where it didn't matter, or assembler, which you couldn't "port" anyway).
Speaking of assembler, I guess the days of hand crafting code in assembler are really going to take a hit if IA64 ever takes off. The assembler code would be so tied to a specific rev of EPIC that it would be hard to justify the future expense of doing so. It would be interesting to see what type of tools are available for the assembler developer. Does the chip provide any enhanced debugging capabilities (keeping writes straight at a particular point in execution; can you see speculative writes too)? It'd be cool if the assembler IDE could automagically group parallelizable (is that a word?) instructions together as you are coding.
Re:The more things change ..... (Score:4, Informative)
But the example you mention won't actually cause assembly writers any problems: the code won't be tied to a specific version of EPIC.
The IA-64 assembly contains so-called "stop bits", which specify that the instruction(s) following the bit cannot be run in parallel with those before the bit.
Those bits have nothing to do with the actual number of instructions that the machine is capable of handling.
For example, if a program consisted of 100 independent instructions, the assembly would not contain any stop bits. Now the actual machine implementation might only handle 2 or 4 or 8 instructions at a time, but that does not appear anywhere in the assembly. The only requirement is that the machine respect the stop bits.
Now, you might question how it deals with load-value dependencies (ie. load a value into a register, use that register). Obviously, the load and use must be on different sides of a stop bit, but that would still not guarantee correctness. I'm not sure how IA64 actually works (and someone should reply with the real answer) but I imagine that either: a) loads have a fixed max latency, and the compiler is required to insert as many stop bits between the load and the use to ensure correctness, or b) the machine will stall (like current machines).
Either way, the whole point of speculative loads is to avoid that being a problem.
Re:The more things change ..... (Score:2)
I too would be interested in hearing about how the cpu handles the dependencies. The only modern "general purpose" cpu that I know of that _doesn't_ stall is the MIPS.
Re:The more things change ..... (Score:1)
You'd think that starting out with a new architecture/ISA, they'd at least try to keep it simple and then let it grow hairy with age.
You're dead wrong (Score:1)
Furthermore, with the SIMD stuff in the newer x86 processors (MMX, SSE, SSE2), an asm programmer can get huge speedups which the compiler just doesn't know how to exploit. The Intel compiler will use these features in some instances, but far from optimally. Mind you, you have to know the processor well, and for the big wins, you have to optimize for a specific processor, but if you're doing computationally intensive stuff, the gains can be huge.
Re:You're dead wrong (Score:2)
Re:You're dead wrong (Score:2)
With a modern machine executing at 1GHz, let's assume its throughput is close to 1 billion (10^9, I think these things are different in England) instructions per second.
So for any normal application (everything except weapon guidance, etc.) that runs for a few minutes, even if you can save 100 million (dynamic) instructions, you are not going to even notice. And just imagine how hard it is to eliminate 100 M dynamic instructions for a real, non trivial, program.
For IA-64, we expect a lot of performance to come from the fact that it can execute many instructions in parallel (thats what Intel is betting on).
It is much easier for a machine to find ILP, than for a human.
And I'm not claiming that there will not be a few cases where a programmer could write better assembly than the compiler, but that even an expert assembly programmer will get beat 99 times out of 100 (at least) for IA-64.
As an aside, reorganizing data structures (usually to take advantage of the memory hierarchy) is a very hot research topic right now. Reorganizing algorithms, i.e. loop tiling, etc., has been studied for about 10 years, and is finally beginning to make its way into commercial compilers.
It was easier to beat a compiler when it was just doing register allocation for 4 GP registers. Now, as the compilers are getting more and more advanced, it is much much harder to do better than them.
Re:The more things change ..... (Score:2)
But did you notice that on Windows/IA64, even that won't work? They have a "strange P64 model", where ints and longs stay 32 bits and only pointers are 64 bits. So this kind of thing isn't even homogeneous within the architecture (the Windows guys will have to use long longs or __int64s explicitly, I guess).
i386 not designed for servers? (Score:3, Interesting)
Re:i386 not designed for servers? (Score:3, Interesting)
What's really funny is that I have an Intel propaganda book for the "brand new 80386." It spends two whole chapters talking about how the 386 is the perfect CPU for LAN servers. Of course, it also had to spend almost that much space describing what a LAN is and what a server might do, since very few people had ever heard of a LAN at that point, much less had one.
Re:i386 not designed for servers? (Score:1)
Re:i386 not designed for servers? (Score:2)
Re:i386 not designed for servers? (Score:2)
Re:i386 not designed for servers? (Score:3, Interesting)
Multics was pretty much tied to its unique mainframe hardware with loads more weird addressing and virtual memory management features that would never have fit the paltry 275,000 transistors of the 80386. Also, at the time (1985) Multics was a legacy system; Unix was seen as the operating system of the future, in particular because it was portable to microprocessors and didn't require much special hardware.
Re:i386 not designed for servers? (Score:2)
Debian on the IA64 (Score:5, Informative)
See here [debian.org] for more details
Re:Debian on the IA64 (Score:3, Informative)
Topic for #debian-ia64 is 95.70% up-to-date, 96.07% if also counting uploaded pkgs
There are over 8000 packages for i386 (the most up to date architecture) - ia64 currently has about 7650 or so packages built
More stats are available at buildd.debian.org/stats/ [debian.org]
Re:Debian on the IA64 (Score:1)
PA-RISC and IA32 Native Execution (Score:3, Interesting)
Does anyone remember the leaked benchmarks that showed the itanic executing IA32 code at roughly 10% of the speed of an equivalently-clocked PIII?
I wonder how it shapes up on PA-RISC performance?
It has to offer some sort of advantage over existing chips, or no one will buy it.
On the other hand, maybe its tremendous heat dissipation will reduce drastically when they remove all that circuitry for running IA32 and PA-RISC code.
Which leads me to think, why didn't they invest the time and money in software technology like dynamic recompilation, which Apple did very successfully when they made the transition from 68k to PPC?
Re:PA-RISC and IA32 Native Execution (Score:2)
IA-64 machines also offer firmware emulation of IA-32 system instructions. This allows you, in theory, to boot an unmodified IA-32 OS. I've never used it myself, however.
Last, the PA-RISC support is a piece of software integrated in HP-UX. There's no help from the hardware, except numerous design similarities (IA-64 began its life as HP PA-Wide Word). So you won't be able to run PA-RISC Linux binaries on IA-64 Linux any time soon...
Re:PA-RISC and IA32 Native Execution (Score:2)
So, PA-RISC is native via design. The x86 instructions were tacked on - originally they were supposed to be an entire processor, but that proved to be too costly. You have to remember that x86 is hardly needed, as it's mostly important for developers porting and testing applications, and for Microsoft to run 'legacy' applications. McKinley has a newer design that should boost the x86 performance substantially. If extra is needed, I'm sure something similar to Sun's x86 PCI card will be devised.
As to heat and the rest, taking out the x86 would help of course. From what I've heard, the control logic on current IA-64 chips is actually smaller than that of the Pentium 4, which was the point of the architecture - simplify. Simplifying meant spending more time on higher level logic rather than on OOO techniques, etc., that could be done via software. The chip is so large due to *lots* of cache.
Anyways, a few good links are:
here [209.67.253.150] and here [clemson.edu].
Why can't i386 assembler be used? (Score:3, Insightful)
I don't see what is so obvious - isn't one of the selling points of Itanium its backward i386 compatibility? Even if running the 64-bit version of Linux it should still be possible to switch the processor into i386-compatible mode to execute some 386 opcodes and then back again. After all, the claim is that old Linux/i386 binaries will continue to work. Or is there some factor that means the choice of 32 bit vs 64 bit code must be made process-by-process?
Interesting question: which would run faster, hand-optimized i386 code running under emulation on an Itanium, or native IA-64 code produced by gcc? They say that writing a decent IA-64 compiler is difficult, and I'm sure Intel has put a lot of work into making the backwards compatibility perform at a reasonable speed (if not quite as fast as a P4 at the same clock).
Re:Why can't i386 assembler be used? (Score:4, Interesting)
If I remember clearly, the 386 instructions are interpreted instead of being on the chip. That means that those instructions will execute a lot slower. It would work, but it wouldn't work well. It's nice because you could transition to IA64 now and wait for the new software to arrive.
Personally, I don't think that selling point is that worthwhile, but I'll let Intel do their marketing without me.
Re:Why can't i386 assembler be used? (Score:1)
:)
Look up what happened when:
1. 80286 was emulating 8086 in protected mode
2. Pentium Pro was running 16-bit code
Re:Why can't i386 assembler be used? (Score:2)
Re:Why can't i386 assembler be used? (Score:1)
The end result is that it wasn't too difficult to move architectures, even though the Alpha does not know the VAX instruction set and no interpreter was provided.
The only gotcha is that Digital had to provide some special extra instructions to implement some primitives used by the OS, such as interlocked queues.
Intel is primarily a hardware company so they would tend to ignore software solutions, but the one-architecture approach kept the Alpha from getting too complicated.
Re:Why can't i386 assembler be used? (Score:3, Informative)
In any case, what makes it difficult to write an IA-64 compiler is taking advantage of the things that the new instruction set lets you tell the processor. It's not hard to write code for the IA64 that's as good as some code for the i386. It's just that you won't get the benefits of the new architecture until you write better code, and the processors aren't optimized for running code that doesn't take advantage of the architecture.
Re:Why can't i386 assembler be used? (Score:2)
Re:Why can't i386 assembler be used? (Score:4, Informative)
Yes. Compatibility. Nothing more. Your old apps will run, but not fast. It's basically a bullet point to try to make the transition to Itanium sound more palatable.
Or is there some factor that means the choice of 32 bit vs 64 bit code must be made process-by-process?
It is highly likely that the procedure to change from 64 to 32 bit mode is a privileged operation, meaning you need operating system intervention. Which means the operating system would have to provide an interface for user code to switch modes, just so a small block of inline assembly can be executed. I highly doubt such an interface exists (ick... IA-64 specific syscalls).
Interesting question: which would run faster, hand-optimized i386 code running under emulation on an Itanium, or native IA-64 code produced by gcc?
An interesting question, but one for which the answer is clear: gcc will be faster, and by a lot. Itanium is horrible at 32-bit code. It isn't designed for it, it has to emulate it, and it stinks a lot at it.
They say that writing a decent IA-64 compiler is difficult, and I'm sure Intel has put a lot of work into making the backwards compatibility perform at a reasonable speed (if not quite as fast as a P4 at the same clock).
Writing the compiler is difficult, but a surmountable task. And your surety does not enhance IA-64 32-bit support in any way. It is quite poor, well behind a P4 at the same clock, and of course at a much lower clock. Even with a highly sub-optimal compiler and the top-notch x86 assembly, you're better off going native on Itanium.
Re:Why can't i386 assembler be used? (Score:1)
The article was referring to inline assembly in the kernel code. The IA32 compatibility built into the IA64 CPU is strictly for user mode; all system functions are executed in IA64 mode. Although it would be technically possible to enter kernel mode, switch to the IA32 instruction set, exec some IA32 code and then switch back, in practice this is unfeasible. The IA32 code would be using different data structures and it couldn't call any of the kernel internal routines without somehow finding a way to switch from IA32 to IA64 mode and back on each subroutine call.
The problems of mixing IA32 and IA64 code, especially inside the kernel, are just too difficult and provide little benefit. For these reasons the Linux/IA64 team decided not to support this.
Re:Why can't i386 assembler be used? (Score:1)
NULL barfage (Score:3, Informative)
Re:NULL barfage (Score:3)
Indeed. In the particular case in question, passing a pointer to printf(), this should be (void *) 0 or (void *) NULL.
At least he's right when he says "The following is coded wrong." :-)
Bar is also mistaken on at least one other ANSI/ISO C-related point. He writes:
In fact, the Z modifier in the %Zu construction is non-standard. There was no portable way to print a size_t in the original ANSI/ISO C (C89). C99 (the 1999 revision of the ISO C standard) uses a lower-case z instead, so portable code should use %zu instead. Of course, the kernel is intended for compilation with gcc, not just any compiler, so Bar's example is correct for the kernel but is not (as he claims) standard.
How is that different from a PPC? (Score:3, Interesting)
Re:How is that different from a PPC? (Score:2)
Re:How is that different from a PPC? (Score:2)
PowerPC is 32-bit and IA64 is little endian.
Duh?
Re:How is that different from a PPC? (Score:2)
(After a quick check) It does seem like the PowerPC is a 64-bit chip (though maybe linux uses it as a 32-bit for some operations). Also, both PPC and Itanium can act like big-endian or little-endian.
Re:How is that different from a PPC? (Score:1)
From http://penguinppc.org/intro.shtml:
There are actually two separate ports of Linux to PowerPC: 32-bit and 64-bit. Most PowerPC cpus are 32-bit processors and thus run the 32-bit PowerPC/Linux kernel. 64-bit PowerPC cpus are currently only found in IBM's eServer pSeries and iSeries machines. The smaller 64-bit pSeries and iSeries machines can run the 32-bit kernel, using the cpu in 32-bit mode. This web page concentrates primarily on the 32-bit kernel. See the ppc64 site for details of the 64-bit kernel port.
Re:How is that different from a PPC? (Score:2)
Certainly none you're likely to be compiling software on with any kind of regularity. (By which I mean: Apple's never sold a 64-bit processor.)
No FP in kernel? (Score:1)
Derek
Re:No FP in kernel? (Score:1)
This of course is not necessary if the CPU has two (at least partly) different sets of FP registers for kernel (supervisor, privileged) and user mode.
Re:No FP in kernel? (Score:4, Informative)
1/ The massive amount of FP state in IA-64 (128 FP registers). So the linux kernel is compiled in such a way that only some FP registers can be used by the compiler. This means that on kernel entry and exit, only those FP registers need to be saved/restored. Also, by software conventions, these FP registers are "scratch" (modified by a call), so the kernel need not save/restore them on a system call (which is seen as a call by the user code)
2/ The "software assist" for some FP operations. For instance, the FP divide and square root are not completely implemented in hardware (it's actually dependent on the particular IA-64 implementation, so future chips may implement it). For corner cases such as overflow, underflow, infinities, etc., the processor traps ("floating-point software assist" or FPSWA trap). The IA-64 Linux kernel designers decided not to support FPSWA from the kernel itself, which means that you can't do an FP divide in the kernel. I suspect this is what is most problematic for the application in question (a load balancer doing FP computations probably has some divides in there...)
XL: Programming in the large [sf.net]
Re:No FP in kernel? (Score:2)
It's not specific to IA64 or Linux-- PPC and IA32 also work this way, and Windows does the same thing. You can get around it, possibly, by inlining some assembly which saves and restores the FP registers before and after you use them. You need to be careful that the kernel won't switch out of context or go back to userland while you're using FP registers--preemptive kernels make this much harder.
However, there really aren't many reasons why you would want to use FP in the kernel in the first place. Real-time data acquisition and signal processing is the only example that comes to mind, but you'd be better off using something like RTLinux in that case.
Will 64 bit chips ever make it? (Score:3, Interesting)
Within four years 16-bit was the emerging standard for the desktop, and four more after that 32-bit was emerging.
In the 12 years since then, well...
32 bit rules in both the desktop world and in the embedded world. Can someone tell me why we aren't on 128 bit chips or more by now? Why do 64 bit chips not make it - is this a problem of the physics of mobos or what?
Re:Will 64 bit chips ever make it? (Score:5, Insightful)
While 4-bit and 8-bit chips were cool and all, no one really thought they were -sufficient-. The limitations of an 8-bit machine hit you in the face, even if you're coding fairly simple stuff. 16 bits was better but, despite an oft-quoted presumption suggesting otherwise, that as well was clearly not going to work for too long.
Then, 32 bits came around. With 32-bit machines, it was natural to work with up to around 4 GB of memory without any crude hacks. Doing arithmetic on fairly large numbers wasn't difficult either. The limitations of the machine were suddenly a lot farther away. Thus it took longer for those limitations to become a problem. You'll notice that for those spaces where 4GB was a limiting factor the switch to 64 bits happened a long time ago. The reason we are hearing so much about 64 bits now is that the "low end" servers that run on the commodity x86 architecture are getting to the point where 4GB isn't enough anymore. Eventually I imagine desktops will want 64 bits as well. I've already got 1.5GB in the workstation I'm typing this on.
When will 128 bit chips come about? I don't know, but I'm sure it will take longer than it will take for 64 bits to become mainstream. The reason is simple: Exponential growth. Super-exponential, in a way. 64 bits isn't twice as big as 32 bits, it's 2^32 times bigger. While 2^32 was quite a bit of ram, 2^64 is really, really huge. I won't say that we'll never need more than 2^64 bytes of memory, but I feel confident it won't be any time soon.
An interesting end to this: At some point, there -is- a maximum bit size. For some generation n with a bit size 2^n and a maximum memory space of 2^2^n you have reached the point where you could use the quantum state of every particle in the universe to store your data, and still have more than enough bits to address it. Though this won't hold true if, say, we discover that there are an infinite number of universes (that we can use to store more data). Heh.
Re:Will 64 bit chips ever make it? (Score:1)
The only thing that I see the 64-bit architecture getting you is more addressable memory (2^64 vs. 2^32). Most large scale systems these days are highly distributed; you throw lots of CPUs at the problem. You don't throw a large memory space at the problem.
You don't need a 64-bit processor for the instruction size. RISC uses less, not more.
Of course, there are advantages to parallelization of the instruction pipeline, but multi-processor systems or vector processing units (Altivec rocks) are better at this.
I remember being involved in an early port on a 64-bit DEC Alpha. It was a pain in the butt and the performance gain wasn't enough to justify the expense.
-ch
Re:Will 64 bit chips ever make it? (Score:2)
Most large scale systems do have a large number of processing nodes, but each node needs to be able to access a large amount of data easily. 4GB isn't that much memory, even for one node. Besides, for inter-node communication, a unified memory space is the easiest method by far. For large multi-way servers (as opposed to Beowulf-style) this is also quite natural.
64 bits is a good thing. It's already a success. It's only in the low-end server and desktop markets where it still hasn't taken over.
Re:Will 64 bit chips ever make it? (Score:1)
Still, I don't see 64-bit systems working their way into the lower end of the market anytime soon. I distribute my work across many machines. It's cheap and easy to have lots of servers...
-ch
Re:Will 64 bit chips ever make it? (Score:2)
My reaction depends on how you define "node" or "larger system". I think of a "node" as a small processing unit of a few processors connected by a router to other nodes in some form of network. But the node need not be a standalone machine, and the network need not be cat-5. I think of a "large system" as more of the massively MP systems like mainframes and such that can have a thousand processors in one (big) box.
Anyway, there are generally two methods of doing IPC in such a system: shared memory, and message passing. While message passing has its advantages, it is much more difficult to program for. Shared memory is simple.
If you don't consider 1024-way mainframes "low end" (heh), the same argument is true in a 2-way server. You have two processors, each of which may need to access more than 4 GB of memory.
Still, I don't see 64-bit systems working their way into the lower end of the market anytime soon. I distribute my work across many machines. It's cheap and easy to have lots of servers...
As soon as your database gets larger than 4GB, you'll want 64 bits. Maybe your front end web server won't need it, but your back end will. And your back end may still be considered "low end".
But you are right, in that it won't be a fast transition. 4GB is still a lot to a lot of people. But that number will inevitably decrease, and perhaps as more 64-bit applications come online in the low-end world sometimes known as Wintel, that will drive the change faster.
Re:Will 64 bit chips ever make it? (Score:2)
Re:Will 64 bit chips ever make it? (Score:1)
Z80s actually. Was amazed to see that they are still in use - in Game Boys (though that too has gone 32-bit now)
Re:Will 64 bit chips ever make it? (Score:2)
It's Moore's law.
With computer capacity doubling almost every year or so, you hit the addressing limits of a 4-bit processor pretty quickly; the 4 extra bits of an 8-bit processor will last you about 4 years (78-82). The "life time" of 16-bit processors is therefore about 8 years (80-88), and 32-bit should last some 16 years ('88-2004) before you regularly hit the addressing limit of the processor.
Sure, there are some "advanced" processors that are ahead of the curve. The 32-bit 68000 was launched in '80. The Alpha has been 64-bit for quite a while already. But the mainstream will have to move to 64 bits in a couple of years.
Roger.
Porting applications. (Score:1)
64-bit machines have been commercially available for at least 10 years; you'd think coders would have got used to writing 64-bit clean software by now.
It's pretty cool... (Score:2)
Re:It's pretty cool... (Score:2)
You might try getting to know the facts before posting, even as a coward.
stupid question (Score:1)
stupid answer (Score:1)
The Itanium isn't really a "faster chip". If you count clock cycles, it's actually slower than the mainstream. It gets its speed from better instruction scheduling, so that each clock cycle does more work. The architecture provides for this in several ways (predication, speculation, explicit parallelism in instruction bundles, and a large register file).
Use Java (Score:3, Funny)
Mats
Re:Use Java (Score:1)
He is speaking about writing SYSTEM software?
Have you done any driver development on Java recently?
Is there a JRE on IA-64? (Score:1)
Is there a JRE for IA-64? How can Java bytecode be executed/interpreted on Itanium systems at this stage?
Does the IA-32 emulation work with a IA-32 JRE? If so, wouldn't the dual layers of Java and IA-32 emulation make it too slow to be practical?
Article is inaccurate (Score:2, Interesting)
First of all, IA-64 is now called IPF (Itanium Processor Family), although I've heard rumors that this is changing again, to a third name.
Although the initial acceptance of Itanium-based servers and workstations has been slow, there is little doubt that it will eventually succeed in becoming the next-generation platform.
Actually, as /. readers know, there have been
some doubts. Itanium is 5 years late. Right now Itanium ranks lowest in
SPEC numbers, and Itanium 2 (McKinley), while
it addresses some of the problems, can't expect
to compete with Hammer or Yamhill when it comes
to integer code.
For tight floating-point loops, Itanium 2 is great -- 4 FP loads + 2 FMAs per clock. But on integer code with lots of unpredictable branches, the entire IPF architecture leaves a lot to be desired. Speculation and predication were supposed to address that, but it is very hard for compilers to exploit speculation, and predication does not address issues such as the limitations of static scheduling.
(Also, Itanium 2 removes any benefit that the SIMD instructions had on Itanium, because on Itanium 2, SIMD instructions such as FPMA are split and issued to both FPU ports, negating any performance benefit they had on Itanium. So while Itanium can perform 8 FP ops per clock with FPMA, Itanium 2 can only perform 4 FP ops per clock. This does not look good for the future of IPF implementations. But Itanium 2's bigger memory bandwidth is probably more important than SIMD instructions anyway. Itanium 2 is built more for servers, while Itanium is built more for workstations, which might benefit from SIMD MMU instructions, although the rest of Itanium, and its price/performance, make almost anything else better.)
Superscalar processors with dynamic scheduling are improving much faster than was expected during IPF's design (witness the P4 and AMD chips). So Itanium's static instruction scheduling design may be more of a liability than an asset today. It puts considerable burden on the compiler.
The x86 emulation and stacked register windows take up a lot of real estate on the chip, which could be better used for something else.
The IA64 can be thought of as a traditional RISC CPU with an almost unlimited number of registers.
Nonsense!!! No CPU has unlimited registers. When writing code by hand or with a compiler, registers are a limited resource which are used up quickly.
And even though IPF has "stacked" general purpose registers which are windowed in a circular queue with a lazy backing store, these windows are of limited utility in real code. How many times does real code use subroutine calls which can take heavy advantage of register windows, before call branch penalties start to negate any benefit the windowing provides?
It's a great idea in theory, but windowing just adds to the complexity of the implementation, taking up real estate that could be better used elsewhere.
The IA64 has another very important property: It is both PA-RISC 8000 compatible and IA32 compatible. You can thus boot Linux/IA64, HP-UX 11.0, and Windows on an Itanium-powered box.
Absolutely false: PA-RISC emulation was dropped years ago, before the first implementation, although it was originally planned. Also, HP-UX 11.0, which is PA-RISC only, is not supported on IPF. Only HP-UX 11.20 and later are supported. HP-UX 11.22 is the first customer-visible release of HP-UX on IPF.
The endianism (bit ordering) is still "little," just like on the IA32, so you don't have to worry about that at all.
Misleading -- the endianism is still a part of the processor state (i.e. context-dependent). This means it can be both big and little endian, and can switch when an OS switches context. HP-UX, for example, is big-endian on IPF.
The rest of the article had generic ANSI C programming tips which everyone knows already -- nothing specific to IPF.
IA64 backward compatible? (Score:1)
"Intel's Itanium processors handle 64 bits, but the Pentium family handles 32 bits."
"The Hammer family of processors ... will be able to run conventional 32 bit applications ... as well as 64 bit applications"
The press announcements also got Intel to change its mind and start developing a new 32/64-bit combo chip.
Re:IA64 backward compatible? (Score:1)
This is in contrast to the AMD 64-bit architecture, in that the AMD CPUs retain the full IA32 register/instruction set and simply add new instructions, extending the registers to 64 bits and adding a few new ones.
This means the AMD CPUs run IA32 code much quicker, but the Intel 64-bit CPUs are quicker when running native code.
At least that's how I understand it, in layman's terms.
smash.
Re:printf() (Score:1)
Designing the rest of the API, writing the myprintf() function and dealing with macros with variable number of parameters is left as an exercise to the implementor.
Re:printf() (Score:2)
The way it *should* work, in a perfect universe, would be:
printf( "%{pid}\n", pid );
printf( "%{uid_t}\n", getuid() );
etc.
The way it *does* work in the little universe where I am the king [sf.net] is:
This way is arguably better, because it's type safe and easier on the users. Of course, since it's not Compatible With C, it will never be used by anybody.
Re:Has anyone thought of... (Score:1)