Intel's Failed 64-bit Itanium CPUs Die Another Death as Linux Support Ends (arstechnica.com) 78
Officially, Intel's Itanium chips and their IA-64 architecture died back in 2021, when the company shipped its last processors. But failed technology often dies a million little deaths. From a report: To name just a few: Itanium also died in 2013, when Intel effectively decided to stop improving it; in 2017, when the last new Itanium CPUs shipped; in 2020, when the last Itanium-compatible version of Windows Server stopped getting updates; and in 2003, when AMD introduced a 64-bit processor lineup that didn't break compatibility with existing 32-bit x86 operating systems and applications.
Itanium is dying another death in the next version of the Linux kernel. According to Phoronix, all code related to Itanium support is being removed from the kernel in the upcoming 6.7 release after several months of deliberation. Linus Torvalds removed some 65,219 lines of Itanium-supporting code in a commit earlier this week, giving the architecture a "well-earned retirement as planned."
Itanic (Score:5, Funny)
https://en.wiktionary.org/wiki... [wiktionary.org]
Re: (Score:2)
Turns out that VLIW (Very Long Instruction Word) computing is really, really, really hard to write compilers for [ycombinator.com] and that pretty much kept Itanium from ever being successful in a general computing environment
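A minimal sketch of the problem in plain C (illustrative only, not from the linked thread): on a VLIW design the compiler, not the hardware, has to prove that operations are independent before it can bundle them into the same issue slot, and something as mundane as possible pointer aliasing blocks that proof, whereas an out-of-order x86 core just disambiguates the addresses at runtime.

/* Illustrative C, nothing Itanium-specific. Without `restrict` the compiler
 * must assume dst and src may overlap, so it cannot hoist later loads above
 * earlier stores and has to schedule the loop conservatively -- the kind of
 * compile-time uncertainty that starved IA-64's wide issue slots. */
void scale(int *dst, const int *src, int n, int k)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;   /* src[i] might alias the dst[i-1] written last iteration */
}

/* With restrict, the independence guarantee is explicit and the compiler is
 * free to software-pipeline or bundle the loads, multiplies, and stores. */
void scale2(int *restrict dst, const int *restrict src, int n, int k)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}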
Re: (Score:2)
Sounds like a job for AI.
Re: (Score:2)
Which one, the weird one or the one they just call Al?
Re: (Score:2)
Yes.
Re: (Score:2)
Offloading from the CPU (runtime) to the compiler (compile time) still sounds like a good idea to me. The compiler can also be improved and software re-compiled.
That CPU you bought last year, by contrast, will suck forever, even though the vendor can push some microcode updates (which usually reduce performance while working around bugs).
Re: (Score:2)
Oh this is worth a click. Collect your "+5, funny" you magnificent bastard.
It seemed like a good idea at the time (Score:2)
But Intel vastly underestimated the difficulties involved in making it work as imagined
I admire their bravery and creativity and remind critics that many innovative new ideas fail
We need more companies willing to take expensive chances on promising, but difficult, ideas
Re: (Score:3)
I think the most recent example of this is the Apple M1, where they bet on a very wide execution unit.
It's said in some places that for every extra parallel instruction you add to a superscalar chip, the size of the decode logic must double if you want to keep its per-instruction performance.
And they went from the now-standard 4 parallel instructions to 8, which by that rule (doubling four times over) would require either decode logic 16 times bigger or decode logic that is worse per instruction.
My bet
Re: It seemed like a good idea at the time (Score:2)
Apple also controls the software ecosystem, so they could provide a seamless, and in many cases transparent, upgrade path. Most old software runs, and universal binaries hid some complexity. I also don't remember Intel working so hard with third parties to have software ported and optimised at launch time. Likewise, AMD provided a compatible solution, so users had no problem with the transition.
Re: (Score:3)
Re: It seemed like a good idea at the time (Score:2)
Yes, exactly: without Microsoft's help, Intel didn't really have a chance.
Re: (Score:2)
But PH-UX was, well, PH-UX.
Come to think of it, PH-UX and the Itanic were pretty much made for each other.
Re: (Score:3)
Apple had an easier time because even the M3 today is still ARM compatible, so every ARM AArch64 binary produc
Re: (Score:2)
>It's said in some places that every extra parallel instruction you add to a super scalar chip, the size of the decode logic must double if you want to keep the same performance of the decode logic.
As far as I know, this simply isn't true. Also, the decode logic is small compared to the rest of the CPU, so even if it is true, it wouldn't be such a big deal.
I'm not up on implementation of modern ARMs, so I might be wrong, but I work for a CPU vendor and in my domain I think it's right for current generati
Re: (Score:2)
Apple's M CPUs include a GPU and all the system RAM on the processor. The decode logic has got to be such a ridiculously small percentage of the chip that it's not even worth thinking about the size of it.
Re: (Score:2)
In any modern amd64 processor instruction decode is so small that even if you doubled the size you'd still have to squint to see it. And that has to handle x86 and amd64!
Re: (Score:3)
I'm not sure it was a good idea at the time. When it was being developed, there were a lot of discussions about Itanium here and elsewhere and the general consensus was that going to a VLIW processor was premature because the compilers were not yet up to the task and there wasn't a real path forward that would bring a significant performance improvement.
It was sad watching HP retire PA-RISC to go all in on Itanium. The PA-RISC chips had huge (for the time) caches and were the best architecture (in my opini
Re: (Score:3)
Re:It seemed like a good idea at the time (Score:4, Interesting)
It was a good idea at the time, but Intel did not know AMD was busy and was going to release a 64-bit CPU within 18 months of the first decent Itaniums being shipped.
AMD's 64-bit CPU sneak attack had a lot to do with Itanium not being viable. It removed most of the Itanium advantages and left Itanium with the limited-production price disadvantage that it was never able to overcome. I am not going to buy an Itanium that costs several thousand more and always uses full power over a 64-bit AMD that uses way less power at idle.
The 2004 Itaniums had ZERO power savings. I put a power meter on multiple boxes and they used the same wattage, ±10 watts, whether the OS was idle or running a benchmark. So there are the extra power costs and cooling costs. And compared to an equivalent dual-socket AMD, the Itaniums also used more power running the same benchmark.
Re: (Score:3)
It was a good idea at the time, but Intel did not know AMD was busy and was going to release a 64-bit CPU within 18 months of the first decent Itaniums being shipped.
Given every RISC processor had 64-bit versions at the time, it's hard to imagine Intel didn't know 64-bit was inevitable. And I doubt Andy "only the paranoid survive" Grove didn't know AMD was feverishly working on 64-bit extensions.
The 2004 Itaniums had ZERO power savings. I put a power meter on multiple boxes and they used the same wattage, ±10 watts, whether the OS was idle or running a benchmark.
I don't think that was a design goal. No one cared about server power draw until around 2006 or so. That's when power suddenly became important and the race to ever higher clock rates screeched to a halt.
The goal of Itanium was always about cycle time and masking memory latency.
Re: (Score:2)
And I doubt Andy "only the paranoid survive" Grove didn't know AMD was feverishly working on 64-bit extensions.
Interesting question. I believe they thought they could successfully partition the "enterprise" and desktop markets and continue to summon ridiculous margins from the business unit while containing fast desktop machines to gaming and other cheapifying software forces. The AMD64 ISA offered the exact opposite vision and we all see how that turned out. I thought it especially entertaining to watch Intel have to negotiate AMD64 licensing.
I don't think that was a design goal.
I noticed significant power savings starting with the rx2800 series serve
Re: (Score:2)
The main thing I remember about the Itanic was that any lack of memory alignment killed performance to a much greater degree than it did on other platforms. Alignment faults caused many wasted cycles and were/are easy to cause. I have to admit to liking this ISA the least of any I've had to work on. Their promises even started off sounding like BS, because it all depended on having the magic compiler complete the basic design goals... sooooo where was it? We never got it!
Well. Having worked at HP at the time and listened in on many break-room gripe sessions about Intel, the common complaints I heard were about the Intel Value Engine (IVE). Intel demanded Itanium include it; HP strongly preferred transpiling and software emulation. The IVE let an Itanium natively run x86 code. The interrupt mechanism in particular caused hissy fits for the people designing the actual cores.
While mis-aligned memory loads could have been a problem (maybe that's what was causing all the interrupts), I'm
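A minimal sketch of the alignment issue mentioned above (illustrative only, not code from the thread): reading a multi-byte field that sits at an odd offset in a byte buffer.

#include <stdint.h>
#include <string.h>

/* Hypothetical example: a 64-bit field at byte offset 2 of a wire/file buffer.
 * Casting and dereferencing would be a misaligned 8-byte load; on Itanium that
 * trapped to a (slow) OS alignment handler, while most x86 parts just paid a
 * small penalty. Copying into an aligned local is the usual portable fix. */
uint64_t read_u64_at_offset2(const unsigned char *buf)
{
    /* uint64_t bad = *(const uint64_t *)(buf + 2);   misaligned access */
    uint64_t v;
    memcpy(&v, buf + 2, sizeof v);   /* byte-wise copy, no alignment assumed */
    return v;
}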
Re: (Score:2)
Having worked at HP at the time and listened into many break room gripe sessions
How long were you there? I'm ex-IBM, ex-SGI, but never HP. Did you catch the whole internal drama around the x86 port of HP-UX? I'd love to hear that story. Have any opinions on where HP-UX goes from here, if anywhere?
I'm pretty confident the hardware/BIOS guys could have figured that out.
I bet you are right. However, I think the politics and business side of things got so complicated and legally fraught with peril that the technical issues were secondary.
Re: (Score:2)
How long were you there? I'm ex-IBM, ex-SGI, but never HP. Did you catch the whole internal drama around the x86 port of HP-UX? I'd love to hear that story. Have any opinions on where HP-UX goes from here, if anywhere?
I left HP-UX development around 1998. 11.0 was in flight at the time. I got out long before we started working on x86 (not really "we", I was at another company by then). When I left, getting HP-UX running on native Itanium was the focus.
What's the future of HP-UX? Not much would be my guess. At most, open source the code and let people hack away at it. Linux has long since eclipsed HP-UX in performance, platforms, features, and application support. There's no reason to have a gratuitously different niche c
Re: (Score:2)
Re: (Score:2)
Given every RISC processor had 64-bit versions at the time, it's hard to imagine Intel didn't know 64-bit was inevitable. And I doubt Andy "only the paranoid survive" Grove didn't know AMD was feverishly working on 64-bit extensions.
The general vibe at the time was that x86 was awful and that extending it further wasn't viable. Pretty much everyone assumed something would replace it, and that was the big battleground. Intel might have known that AMD was looking into it, but I don't think they took the threat seriously.
The goal of Itanium was always about cycle time and masking memory latency. Tons of architectural decisions were made with the assumption that memory was slow and so you needed ways to hide it. I don't think we (I worked on HP-UX at the time) had any idea that multi-core would take off like it did. We implicitly assumed a single thread would be running and that the ISA/control unit would need to keep enough balls in the air. Turned out throwing transistors at many cores worked much better.
It's fascinating to look back and see how wrong people were about that. This is the era of the Cell Processor, which only really got used in the PlayStation 3. It's such a weird design - a traditional PowerPC core surroun
Re: (Score:2)
It's fascinating to look back and see how wrong people were about that. This is the era of the Cell Processor, which only really got used in the PlayStation 3. It's such a weird design - a traditional PowerPC core surrounded by a lot of very fast but very limited mini cores. People were starting to realize that multiple cores were the future, but they hadn't yet realized that the cores would be full cores. By the time Cell made it to production, people had already mostly realized it was a bad idea.
It's much
Re: (Score:2)
I think the original idea was that the Cell's mini cores would act as the GPU on the PS3, but once they started testing it, they realized it couldn't come close to the performance needed. They had to rework the design to include a traditional GPU, and that's a large part of why the PS3 was so much more expensive than the Xbox 360 (Blu-ray being the other reason).
It hit a weird spot somewhere in between modern CPUs and GPUs. In the end it wasn't good enough at either role. But if you had just the right type of workload, it
Re: (Score:2)
The general vibe at the time was that x86 was awful and that extending it further wasn't viable. Pretty much everyone assumed something would replace it,
Everyone but Intel and AMD.
It's fascinating to look back and see how wrong people were about that. This is the era of the Cell Processor, which only really got used in the PlayStation 3. It's such a weird design - a traditional PowerPC core surrounded by a lot of very fast but very limited mini cores.
And yet, now we're seeing mass market CPUs with performance cores, low power cores, AI cores, graphics cores, all sorts of specialized hardware you turn on or off based on the load.
I'm trying to remember the RISC-V architecture. Does it allow for multiple special-purpose cores in the architecture? Or is it silent on that, leaving it to the system designer?
Re: (Score:3, Interesting)
Arguably what AMD succeeded at after them was significantly more difficult, but they still made it work and they were even able to make it do so cheaply. Intel rested on their laurels, confident they could just keep fudging benchmarks and bribing vendors to maintain market dominance. Keep in mind, in their plan, home users would never get 64-bit technology. They planned on ceding ground there so they could squeeze datacenter customers harder. This is unacceptable on a number of levels and I'm glad it ended
Re: (Score:2)
Intel did the same thing when moving from the 16-bit 286 to the 32-bit 386.
Re: (Score:2)
Arguably what AMD succeeded at after them was significantly more difficult, but they still made it work and they were even able to make it do so cheaply. Intel rested on their laurels, confident they could just keep fudging benchmarks and bribing vendors to maintain market dominance. Keep in mind, in their plan, home users would never get 64-bit technology. They planned on ceding ground there so they could squeeze datacenter customers harder. This is unacceptable on a number of levels and I'm glad it ended with a faceplant.
I'm not sure it was more difficult, just a better idea.
x86 was always known to be a complicated architecture, but it was king so people figured it out. Intel figured they'd make an awesome new architecture from scratch for Itanium, and maybe it was awesome, but the problem is that everyone else had to learn it from scratch.
AMD just extended x86 and everybody went with that since it was easier.
Re: (Score:1)
I'm not sure it was more difficult, just a better idea.
Well, it was definitely a better idea, but keep in mind that up until the day after AMD pulled this off, Intel was still telling everyone that making the jump to 64-bit while maintaining binary compatibility with 32-bit architectures was [literally impossible] and they were using this to justify a lot of their egregious pricing.
Re:It seemed like a good idea at the time (Score:4, Informative)
What is sad is that there were far more capable architectures available at the time, like DEC Alpha, and Intel pressured their customers to "capture and kill" Alpha, thinking that would give them a clear path to Itanium supremacy...
Re: (Score:2)
That's what I remember too. Itanic killed Alpha (through marketing, not performance), and now it's crashed into its iceberg. RIP Alpha. A pox on you Itanic!
Re: (Score:2)
That's the part that's unforgivable. They steamrolled everything else flat hoping that if they were the only 64 bit game in town, they could sell it even if it wasn't as good.
Thankfully AMD came in and ate their lunch.
Intel didn't even make sure their own compiler could produce decent performing code on Itanic.
Re: It seemed like a good idea at the time (Score:2)
Alpha died because it wouldn't scale further without substantial rework.
It was cheaper to put the good parts of the design into a PC processor, which AMD did.
Re: (Score:2)
Alpha had a planned roadmap of three decades, and EV8 (developed but never taped out) would have included vector processing units, 2 cores, and 4.0 GHz for 2003 production.
In addition, they used the HyperTransport interconnect that continues to be used in multiple processors today from AMD, Apple and IBM [wikipedia.org]
Interestingly enough, China has "reverse engineered" DEC Alpha for the Sunway supercomputer chip [nextplatform.com]
You say funny things, but I do not think that you are trying to be funny
Re: (Score:2)
Alpha had a planned roadmap of three decades, and EV8 (developed but never taped out) would have included vector processing units, 2 cores, and 4.0 GHz for 2003 production
I can draw you a map to a place that doesn't exist, too. They were having trouble even getting Alpha into the GHz range.
Re: (Score:3)
Alpha had a planned roadmap of three decades, and EV8 (developed but never taped out) would have included vector processing units, 2 cores, and 4.0 GHz for 2003 production
I wouldn't put much weight into that. Intel's roadmap then was "We'll have a 10 GHz Pentium 4 in a couple years."
That point in time was right before everyone realized that heat was going to be a way bigger limiting factor in chip design than it had been in the past.
Re: (Score:2)
PA-RISC was also killed off. I'm fuzzy on the history of Itanium, but apparently HP decided to kill off their own CPU design and get in bed with Intel, a move that confuses me greatly. PA-RISC wasn't spectacular, but it was competitive, and apparently had some very good SIMD support at the time.
Re: (Score:2)
Technically, it was a good idea. Commercially, however, Intel was a victim of its own monopoly in the PC market and of not being sufficiently backwards compatible. Of course, it also encountered development delays and eventually ended up being slower than a competing RISC chip with similar transistor counts. It's a snag Intel has had for a long time - it likes the golden-goose revenue generation of the x86 family, but it also needs to innovate with something new, in ways that don't harm the golden goose.
Re: (Score:1)
The main thing they underestimated, was the extent to which customers value compatibility with existing firmly-established stuff. The entire computing world heard that Intel was working on a new 64-bit architecture and collectively went "Ok, so what's it compatible with?" And the answer was nothing, and that was pretty much the end of that.
Intel's plan for this was stupid and greedy. (Score:2)
All their plans are greedy, but this one was also particularly stupid and eventually the universe itself rejected it.
The End.
Re: (Score:1)
Eh, I don't disagree in principle, but I'd rather have Commodore support back than this.
Re: (Score:2)
Linux is pretty well supported on Commodores at this point. Sometimes the kernel versions lag behind a little bit, but there are versions of 6.x for Amiga working. Most of the focus tends to be on accelerated and modified machines, as well as more modern m68k variant chips, but there are plenty of people keeping it alive even on original hardware.
Re: (Score:1)
Wait, really? In mainline? I was talking about 6502s, but even the fact that Amiga support is still in there is a surprise to me. I assumed you'd have to resort to some external fork or patchset, like these Itanium users will soon have to.
Re: (Score:1)
Oh, you did mean with an external patchset or something like that, didn't you? Well, better than nothing. Drop a link here and maybe it'll get some more traffic. I'm sure there's still other Commodore refugees out there like me, yet to be rescued.
Re: (Score:2)
It's likely that there are considerably more Amiga users out there than IA-64 users.
The Amiga was widely available and affordable, and there are still people who have fond memories of the platform. Old Amigas, especially the high-end ones capable of running Linux, are highly sought after and command a premium price.
On the other hand, IA-64 was always expensive and niche.
Re: (Score:2)
I used NetBSD for testing my 68030 MMU changes in WinUAE. They still have mainline support for the platform.
Re: (Score:2)
I bet if there were half a dozen kernel developers whose primary focus was Itanium support (like, if Mark Zuckerberg had Itanium as one of his personal interests and personally bankrolled a small IT shop whose sole focus was maintaining Itanium ports) Linus would never have decided to drop the architecture. As long as an old architecture doesn't get in the way of supporting new stuff (e.g., the 386 and 486) they will keep stuff around as long as developers actively maintain it. Fact is nobody is using Itanium anymore and it's not worth wasting the scarce time of kernel developers who have the knowledge and resources to maintain the Itanium port.
They could spend all their time trying to get GCC to consistently compile decent code on Itaniums. Either that, or create a new language and compiler that makes it possible to write performant code on Itaniums.
Re: (Score:2)
It's Linux. If you want to add Itanium support back, you can create your own fork. Also, they are removing it from the next version of Linux, not removing it retroactively. For the general Linux base, not many people out there are using Itanium anymore. What users do exist are mostly corporate, and they will replace the servers in a few years if they have not already replaced them.
It is not driven by commercial industry but rather by what is practical, as Linux is used for a large number of processor families an
Re: I wish Linux got back to it's roots. (Score:3)
Re: (Score:2)
I'm not sorry to see Itanic go. I'm guessing there are approximately 0 home/hobbyist users of Itanic out there. The damned CPUs cost an order of magnitude more than x86_64 in the first place. The few still in service are likely stuffed in a closet somewhere in the corporate world running legacy applications. They don't need updates.
If I'm wrong, the code is available and they can always backport any security patches even after the last LTS kernel supporting Itanic is retired. Even in that unlikely case, gi
Re: (Score:3)
So you want to speak for other peoples' time and resources to keep your pet project viable?
That's not how open source works. If it's so important to you, YOU maintain it.
Netcraft confirms it! (Score:2)
ah, you know the rest....
still there in LTS (Score:2)
Here's the actual, more complete merge comment:
The ia64 architecture gets its well-earned retirement as planned,
now that there is one last (mostly) working release that will be
maintained as an LTS kernel.
So it isn't like kernel support is going completely away. It just won't be included in future releases.
Who was using it? (Score:2)
Outside of cluster use, who was running linux on Itanium boxes in production? Chances are you bought Itanium because you needed to run OpenVMS.
Re: Who was using it? (Score:3)
Or HPUX. I used to manage a farm of Superdomes loaded with Itanic CPUs.
There was a time they kicked Sun's ass. The I/O on Superdome was far superior to the Sun Fire x800 line at the time.
HPUX never caught on quite as hot as it should have. It had its quirks but man, once you had it humming, it just cruised.
Part of me (a very small part) misses it.
Re: (Score:2)
Re: (Score:2)
We had a bad nickname for our Superdomes. We had 100s of HPUX machines, 90% of which we never touched/repaired. But a few Stuperdomes had significant hardware issues more than a few times (random hardware died). I am pretty sure one of the on-calls touched each of ours at least twice, either to do an engineering change (based on a prior hardware defect/crash) or to get it back working again because some piece of hardware died and would not let the machine come back up.
Now, comparing them to the IBM hardwar
Re: (Score:2)
Re: (Score:2)
I don't believe I ever used one of the PA-RISC Superdomes. The Itanium ones were a new design/engineering and troublesome. I do know that the PA-RISC workstations and such I managed 5-8 years earlier rarely gave us much trouble. We found out one of them had been our DNS server for the entire site and had never crashed in 5+ years. The DNS team figured out the risk from the single piece of hw/os and went to a "highly" reliable dual hardware/software setup, and has proceeded to have multiple DNS outages in
Re: (Score:2)
OK. And a very small part of me misses programming in Regal on the Harris computer. But it's a *REALLY* small part.
Re: (Score:2)
I had a contract to do ipsec between windows and hpsux on itanic. I found that the examples in the ipsec documentation were backwards regarding which end was which. Tried them verbatim and nothing worked. Reversed them and they worked fine. Hilarious.
It was dead way before 2013. (Score:2)
It was dead way before 2013; HPE would just not admit it. By 2006 (outside of HPUX/HPE) no one bought them. The integer performance was not as good as x86/AMD, and they only really beat x86/AMD on certain floating-point operations. An x86/AMD system (motherboard + 2 CPUs) of the same speed for most applications was about 1/2 the price (about $1000+ less). And the Itaniums needed different power supplies and motherboard mounts, so more $$ there too. The Itanium CPUs cost more and the motherboards cost more and
Re: (Score:1)
Even in 2008, Itanium 2 held database benchmarks for performance... for those that didn't care about price/performance, just absolute numbers. I worked for a place that sold the systems. After that, Itanium 2 went into decline.
A failure of monopoly (Score:4, Insightful)
By pushing a wildly incompatible architecture where they owned the IP, they intended to control the desktop until the end of time. All it would have taken was buy-in from Microsoft and the deprecation of x86 for Windows, and they would have succeeded. No other IC company would have had the resources to compete in that situation.
Unfortunately for Intel, and fortunately for everyone else on the planet, AMD did a good job of compatibly extending the x86 instruction set. The flop of the Itanium and success of AMD kept us from a stagnant inefficient computing culture that would have strangled innovation and kept computing expensive. In that repressive scenario it's not clear that an alternative like ARM would have had a chance, much less the current emergence of RISC-V.
The same situation is happening right now with Google search. Their 90% market share and excessive profits are hindering web innovation. Online consumers are paying a toll to Google and receiving biased results driven by paid advertising. SEO is a polite way of saying extortion/bribery.
At this time there is no other credible competition, despite the hype about AI driven search. Google can keep buying search dominance and still afford to make some AI enabled solution, and even if it isn't the best they can stay on top.
Without intervention by US or EU regulators the situation will continue to deteriorate. No matter how much they claim to be doing a great job, no single company can realistically create an efficient, fair, and flexible future. Monopoly always ends up in failure, and that is the way we are headed.
Intervention required? (Score:2)
Unfortunately for Intel, and fortunately for everyone else on the planet, AMD did a good job of compatibly extending the x86 instruction set.
...
The same situation is happening right now with Google search.
...
Without intervention by US or EU regulators the situation will continue to deteriorate.
If this is the same situation, and the previous situation (with Intel) didn't need intervention, why would the situation now (with Google) require intervention?
Arguably, the situation now is simpler, as it's much easier to launch The Next Big Thing as a software startup than to get a new IC contender off the ground.
Itanic sank long ago (Score:2)
Intel got so much wrong... (Score:2)
I was an Intel reseller back in the 1990s/early aughts. They'd have these get-togethers for us where they'd announce new products, lay out roadmaps, etc. And so much of what they touted never came to pass.
I remember being told for a couple of years how RAMbus was going to provide significant speed increases over SDRAM; that it would initially be "a little" more expensive but would rapidly reach cost parity and ultimately be cheaper. The Pentium 4 "was designed for RAMbus" and later P4 chipsets wouldn't e