Linus: Praying for Hammer to Win 487

An anonymous reader writes "The boys at Intel can't be happy with the latest opposition to the IA-64 instruction set. According to this Inquirer scoop, Linus himself has weighed in, and it appears he's putting his eggs in the x86-64 basket. In the original usenet post, he goes so far as to say that 'We're ... praying that AMD's x86-64 succeeds in the market, forcing Intel to make Yamhill their standard platform.'"
  • by sgtsanity ( 568914 ) on Monday July 29, 2002 @03:34PM (#3973420)
    Now if AMD can get the endorsement of "The Carmack", they will really be happy.
  • Momentum (Score:2, Funny)

    by crumbz ( 41803 )
    To me this is an impressive endorsement. Given the overall support that AMD has given Linux over the years, it is great to see a little bit of that given back.

    Cool. x86 through 2086!
    • Re:Momentum (Score:5, Insightful)

      by Anonymous Canard ( 594978 ) on Monday July 29, 2002 @03:50PM (#3973537)
      To me this is an impressive endorsement. Given the overall support that AMD has given Linux over the years, it is great to see a little bit of that given back.

      What endorsement is that? AMD has been utterly piggish with respect to open source. GCC still produces awful optimizations when targeting any AMD chip, and in fact has gotten worse between 2.9x and 3.x. Intel started out contributing pgcc when Linux was still in its infancy, and code output for Pentiums has gotten successively better. When bad optimization can halve your effective computation rate, that I think speaks volumes.

      That said, I have to agree with Linus on this one. Itanium would be a disaster for free compilers, as heavily encumbered as it is by compiler technology patents. And when it comes down to it, I'm not all that certain I want my general purpose language compiler generating what is effectively microcode anyway.

      IMHO of course.

      • Well said...this is NOT a push for AMD but a push for interoperability of 64 bit devices.
        The worst of possible outcomes would be Palladium on Itanium. New DRM hardware that requires all new code....a farking nightmare.
  • by Kenja ( 541830 )
    is the same as the problem with OS/2. People don't want to re-write their applications for native support. I expect very few apps to be coded for Hammer because its 32-bit compatibility is so good. An application developer can write for old 32-bit x86 and target Hammer and x86 at the same time, just like developers could once write applications for Windows 3.1 and have them run on Windows and OS/2. Not that the CPU won't do well, but I don't expect it to ever get the kind of support it wants.
    • Yeah, that whole 16-bit to 32-bit transition never happened either. People just didn't want to update their code.
    • But there are relatively few changes which need to be made to make a program run in 64-bit, and those changes don't cause problems with running it 32-bit. So people have the choice of their code running more slowly, but working everywhere, or running more quickly on Hammer, and running everywhere. Unlike with OS/2, you're not making two different versions for the different platforms, at the source level at least.

      Of course, you're going to have to have Hammer-specific binaries to use 64-bit. But generating them is just a compiler issue, and software comes out for different processors all the time, when it doesn't require code changes.
    • But it's an even bigger problem with IA-64. At least Hammer will do a good job of running IA-32 code, which IA-64 doesn't. All Hammer needs is a 64-bit OS that can load both X86-64 and IA-32 code, and it's off and running. For that matter, all it really needs is to be available, because it'll simply look like a faster Athlon with no other changes.

      There's a horse-race happening, because IA-64 is here and X86-64 isn't. But IA-64 is currently stuck squarely in the high-end server market where HP and Sun live. The real horse race is between price drops on IA-64 and the announcement, availability, and uptake of X86-64. X86-64 is a natural for the workstation market, but it has to get there and move into an unfamiliar setting dominated by boxmakers more familiar with Intel than AMD before IA-64 prices drop enough to make it viable there.
      • by ivan256 ( 17499 ) on Tuesday July 30, 2002 @12:01AM (#3976201)
        More has to happen than an IA-64 price drop. (Or more has to happen to cause an IA-64 price drop, depending on how you look at it.) IA-64 is a beast. It's a HUGE chip that drinks power. The system I last used drew more power than any of my kitchen appliances except the oven. That has to be fixed. The CPU with the fixin's has to cost less than just the power module probably costs right now. Then there's Intel not letting anyone actually build systems with Itanium in them. They white-box the systems and let vendors rebrand them. That's not going to go over well forever. You have to wonder what Intel is hiding that they won't let OEMs build boards and systems, though... What dirty little secrets does Itanium hide?

        The second problem is that it's proprietary. Yes, proprietary, just like POWER4 and PA-RISC. Intel bills it as open, but if you want open you should go SPARC, MIPS, Alpha (dead soon, unfortunately), or x86. Those are the architectures that have competing vendors manufacturing the cores. People write all kinds of software for x86. Not just desktop applications. Itanium can't get that kind of support if only Intel makes it. You'll see X86-64 in embedded devices right out of the gate. There are manufacturers DROOLING over a low-power 64-bit chip to stick in their storage boxes and database servers. You won't see Itanium in there.

        You have to wonder whether there are two different companies over at Intel. You've got the Pentium 4, which is basically driven by the marketing department, and is a huge marketing success, but the architecture is nothing to write home about and generally lame in the innovation department. Then you have the Itanium, which is a big grown-up microprocessor that was driven by the engineers, and is going to turn out to be a marketing failure. Oh well.
    • If your code is already 64-bit clean, you can probably just recompile it and it will run on Hammer's 64-bit mode, probably faster than it was running in 32-bit mode.

      So here's an upgrade path: first you buy your Hammer box and run your 32-bit Linux on it, just treating it as a faster Athlon. Then you upgrade your Linux to a 64-bit kernel, getting a speedup, but you can still run your 32-bit user processes. For the apps you have source for, you can recompile and run faster in 64-bit mode. Eventually people will start shipping Hammer binaries.

      One interesting question is what the speed advantage (or not) of 64-bit mode will be: the increased cache footprint of 64-bit pointers, weighed against 64-bit math, extra registers, and PC-relative addressing. Hard to call.
  • by krog ( 25663 ) on Monday July 29, 2002 @03:36PM (#3973428) Homepage
    it's an internet tabloid creating a mountain ("Linus himself is praying that AMD wins!") from a molehill (half a sentence in an unrelated USENET post).

    crap story.
    • by tcc ( 140386 ) on Monday July 29, 2002 @07:23PM (#3975006) Homepage Journal
      Well I guess it depends on which point of view you are looking at it.

      Carmack posts something here, you get instant linkage and stories out on most gaming sites pointing to that post.

      Gates says something, everybody jumps on his speech and tries to analyse everything up to the point of what he had for breakfast, and his intentions for the next 20 years.

      Jobs farts, Mac users are all excited, etc etc..

      The idea here is that some people follow this stuff religiously; while it might be pointless for you, some others really dig it. Tabloids are way more crappy and unreliable than this story, and the worst part? They sell like hotcakes.

      To give you an example, I found Slashdot via a link in an Amiga story. While I am not a Linux freak or an active "your rights online" militant, I do have my own "tabloid" stories that I like to follow (like Amiga stuff, for example).

      I had the same reaction when I saw the article ("my, talk about far-fetched"), but when you go and read the post, it can make you think. If you don't care about Linux and/or processors/OSes, well, you skip the story and move to the next; if you do like the hardware/OS scene, it makes a nice read. To get back to my idea, it tells you that if Linus wants x86-64 to win, maybe they are designing Transmeta's next gen on that instruction set? Maybe this, maybe that. Nevertheless, for people who like this kind of story, it's a bit above the tabloid I'd say, because it's not a quote out of context and it's authentic.

      my 0.02c.

  • Not surprising... (Score:4, Insightful)

    by tjansen ( 2845 ) on Monday July 29, 2002 @03:37PM (#3973433) Homepage
    Not surprising... he works for Transmeta, and they licensed x86-64... So what else should he say?
    Besides that, who cares about the CPU's instruction set? Nobody, except compiler designers and very few assembler programmers. And they already know x86, and the tools exist. So the only argument for Itanium can be performance/price. And ATM it looks like Opterons will be cheaper.
    • Re:Not surprising... (Score:5, Interesting)

      by Anonymous Coward on Monday July 29, 2002 @03:58PM (#3973606)
      Beside that, who cares for the CPU's instruction set? Nobody, except compiler designers and very few assembler programmers.

      Umm, you are correct, but you have to keep in mind that Linus Torvalds is in that set of very few people. He may not be the one writing the compiler himself, but he is extremely close to the compiler-- he works on the operating system kernel, the one position that has to be most sensitive to obscure conditions of the microprocessor in order to optimise. Which is why he is weighing in on this subject.

      Anyway, most of us have heard of Torvalds' fondness for hand-written assembly language. (I.e., the huge portions of early Linux that were written in x86 assembly, and C code written in such a way that it may as well be assembly.. I had heard things indicating he had mostly grown out of that lately, though, now that non-x86 platforms are closer to his chunk of the kernel tree.. is that the case?) And I think we are all well aware of Linux's famed nonportability to non-GCC compilers due to dependence on obscure GCC bugs and nonstandard features. So, yeah. Linus may not be The Compiler Guy, but he will definitely have to be talking to the Compiler Guys on a regular basis, and he is in the group of people (the Linux kernel development team) most likely to be the first to run into trouble with any new bugs that crop up in gcc. So he definitely has good reason to have an opinion on this subject, especially given that the subject increases compiler complexity so much that it is somewhat likely to increase the number of small compiler bugs that make no difference to you or me but huge amounts of difference to those persons who know what "spinlocks" are..

      - super ugly ultraman
    • by emil ( 695 ) on Monday July 29, 2002 @04:27PM (#3973813)

      If AMD is successful in forcing Intel to adopt x86-64, great harm will be inflicted upon:

      • HP-UX
      • VMS
      • Tru64 (what is left of it that is rolling into HP-UX)
      • To a lesser extent, AIX-5L

      While recent interviews with HP execs (on The Register) indicate that HP is taking some steps to "roll with the x86-64 punch," I sincerely doubt if HP can be convinced to port VMS to Opteron should it become necessary.

      What is even more troubling for the Itanium is the fact that HP's compilers are faster than Intel's, but the HP compilers have not been released outside of HP-UX. The standoffish attitude of other ISVs (Dell, IBM, etc.) is not hard to understand given these circumstances.

      You will also have noticed Microsoft's (now infamous) "leaked" memo on Windows-64 running on Opteron. Such a leak I believe has been carefully crafted to throw FUD upon all things Itanium. Furthermore, it is in Microsoft's best interests for Opteron to prevail, as such a victory will destroy not only DEC/Compaq's high end, but also HP (as much as HP-UX deserves to die, it should not fall to Microsoft).

      If Intel and HP truly want Itanium to flourish, Intel must reduce the price immediately (to at least a SPEC-to-SPEC match with Athlon/Opteron, and possibly lower), and HP must release fast compilers under an open license.

      If the Itanium market remains fragmented, AMD wins, and Microsoft's interests are advanced.

      • *sigh*

        Itanium is not competing with Hammer or any other chip from AMD. It would make no sense for Intel to reduce the price of Itanium to less than an unrelated product.
      • HP don't actually have much of a problem, because Itanium is basically HPPA 3.0 with a bunch of x86 emulation stuff tacked on. HP have, in effect, gotten Intel to underwrite the development of their next-gen RISC architecture and hype it as the next big thing.

        In a scenario where Itanic is a failure (ie ends up in a niche as a midrange only CPU), HP-UX and VMS are in much the same position they were before - running on an expensive niche CPU.

        AIX still has POWER 4/5, so IBM don't care.

        The people who are screwed are the people porting their OS to what could become an HP-only chip.
    • oh, and Microsoft will support 4 64-bit archs (-;

      AMD -> x86-64
      Intel -> IA64

      to quote http://www []

      so what would it be? Surely not Alpha, as that's end-of-life, and not PA-RISC

      that leaves MIPS, PowerPC and ?


      john jones

    • by Courageous ( 228506 ) on Monday July 29, 2002 @04:41PM (#3973933)
      Beside that, who cares for the CPU's instruction set?

      A bit ironic, that remark. That's basically what the AMD guys decided when they went for X86-64: that the instruction set really didn't matter, and that it was implementation and good ole' Moore's Law that really counted. Meanwhile, when the instruction set doesn't matter, you've got Intel spending a cool $10 bill on theirs. So, I have to say, I find your remark quite amusing.

  • Just wishing... (Score:3, Insightful)

    by Chexum ( 1498 ) on Monday July 29, 2002 @03:37PM (#3973434) Homepage
    I see only Linus daydreaming about keeping x86 (the well-known and optimized standard bytecode at Transmeta, remember?), so that the 64-bit extensions get more widespread, and thus the "rest of us" can afford to get 64 bits on this very same architecture we grew up with... On the other hand, it's a good goal :)
    • Re:Just wishing... (Score:3, Insightful)

      by gmack ( 197796 )
      I think he's dreaming about 64-bit for the masses.
      As would anyone else who has had to get 32-bit x86 to handle more than 4 gigs of RAM, or tried to figure out how to juggle the few registers provided as efficiently as possible.

      I for one am also wishing for cheap 64 bit.
  • by Marx_Mrvelous ( 532372 ) on Monday July 29, 2002 @03:38PM (#3973438) Homepage
    Consider that Linus is almost fanatical about needing to "break" backwards compatibility in the Linux kernel in order to develop it as fast as possible.

    Now he's supporting a CPU scheme that, well, doesn't break anything and may even sacrifice performance for that compatibility.
    • by gmack ( 197796 ) <> on Monday July 29, 2002 @03:52PM (#3973556) Homepage Journal
      That is _only_ true for module interfaces. In the past he's been very picky about changes that break userspace.
    • by Dominic_Mazzoni ( 125164 ) on Monday July 29, 2002 @03:53PM (#3973558) Homepage
      Now he's supporting a CPU scheme that, well, doesn't break anything and may even sacrifice performance for that compatibility.

      Except that it's quite likely that an Opteron will be faster than an Itanium for most real-world tasks. At the very least it looks like it will be comparable in speed, and cheaper. If the Itanium really was screamingly fast, that would be different.
  • This isn't really going to add to my Slashdot karma, but that article said nothing new over what Linus mentions in his post. Complete waste of a slashdotting, IMO.
  • by Anonymous Coward on Monday July 29, 2002 @03:40PM (#3973454)
    You might want to change the title of this story to "Hoping for Hammer to Win." Who's praying? Ever heard of the separation of church and state? Jesus.

    Atheists are the last group of people who are still acceptable to oppress.
  • by Zooks! ( 56613 ) on Monday July 29, 2002 @03:42PM (#3973475)
    It's amazing that somebody could make such a relatively long article from what amounts to one sentence in Linus's email!

    Reading Linus's email, it seems that he wasn't endorsing one way or the other. He was just hoping x86-64 becomes dominant, since it would stave off some issues related to how pages are handled.

    Apparently, if things go the Itanium route then some page-related things get more complicated, but that's it.

    Nothing to see here. Move along.
  • by markatwork ( 132554 ) on Monday July 29, 2002 @03:43PM (#3973482)
    Now I lay me down to sleep.... I pray Intel the IA-64 Instruction set to keep. But if Intel folds before I wake, I pray AMD picks up their stake (of the market).

    (OK so the last part sucks, but still ....)
    • I prefer this prayer.
      THIS IS MY KERNEL. There are many like it but this one is mine. My kernel is my best friend. It is my life. I must master it as I master my life.

      My kernel, without me is useless. Without my kernel, I am useless. I must compile my kernel true. I must debug faster than the proprietary who is trying to FUD me. I must outperform him before he outperforms me. I will...

      My kernel and myself know that what counts in this war is not the bogomips we reach, the cpus we scale, nor the filesize we achieve. We know it is the core dumps that count.

      We won't dump.

      My kernel is human, even as I, because it is my life. Thus I will learn it as a brother. I will learn its weakness, its strength, its modules, its vm, its #defines and its compile time. I will ever guard it against the ravages of binary modules and IP claims. I will keep my kernel efficient and ready, even as I am efficient and ready. We will become part of each other. We will...

      Before God I swear this creed. My kernel and myself are the defenders of my PC. We are the masters of the proprietary. We are the saviours of my life.

      So be it, until there is no proprietary, but Free!

      Ok, ok, I admit it needs tweaking.
  • no AMD vs. Intel (Score:5, Informative)

    by reverse flow reactor ( 316530 ) on Monday July 29, 2002 @03:44PM (#3973492)

    Maybe I misinterpreted the original post, but I thought that this had more to do with 64-bit vs. 32-bit (and the limitations of a 32-bit platform) than with AMD vs. Intel.

    The kernel compiles on so many different architectures, with most of them being 64-bit (PPC, SPARC, MIPS...). However, i386 is the dominant architecture by sheer numbers. To maintain cross-architecture compatibility, the code has to support the lowest-quality architecture (i386). By pushing towards a 64-bit architecture, the limitations of 32-bit can be left behind (oh yeah, except for the nasty issue of backwards compatibility).

    Unless I just misinterpreted the post.

    • by willy_me ( 212994 ) on Monday July 29, 2002 @04:41PM (#3973930)
      Even if desktop PCs migrate to 64-bit in the next couple of years, you still have all the other embedded devices out there running on 32-bit CPUs. There is no need for these devices to use a 64-bit CPU - for these applications 8 megs of memory is plenty; 4 gigs is just crazy!! This is why 8-bit CPUs (and even some 4-bit) are still in production today, and in much greater quantity than the 32-bit CPUs found in desktop computers.

      If Linux is to be used in such devices, it'll have to support 32-bit architectures.

      PS: PPC chips are 32-bit. IBM POWER chips are 64-bit, but they are actually different from PPC chips. Code written for one doesn't run on the other - something the Mac rumor mongers simply don't understand with their "Apple is going to use an IBM POWER CPU" BS.
      • Even if desktop PC's migrate to 64 bit in the next couple of years, you still have all the other embedded devices out there running on 32bit CPUs. There is no need for these devices to use a 64bit CPU - for these applications 8megs of memory is plenty, 4gigs is just crazy!!

        32-bit CPUs may be here for a relatively long time after 64-bit gets absorbed into the desktop, but forever? Even though a given embedded application may not *need* a 64-bit CPU, economies of production and fabrication suggest that it may be *cheaper* to use a 64-bit CPU, as chip makers are likely to make more of them and fewer 32-bit CPUs.

        It's like B&W teevees -- I don't need a color TV in my kitchen, a B&W one would do, but I'll be damned if I can find one. It seems that they're all color.
          Even though a given embedded application may not *need* a 64-bit CPU, economies of production and fabrication suggest that it may be *cheaper* to use a 64-bit CPU, as chip makers are likely to make more of them and fewer 32-bit CPUs.

          The economies-of-scale argument actually works against you. You're assuming that there will be more CPUs for PCs than embedded devices. You're wrong; the embedded market is much larger than the PC market. For example, a person might own one PC. Great, but they also own a printer, digital camera, television, VCR, automobile... the list goes on. All these devices use embedded CPUs and don't require access to over 4 gigs of memory. Since it costs more to make a 64-bit CPU, these devices will continue to use 32-bit CPUs. In this market, a price difference of a couple of bucks is enough to justify a custom CPU. And will a TV perform better with a faster CPU? I think not.

      • IBM Power chips are 64bits but they are actually different from PPC chips. Code written for one doesn't run on the other - something the Mac rumor mongers simply don't understand with their "Apple is going to use a IBM Power CPU" bs.

        Read IBM's own tech specs on the POWER4: it does the POWER ISA, PowerAS, and PowerPC. They are not mutually exclusive. PowerPC added a bunch of single-precision FP, and dropped (or made implementation-dependent) a bunch of DP and other stuff they didn't think a Mac needed. I think PowerAS has some stuff for using *huge* address spaces (useful for a capability-based system), but I don't know that much about PowerAS.

        I don't think any affordable Mac is going to use the POWER4, but Apple could do it for a high-end server, something like the Xserve, but maybe 5 times the cost (since the POWER4 CPU is thought to be about twice the cost of the existing Xserve!).

        I also have my doubts about IBM putting AltiVec into the POWER4 (they did license it from Moto, though), and some real doubts about whether Apple would build a high-end system with an AltiVec-less CPU.

  • How ironic...the architecture of Dorian Gray, with endless bags on the side added over the past two decades (a long time in computer years)--only made to run reasonably by internally interpreting it on the fly into something decent and executing the result...and technically knowledgeable people are praying that the latest flogging of the dead horse is successful? I know, I'm guilty of it, too, because I want Intel to lose, but how strange that Intel's doing something right, namely starting over, is so universally deprecated (vide the "Itanic" nickname common on websites like The Register).
    • Because they aren't doing it right. I used to like their idea too, until I started to see the design.

      They are planning for the high end only, and demanding that the compiler handle things that are best known at runtime. And they aren't even doing that consistently. The result is a design that manages to be outrun by the Pentium IV and makes even the Athlon seem cool-running.

      It reminds me of how Microsoft handled microkernel design, where they managed to combine the disadvantages of a microkernel with the disadvantages of a monolithic kernel, because they tossed anything speed-critical (like video) into the main kernel (bypassing the microkernel interface).

    • Starting over is only "right" if you replace the status quo with something better. Instead, Intel replaced it with something *worse*. If Itanium performed, geeks would be all over it. It doesn't.
    • Actually, a trawl through comp.arch suggests that various luminaries are reasonably comfortable with Opteron - it's less an "x86 with bits bolted on" and more a "what would I do if I wanted a nice 64 bit CISC chip that happened to be compatible with ia32 instructions"; a number of older, cruftier bits of the ia32 have been deprecated.
  • by Wizri ( 518731 ) on Monday July 29, 2002 @03:47PM (#3973506)
    Hey Linus,

    What should I drink?

    Thanks a lot,

  • Clarification (Score:5, Insightful)

    by KidSock ( 150684 ) on Monday July 29, 2002 @03:49PM (#3973522)
    First, this was not a USENET post. It's a message from the Linux kernel mailing list that Google is pumping into their database. Second, Linus is not saying he thinks Hammer is a better architecture. What he said in this message was that the current Linux page table implementation is not ideal for use with IA64, and therefore, for the sake of Linux servers everywhere, it would be better for Hammer to prevail in the near-to-medium-term future. I don't know his real position, but I would be very surprised if Linus thought Hammer was a "better" architecture. X86 is an awkward instruction set that has been perpetuated by the software designed to run on it. The cores of chips like the Pentium are really RISC chips with hardware wrappers to implement the X86 instructions. So it's just a waste of die space. IA64 is purer and a much better long-term choice. Don't over-analyze a simple e-mail message from someone on lkml. These are not markedroid-approved public service announcements.
    • Re:Clarification (Score:4, Interesting)

      by roca ( 43122 ) on Monday July 29, 2002 @04:31PM (#3973854) Homepage
      Have you compared the die size of Itanium vs a Xeon or Hammer? Itanium is much larger --- and slower --- and more expensive. Who's wasting die space?

      But hey, at least it's pure!
      • Re:Clarification (Score:3, Insightful)

        by Ralph Wiggam ( 22354 )
        The parent poster went out of his way to say that IA64 was his choice in the "long term" and didn't mention Itanium once. He was talking about instruction set design, and you rebut with specific CPU implementation. Yes, Itanium is a giant monster of a CPU that will probably fail on most levels. But don't judge an instruction set based on the first implementation out of the gate. They'll get a lot better at it in the near future. The Pentium Pro was a big disaster (not a new instruction set, but new core). The engineers learned from that experience and came back with some damn fine chips.

    • Re:Clarification (Score:5, Informative)

      by Waffle Iron ( 339739 ) on Monday July 29, 2002 @04:33PM (#3973880)
      The core of these chips like Pentiums are really RISC chips with hardware wrappers to implement the X86 instructions. So it's just a waste of die space. IA64 is purer and a much better long-term choice.

      Except that two CPU generations from now, Intel will have had to change the underlying architecture of the IA-64 chips to get performance improvements, but they'll have to leave the instruction set compatible. So they'll have a hardware wrapper around the IA-64 instruction set. And this wrapper is going to have to try to second-guess the output of those rocket-science IA-64 compilers and rewrite the results on the fly.

      Why not just leave well enough alone and let the CPU rewrite code from today's simple, well understood compilers? The current x86 instruction set works like a bytecode VM. There's nothing wrong with that, especially since the IA-64 CPUs and compilers haven't exactly been blowing away the x86 chips in the performance area.

  • by awptic ( 211411 ) <<moc.xelpmoc> <ta> <etinifni>> on Monday July 29, 2002 @03:49PM (#3973524)

    For anyone who has an hour and a half to spare... AMD (along with a few people from SuSE) made a great presentation on the X86-64 technology at the Linux kernel summit in Ottawa a little while back; the MP3 and OGG files are available at the sourceforge kernel foundry [].
  • Linus seems to be more concerned with the wide-ranging functionality of the specific hardware than with the "brand" of it. Making Linux work with x86-64 looks to be easier than making it work "properly" (eg with fully 64-bit page sizes, addresses, etc) with IA64. Then again, IA64 is so broken and slow, it really doesn't matter much in the grand scheme of things if they can make a little go a long way with the Hammer. The small deficiencies the counterpoint poster to Linus refers to don't seem to stand in the way of making things work..

    Regardless of who's winning the CPU war, it's nice to see that Linux is running on all the competitors.
    • That is the problem.. there *are* no 64-bit page sizes.. if there were, Linux would support it a *lot* more efficiently. What you really have is a 32-bit addressing range and the ability to swap pages into the 32-bit addressing range as needed.

      What we really have is EMS all over again.
  • by Jeppe Salvesen ( 101622 ) on Monday July 29, 2002 @03:49PM (#3973527)
    I thought we supported this stuff for the other 64 bit processors? Aren't we fully 64-bit yet?
  • We're not moving to a 64-bit index for the next few years. We're a lot more likely to make PAGE_SIZE bigger, and generally praying that AMD's x86-64 succeeds in the market, forcing Intel to make Yamhill their standard platform. At which point we _could_ make things truly 64 bits (the size pressure on "struct page" is largely due to HIGHMEM, and gcc does fine on 64-bit platforms).

    It sounds to me like he's praying for standardization of the 64 bit architecture, not the success of the AMD Hammer.
    • by Junta ( 36770 ) on Monday July 29, 2002 @04:07PM (#3973675)
      Yes, he's praying for standardization, on AMD's standard, which is directly tied to Hammer's success. Itanium was to be Intel's one and only 64-bit future, and only when faced with AMD's 32-bit backward-compatible architecture did they design a fallback, Yamhill, which would be compatible with Hammer and legacy apps. The headline is *not* misleading, for once: he wants AMD's Hammer standard, not IA64.

      It seems odd, though. Putting aside market situation and prices to look at the pure technology aspect of it, IA64 is a better architecture; it isn't burdened with backwards compatibility. Especially with Linux (which already works with IA-64, as do most apps), there is little reason to hold on to the dated IA32 architecture, which inherits stuff from the early '80s. I could see why MS would be on the x86-64 bandwagon (if users' upgrade paths force them to change architectures, they may be just as likely to go PPC as IA64), but not Linux...

      It made sense to keep backwards compatibility when moving from 16-bit to 32-bit; assembler was widely used back then out of necessity, and thus porting applications was non-trivial. Now, in an age where most everything is written in high-level languages, this is the perfect opportunity to start with a clean slate. Companies can easily recompile and do additional testing, and earn back the money it cost to do so in short order, if their application is important to the market... Of course, all of this is from a technological standpoint...

      The fact of the market is that Yamhill/x86-64 is the future of x86. Itanium was a nice dream and all, but when you look at the two platforms and the variety of software they support, the choice is clear. Not everything will be ported to IA64 and knowing that it is hard to justify the jump...
      • Actually, I think the reasoning probably came from the fact that Hammer is entering into the mainstream market (as an Athlon Pro, or whatever) rather than into the server-only market, like Itanium. Hammer really is going to be the first mainstream 64-bit chip. If Hammer really catches hold, then x86-64 specific OSs may come out (Linux will happen quickly, so quite a few servers will probably be running x86-64). At that point, they will not run on a P4, which still has a future ahead of it.

        Yamhill is a P4 derivative, not an Itanium derivative (of course) - if x86-64 becomes market-important, then Intel will release Yamhill into the mainstream, and then the mainstream will be mostly x86-64, rather than IA32.

        He's talking about market segments, not architectural benefits. He never said IA64 sucked.
      • > look at the pure technology aspect of it, IA64 is
        > a better architecture

        No. There are better, cleaner architectures than x86 --- MIPS, Alpha, PPC. But IA64 is not one of them. Static scheduling simply doesn't give you the performance, not with today's compilers, not with tomorrow's, probably not ever. Some things really are done better in hardware.
          True, but if there *had* to be a choice between x86 and ia64, I would think ia64 would be better.... And your point is exactly why I don't get Intel's strategy. They sacrifice their head start and now get evaluated 'fairly' against competing architectures like PPC, MIPS and Sparc. They control Alpha, and PA-RISC is being flushed. MIPS is pretty much earmarked for embedded apps by the market now (Irix systems aren't doing well, are they?), so the 'major' players in the workstation/server arena are PPC, Sparc, and Intel architecture... All other things equal, PPC looks really tempting in particular....

          Of course, MS could make all the difference, unfortunately....
  • by MORTAR_COMBAT! ( 589963 ) on Monday July 29, 2002 @03:50PM (#3973534)
    it's funny how people ripped and ripped and ripped on Intel all through the 90s about keeping all their backward compatibility from 286 on through the P4. how people said they should cut the dead weight, etc.

    well, now AMD is creating the kruftiest, heaviest, nastiest instruction set of backwards-compatible crud in the history of processor-dom. Intel comes out with a new, no-legacy 64-bit instruction set, and all of a sudden it is, "god, we hope AMD wins so all our old crap still works".

    well anyway, here's at least one programmer who is looking forward to getting his mitts on a 64-bit chip which doesn't have layer upon layer of backwards compatibility, wrapped in an overpowered muscle-car of silicon. you'd think we would have learned our lesson with the Alpha, a much, much better chip than the x86 but no one adopted it. people scream and bitch and moan about supporting all the ancient krufty x86 bloat, but when it comes time, they stick with what is comfortable.

    more than likely, Intel's 64-bit offering will follow the road of Alpha into technical superiority and market disaster. and we'll be stuck still supporting 286 instructions. way to go.
    • OTOH, it makes some sense to keep things the way they are.

      Consider that the internal cores of a brand-new RISC chip and an x86-64 chip are more or less the same. x86-64 instructions come in, are translated to internal RISC code, and are then executed. The main differences are an extra translator and the register renamer. But any architecture that lasts long enough will need such trappings, as it starts being used for things that nobody would have thought of when the designer dreamed the chip up.

      Remember, too, that the 286-era instructions that don't map easily onto the RISC core haven't been particularly efficient for a long time anyway.

      I used to think exactly what you are thinking now: I want a MIPS or Alpha inside, not Intel. But given that 99% of programming is not done in assembly, and that the cost of adding a hardware instruction-set translator is minimal compared to the difficulty and risk of switching instruction sets, the instruction set of a processor ceases to matter.
    • Compare the size of the Itanium 2 die with the dies for AMD's Hammer chips. You will soon see who has the real muscle chip. (Hint: Itanium 2 will be FOUR TIMES larger than AMD's Clawhammer --- and it will still be slower.)

      Getting rid of cruft is a good move if it lets you get higher performance. But IA64 destroyed that potential performance gain with several idiotic design decisions. That shiny new no-legacy instruction set may give you a warm feeling but that's all you get.

      Now, Alpha was a nice architecture. If Intel had invested in Alpha the way they've invested in IA64, they would have left every other CPU in the dust. Too bad.
  • by 3seas ( 184403 ) on Monday July 29, 2002 @04:03PM (#3973651) Homepage Journal
    Don't you see it coming... all the way down to the hardware: on one side the DRM and such products, and on the other the open systems
  • by Second_Derivative ( 257815 ) on Monday July 29, 2002 @04:06PM (#3973672)
    I'm not some prodigal kernel developer, but I do think the AMD architecture looks like a piece of shit. You're really telling me that we want an architecture that operates in 16-, 32-, AND 64-bit modes, that has tons of cruft and kinks from the '80s still in it, and a paltry handful of registers that are all overlaid... AH/AL -> AX -> EAX -> 64-bit RAX? Why? What the hell would that be good for? It just bloats the die by another order of magnitude, I'm sure.

    Intel's got a sound solution and they at least have the balls to finally give the cruddy old x86 architecture the heave-ho; ok they can't do it now but IA64's architecture does not require 8086 or IA32 to bootstrap it so both can be thrown out sooner or later. Regardless of what the actual metal might be, the actual platform is beautifully elegant next to x86 and will ultimately be a real asset in the future as 64 bit architectures become the norm, much more so than some short term gain that might be had by virtue of a superior implementation from AMD.

    Maybe I'm missing something here (OK I'm not on the design teams for both processors so I certainly AM missing something here) but from this standpoint, it looks like this would be the one time when I want to cheer for Intel as opposed to AMD. Pity they had to botch the development cycle like they did. *sigh*
    • by barawn ( 25691 ) on Monday July 29, 2002 @04:21PM (#3973770) Homepage
      You're right - from a theoretical standpoint. And, if all things were equal, IA64 should utterly rout x86-64.

      However, things aren't that equal. First off, x86 has had a lot of work thrown into it, and the current processors are quite good at implementing x86: I doubt there's a huge architectural penalty anymore - you can build virtually identical PPC and x86 computers and compare them, and even though PPC is a much better architecture, it's not going to blow x86 out of the water. Yes, it's idiotic to have, for instance, a stack-based floating point implementation, but the P3 and Athlon both make FXCH free, so it's not that bad anyway, and the P4's SSE2 implementation isn't bad, so using SSE2 instead of x87 is a decent compromise.

      Ars Technica actually has a good writeup of why we should stop treating x86 as this bastard dog of an instruction set, although they mostly relied on the fact that we have a huge installed base of x86 software.

      Honestly, I doubt x86 decoding seriously bloats the die that much - jeez, on a 0.13u process, how big would the original 8086 core be? Take a look at the die for a Hammer processor - x86 decoding doesn't take that much space.

      Just wait and see, that's my answer. Let the benchmarks prove AMD or Intel wrong. Intel's really relying on the brilliance of compiler writers, whereas AMD's banking on tons of experience. We'll see who has a better strategy...
      • > using SSE2 instead of x87 is a decent compromise

        In fact, it's significantly faster. The latest gcc has a switch (-mfpmath=sse) to do this.

        BTW, you're ignoring the fact that x86-64 is a significant improvement over x86, not just a 64-bit stretch. 8 new general purpose registers, 8 new SSE2 registers. It starts to look a lot like a real architecture, yet compiling to it is very little different from compiling to x86.
    • by roca ( 43122 ) on Monday July 29, 2002 @04:55PM (#3974048) Homepage
      > I do think the AMD architecture looks like a
      > piece of shit.

      You obviously don't know anything about it. In 64-bit mode Hammer gives you 16 general purpose registers (RAX, ..., RDI, R8, ..., R15) and 16 floating point registers (you are encouraged to do all FP using SSE2 and forget about the x87 crap). The GP registers are not overlaid (e.g., the 8-bit instructions access the bottom 8 bits of each register; AH,BH,CH,DH are not available). 64-bit mode is cleaner than x86 in other ways too.

      > the actual platform is beautifully elegant

      Unfortunately you can't run programs on the "actual platform". You can only run programs on the slow and expensive Itanium and Itanium 2.
  • by jpc ( 33615 ) on Monday July 29, 2002 @04:10PM (#3973689) Homepage
    If anyone actually read the lkml context, the remark was entirely in relation to the flood of recent patches making everything on 32 bit platforms support 64 bit sizes. Once upon a time it was just files over 2GB, then it was block devices over 2TB, now it is all sorts of shit because vendors are selling 32 bit machines that support 64GB of RAM.

    Now Intel of course just reckons that people should buy Itaniums if they want this (and apparently they did actually ship 250 of the Itanium 1...) but someone is buying these. Even though you have to use 32 processes in order to use the memory.

    Clearly these machines should be 64-bit; that's what Linus was commenting on. Then we could leave at least some of the limits in place for 32-bit machines without complaints.

    The other problem is non-atomic 64 bit ops on 32 bit machines, incrementing counters and such. This has caused quite a few problems recently.
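    A minimal C sketch of that last point (illustrative only; the two-word counter layout and names here are assumptions, not actual kernel code). A 32-bit machine updates a 64-bit counter as two separate 32-bit halves, so a reader that runs between the two steps can observe a torn value:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 64-bit stats counter as a 32-bit CPU sees it:
     * two 32-bit words that must be updated in separate steps. */
    static uint32_t counter_lo = 0xFFFFFFFFu;  /* low word about to wrap */
    static uint32_t counter_hi = 0;

    int main(void) {
        counter_lo += 1;          /* step 1: low word wraps to 0 */
        /* A reader running HERE would see hi=0, lo=0, i.e. the
         * counter apparently reset to 0 instead of reaching 2^32. */
        if (counter_lo == 0)
            counter_hi += 1;      /* step 2: carry into the high word */

        uint64_t value = ((uint64_t)counter_hi << 32) | counter_lo;
        printf("%llu\n", (unsigned long long)value);  /* 4294967296 */
        return 0;
    }
    ```

    On a 64-bit CPU the whole increment is a single instruction and the torn-read window disappears; on 32-bit, such counters need a lock or similar protection.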
  • In shocking testimony uncovered by The Inquirer, Linus Torvalds has publicly stated that the size pressure on "struct page" is largely due to HIGHMEM! This ground-breaking statement was a crushing blow for HIGHMEM fans, but received applause from struct page supporters. More information on this ground-breaking revelation as it unfolds...


  • I know his words are a little out of context, but that's already covered enough here, so I won't go into that part of things...moving on...

    I am the first to admit I don't totally understand the different 64-bit chipsets. That being said, it comes as no surprise to me that AMD has some advantages over the IA-64 offering. AMD has been blowing Intel away recently on many different performance levels; Intel has lost its quality advantage. Remember when buying an AMD machine was a big gamble back in the 486 days, or at least everyone thought so? You never hear that now. A lot of articles today tout the per-megahertz speed advantage AMD holds over Intel. The gap has been so large lately that AMD does the fake-MHz labeling thing so consumers can compare on a more apples-to-apples basis.

    Maybe Intel's time has come and the monopoly is on the verge of being broken. I for one would welcome it.

  • The only smart reason I see to go 64-bit is when you need more than 4 GB of memory. Outside the server/high-end workstation market, the technology isn't yet at the point of requiring that much memory.

    Maybe next-generation Windows will waste more of my memory, so I will need a 64-bit CPU.
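    For reference, the 4 GB figure is just the 32-bit address-space limit; a quick sketch of the arithmetic in plain C:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* A 32-bit pointer can distinguish 2^32 byte addresses. */
        uint64_t addressable = (uint64_t)1 << 32;
        printf("%llu bytes = %llu GiB\n",
               (unsigned long long)addressable,
               (unsigned long long)(addressable >> 30));
        return 0;
    }
    ```

    That 2^32-byte ceiling is per virtual address space; tricks like PAE let a 32-bit machine hold more physical RAM, but no single 32-bit process can map more than 4 GB at once.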
  • These companies are both broken. What needs to be done is a sacrificial lamb in the form of a dual-chip release. First you create a broken kludge of an upgrade 64-bit chip that is backward compatible with x86 (the sacrificial lamb), while at the same time (as a separate design) stripping out all the old outdated stuff and sticking LOADS of cache and other optimizations on a new chip that is 100% compatible with the 64-bit extensions of the sacrificial lamb. This would enable those with brains to move right to the new 64-bit chips, while those without brains could remain backward compatible.
  • Setting aside all the Linux kernel issues, Linus has a decidedly vested interest in the AMD part as Transmeta has already taped it out. So when he speaks of the kernel issues, keep in mind that his Transmeta stock options may speak loudly in his mind.
  • Intel's VLIW architecture is going to be a pain for compiler writers, greatly limiting the diversity of languages likely to be available. It will probably do well on C and Fortran-based benchmarks, but whether it runs your code or mine well is an entirely different question.

    I don't particularly like the x86 instruction set, but unless we all switch to Alpha or SPARC, x86-64 makes the most sense to me.

    • I don't think the Itanium ISA will "greatly limit" any of our language choices. The way I see it, at the very least we have two choices:

      1) Compile to C, and then compile the C to native code.

      2) Compile to the .NET CLR, which by the time Itanium takes off will be the standard development platform.
  • by GregAllen ( 178208 ) on Monday July 29, 2002 @04:31PM (#3973860) Homepage
    Linus passed gas yesterday.

    Droves of geeks were seen wafting in his wake, hoping to get a whiff.

    Must be a slow day for news.
  • by randomErr ( 172078 ) on Monday July 29, 2002 @04:33PM (#3973873) Journal
    Our Linus which art in Santa Clara, Hallowed be thy name.
    Thy kernel come.
    Thy will be done in desktops, as it is in servers.
    Give us this day our daily rpm.
    And forgive us our cron jobs, as we forgive our cron jobbers.
    And lead us not into temptation, but deliver us from Microsoft:
    For thine is the kernel, and the power, and the glory, for ever. Amen.
  • In this article, the "Inquirer" wrote:
    Intel would have to license the X86-64 code from AMD, a fact that might stick in its craw more than just a little.

    I don't see why. Instruction sets don't generally seem to be protected by any law. Otherwise, AMD would have had to license the x86 instruction set, which I doubt they did (and if they did, Intel would be in a great negotiating position). Or the various IBM, PDP, and VAX clones would have had to license the respective instruction sets, which, again, doesn't seem to have been the case.

    In fact, in their own article on the Transmeta use of x86-64, which they reference, they wrote:

    Sources close to AMD said that Transmeta "licensing" the instruction set, which it did last May, meant no more than it had decided to work with the instruction set and there were no real conditions or limitations on use for X86-64 code.

    That means that Intel - which as we reported here some time ago has a "skunkworks" preparing a 64-bit backup plan - can freely use the AMD instructions to make a processor, if that's what it chooses to do.

    Seems to me that the "Inquirer" agrees that x86-64 doesn't require a formal license.

  • by Anonymous Coward
    Why does Torvalds care about some rap crap 'singer'?
  • Out of context (Score:4, Insightful)

    by p3d0 ( 42270 ) on Monday July 29, 2002 @04:57PM (#3974064)
    This is entirely out of context. Linus says that there are some problems in Linux related to implementing certain 64-bit things in certain ways, partly because of gcc bugs, and so he said the equivalent of "let's hope x86-64 wins because then we don't have to think about it".

    I wouldn't take this particular quote to be his definitive statement of preference for x86-64 over IA-64.

  • DEAR STEVE JOBS (Score:4, Interesting)

    by roca ( 43122 ) on Monday July 29, 2002 @05:05PM (#3974116) Homepage
    Please port OSX to Hammer and stick AMD chips in your Macs. You can save face by pretending it's not x86 (even though it will make customers happy when they can run WINE and VMWare on OSX). Your programmers will enjoy the relatively clean 64-bit mode. You won't face the risk of being the sole customer of your CPU vendor. Best of all, you will be able to make cheap Macs with competitive performance. I promise to buy one if you do it.

    Robert O'Callahan
  • First, Itanium is the marketing name for the processor. The architecture is IPF, or IA64.

    Second, it's anything but pure. It also has an IA32 (i386) compatibility mode, that kills any die size benefits of a new architecture, at least for the next few generations until IA32 really dies.

    Third, even when it gets rid of IA32 compatibility, IPF may still be a pig: many people who know more about this issue than me consider it to be too complex and full of bad trade-offs, essentially stretching a good idea (VLIW) too far (EPIC).

    There is the argument that RISC architectures are essentially better. Too bad IBM can't find its way to the general market, Motorola has only proprietary Apples as its venue, Sun falters in execution and forfeited popularisation, and Digital was killed by elitism.
  • by software_non_olet ( 567318 ) on Monday July 29, 2002 @05:21PM (#3974224)

    What Linus says is not as important as the fact that his words are being spread and discussed all over the internet. That's proof that we no longer have a one- or two-player game (Microsoft plus Intel).

    An important power shift has taken place. Four players now steer further development: two OS and two CPU manufacturers. And to avoid deadly risks, they need to stay compatible with each other.

    Whoopee! The market is getting its power back!

  • by iabervon ( 1971 ) on Monday July 29, 2002 @06:35PM (#3974711) Homepage Journal
    He has nothing against the Itanium (in fact, Linux runs on the Itanium perfectly well). What he's hoping for is that Hammer takes off so the non-Hammer x86 market dries up and Intel goes to an Itanium/Hammer product line instead of Itanium/Pentium. What he's worried about is 32-bit machines with large memory and disk.

If you want to put yourself on the map, publish your own map.