Linux Software

Linus: Praying for Hammer to Win

An anonymous reader writes "The boys at Intel can't be happy with the latest opposition to the IA-64 instruction set. According to this Inquirer scoop, Linus himself has weighed in, and it appears he's putting his eggs in the x86-64 basket. In the original usenet post, he goes so far as to say that 'We're ... praying that AMD's x86-64 succeeds in the market, forcing Intel to make Yamhill their standard platform.'"
  • no AMD vs. Intel (Score:5, Informative)

    by reverse flow reactor ( 316530 ) on Monday July 29, 2002 @03:44PM (#3973492)

    Maybe I misinterpreted the original post, but I thought this had more to do with 64-bit vs. 32-bit (and the limitations of a 32-bit platform) than with AMD vs. Intel.

    The kernel compiles on a great many different architectures, most of them 64-bit (PPC, SPARC, MIPS...). However, i386 is the dominant architecture by sheer numbers. To maintain cross-architecture compatibility, the code has to accommodate the least capable architecture (i386). By pushing towards a 64-bit architecture, the limitations of 32-bit can be left behind (except for the nasty issue of backwards compatibility).

    Unless I just misinterpreted the post.
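
A quick aside on what those 32-bit limitations look like in practice: the sketch below (mine, not the poster's) just prints the pointer and long sizes a build actually gives you, which is where the 4 GB ceilings come from.

```c
/* Minimal illustration, not kernel code: on i386 this prints 4, 4 and
 * 4294967295 (a 4 GB ceiling everywhere); on a 64-bit target the same
 * source reports 8, 8 and 18446744073709551615. */
#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("sizeof(void *) = %zu bytes\n", sizeof(void *));
    printf("sizeof(long)   = %zu bytes\n", sizeof(long));
    printf("ULONG_MAX      = %lu\n", (unsigned long)ULONG_MAX);
    return 0;
}
```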

  • by Jack Wagner ( 444727 ) on Monday July 29, 2002 @03:45PM (#3973497) Homepage Journal
    Quoting Linus Torvalds: "- the page cache works with index/offset, and that should be your first priority, since the page cache is all that matters from a performance standpoint."

    That's all well and good for little-endian OSes, but since you are dealing with a static offset you have one extra instruction lookup for all big-endian machines. Thus if you port Linux to SPARC or Alpha you not only see a performance degradation of O(log N) but you lose one register slot in the level-2 cache for the offset lookup. In other words it will be slower, much, much slower.

    Warmest regards,
    -Jack

  • by awptic ( 211411 ) <`infinite' `at' `complex.com'> on Monday July 29, 2002 @03:49PM (#3973524)

    For anyone who has an hour and a half to spare... AMD (along with a few people from SuSE) made a great presentation on the X86-64 technology at the Linux kernel summit in Ottawa a little while back; the MP3 and OGG files are available at the sourceforge kernel foundry [sourceforge.net].
  • by mwarps ( 2650 ) on Monday July 29, 2002 @03:49PM (#3973525) Journal
    Linus seems to be more concerned with the broad functionality of the specific hardware than with the "brand" of it. Making Linux work with x86-64 looks to be easier than making it work "properly" (e.g., with fully 64-bit page sizes, addresses, etc.) on IA-64. Then again, IA-64 is so broken and slow that it really doesn't matter much in the grand scheme of things if a little can be made to go a long way with the Hammer. The small deficiencies the poster replying to Linus points out don't seem to be necessary to make things work.

    Regardless of who's winning the CPU war, it's nice to see that Linux is running on all the competitors.
  • by Tall Rob Mc ( 579885 ) on Monday July 29, 2002 @03:49PM (#3973529)
    We're not moving to a 64-bit index for the next few years. We're a lot more likely to make PAGE_SIZE bigger, and generally praying that AMD's x86-64 succeeds in the market, forcing Intel to make Yamhill their standard platform. At which point we _could_ make things truly 64 bits (the size pressure on "struct page" is largely due to HIGHMEM, and gcc does fine on 64-bit platforms).

    It sounds to me like he's praying for standardization on a 64-bit architecture, not the success of the AMD Hammer per se.
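
For context on the quote: with a 32-bit page index, the largest file offset the page cache can express is 2^32 times PAGE_SIZE, which is why growing PAGE_SIZE buys headroom without moving to a 64-bit index. A back-of-the-envelope sketch of that arithmetic (mine, not kernel code; the page sizes are just examples):

```c
/* Rough illustration of the index * PAGE_SIZE reach behind the quote. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t index_range = (uint64_t)1 << 32;        /* 32-bit page index */
    const unsigned page_sizes[] = { 4096, 16384, 65536 };  /* bytes */

    for (unsigned i = 0; i < sizeof(page_sizes) / sizeof(page_sizes[0]); i++) {
        uint64_t max_bytes = index_range * page_sizes[i];
        printf("PAGE_SIZE %6u -> max addressable file ~%llu TiB\n",
               page_sizes[i], (unsigned long long)(max_bytes >> 40));
    }
    return 0;   /* prints 16, 64 and 256 TiB respectively */
}
```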
  • by gmack ( 197796 ) <gmack@@@innerfire...net> on Monday July 29, 2002 @03:52PM (#3973556) Homepage Journal
    That is _only_ true for module interfaces. In the past he's been very picky about changes that break userspace.
  • by GutBomb ( 541585 ) on Monday July 29, 2002 @03:58PM (#3973607) Homepage
    Finnish/Swedish accent, actually, but whatever.
  • by binaryDigit ( 557647 ) on Monday July 29, 2002 @04:02PM (#3973645)
    but really, what advantage does it have on the high end not offered by Power, Sparc, PA-Risc, etc

    Simple: the ability to run M$ operating systems (which the other chips no longer have). As long as M$ has its weight behind the thing, Intel will always have a significant advantage. Reasonable (though not stellar by any stretch) x86 compatibility also helps.
  • by barawn ( 25691 ) on Monday July 29, 2002 @04:21PM (#3973770) Homepage
    You're right - from a theoretical standpoint. And, if all things were equal, IA64 should utterly rout x86-64.

    However, things aren't that equal. First off, x86 has had a lot of work thrown into it, and the current processors are quite good at implementing x86: I doubt there's a huge architectural penalty anymore - you can build virtually identical PPC and x86 computers and compare them, and even though PPC is a much better architecture, it's not going to blow x86 out of the water. Yes, it's idiotic to have, for instance, a stack-based floating point implementation, but the P3 and Athlon both make FXCH free, so it's not that bad anyway, and the P4's SSE2 implementation isn't bad, so using SSE2 instead of x87 is a decent compromise.

    Ars Technica (www.arstechnica.com) actually has a good write-up of why we should stop treating x86 as this bastard dog of an instruction set, although it mostly relies on the fact that we have a huge install base of x86 software.

    Honestly, I doubt x86 decoding seriously bloats the die that much - jeez, on a 0.13u process, how big would the original 8086 core be? Take a look at the die for a Hammer processor - x86 decoding doesn't take that much space.

    Just wait and see, that's my answer. Let the benchmarks prove AMD or Intel wrong. Intel's really relying on the brilliance of compiler writers, whereas AMD's banking on tons of experience. We'll see who has a better strategy...
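
To make the x87-vs-SSE2 point above concrete: the same C source can be compiled against either FP model, which is the compromise being described. A toy example (the gcc flags shown are the standard ones, but treat the exact invocation as illustrative):

```c
/* axpy.c - toy function; under -mfpmath=sse it compiles to SSE2 scalar
 * mulsd/addsd instead of the x87 stack's fmul/fadd. */
double axpy(double a, double x, double y)
{
    return a * x + y;
}

/* x87 stack-based FP (the 32-bit x86 default of the era):
 *     gcc -O2 -S axpy.c
 * SSE2 scalar FP instead of the x87 stack:
 *     gcc -O2 -msse2 -mfpmath=sse -S axpy.c
 */
```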
  • Re:Clarification (Score:5, Informative)

    by Waffle Iron ( 339739 ) on Monday July 29, 2002 @04:33PM (#3973880)
    The cores of chips like the Pentium are really RISC chips with hardware wrappers to implement the x86 instructions. So it's just a waste of die space. IA-64 is purer and a much better long-term choice.

    Except that two CPU generations from now, Intel will have had to change the underlying architecture of the IA-64 chips to get performance improvements, but they'll have to keep the instruction set compatible. So they'll have a hardware wrapper around the IA-64 instruction set, and that wrapper is going to have to second-guess the output of those rocket-science IA-64 compilers and rewrite the results on the fly.

    Why not just leave well enough alone and let the CPU rewrite code from today's simple, well understood compilers? The current x86 instruction set works like a bytecode VM. There's nothing wrong with that, especially since the IA-64 CPUs and compilers haven't exactly been blowing away the x86 chips in the performance area.

  • by rodgerd ( 402 ) on Monday July 29, 2002 @05:07PM (#3974126) Homepage
    HP don't actually have much of a problem, because Itanium is basically HPPA 3.0 with a bunch of x86 emulation stuff tacked on. HP have, in effect, gotten Intel to underwrite the development of their next-gen RISC architecture and hype it as the next big thing.

    In a scenario where Itanic is a failure (i.e., it ends up in a niche as a midrange-only CPU), HP-UX and VMS are in much the same position they were before: running on an expensive niche CPU.

    AIX still has POWER 4/5, so IBM don't care.

    The people who are screwed are the people porting their OS to what could become an HP-only chip.
  • by putzin ( 99318 ) on Monday July 29, 2002 @05:48PM (#3974430) Homepage

    True, but everyone here knows Itanium, not so much IPF.

    And anyone who claimed that Itanium is "pure" was not too terribly well informed. Actually, what defines pure for a processor anyhow? I agree with your second statement.

    As for three, I think the jury is still out. Wait for competent open-source compilers to be released (say, 5 releases of EPIC GCC from now) before anyone really makes a claim as to good vs. bad here.

    And finally, remember that desktop CPUs make up a very small percentage of total CPUs shipped. Motorola's biggest CPU customer is not Apple, but rather Motorola's own cell-infrastructure and networking businesses. Then they have other companies (Force, et al.) reselling their embedded PPC chips as well. Intel makes a ton of embedded CPUs. These are the high-volume chips that make their way into the cars, DSL routers, phones, cell switches, telephone networking equipment, and handheld computers that most people take for granted. A huge chunk of processors shipped aren't even 32-bit (you don't need more than 8 bits for many embedded apps!), so your argument that RISC is bad doesn't really hold water unless the only CPUs that count are desktop/server (less than 10% of the total CPU market by some accounts).

  • by stripes ( 3681 ) on Monday July 29, 2002 @07:21PM (#3974995) Homepage Journal
    IBM Power chips are 64-bit but they are actually different from PPC chips. Code written for one doesn't run on the other - something the Mac rumor mongers simply don't understand with their "Apple is going to use an IBM Power CPU" BS.

    Read IBM's own tech specs on the POWER4: it does the POWER ISA, PowerAS, and PowerPC. They are not mutually exclusive. PowerPC added a bunch of single-precision FP, and dropped (or made implementation-dependent) a bunch of DP and other stuff they didn't think a Mac needed. I think PowerAS has some stuff for using *huge* address spaces (useful for a capability-based system), but I don't know that much about PowerAS.

    I don't think any affordable Mac is going to use the POWER4, but Apple could do it for a high-end server, something like the Xserve but maybe 5 times the cost (since the POWER4 CPU alone is thought to cost about twice as much as the existing Xserve!).

    I also have my doubts about IBM putting AltiVec into the POWER4 (they did license it from Moto, though), and some real doubts about whether Apple would build a high-end system with an AltiVec-less CPU.

  • by ivan256 ( 17499 ) on Tuesday July 30, 2002 @12:01AM (#3976201)
    More has to happen than an IA-64 price drop. (Or more has to happen to cause an IA-64 price drop, depending on how you look at it.) IA-64 is a beast. It's a HUGE chip that drinks power. The system I used most recently drew more power than any of my kitchen appliances except the oven. That has to be fixed. The CPU with all the fixin's has to cost less than the power module alone probably costs right now. Then there's Intel not letting anyone actually build systems with Itanium in them; they white-box the systems and let vendors rebrand them. That's not going to go over well forever. You have to wonder what Intel is hiding that they won't let OEMs build boards and systems, though... what dirty little secrets does Itanium hide?

    The second problem is that it's proprietary. Yes, proprietary, just like POWER4 and PA-RISC. Intel bills it as open, but if you want open you should go SPARC, MIPS, Alpha (dead soon, unfortunately), or x86. Those are the architectures that have competing vendors manufacturing the cores. People write all kinds of software for x86, not just desktop applications. Itanium can't get that kind of support if only Intel makes it. You'll see x86-64 in embedded devices right out of the gate; there are manufacturers DROOLING over a low-power 64-bit chip to stick in their storage boxes and database servers. You won't see Itanium in there.

    You have to wonder whether there are two different companies over at Intel. You've got the Pentium 4, which is basically driven by the marketing department and is a huge marketing success, but the architecture is nothing to write home about and generally lame in the innovation department. Then you have the Itanium, which is a big grown-up microprocessor that was driven by the engineers and is going to turn out to be a marketing failure. Oh well.
  • by puetzk ( 98046 ) on Tuesday July 30, 2002 @12:15AM (#3976258) Homepage
    5-600W is a big range. How about some real numbers just for grins :-)

    As viewed from my 650 VA UPS (which will tell me the wall power consumption, including all losses in the PSU, etc.), my dual Athlon (2x MP1600) plus 17" monitor sits at approximately 400 VA load. When the CPUs idle to C2 (most of the time) it drops about 80 VA, and if the monitor sleeps it drops another 110 VA or so.

    So the fixed consumption (i.e., PSU + HD + video + mobo + CPUs at idle) is about 200 VA, each CPU is about 40 VA (the difference between C2 idle and max load), and the monitor is about 110 VA.

    Making the (basically reasonable, though not perfect) assumption that a switching power supply's power factor is close to 1, 1 VA = 1 W. If the power factor is below 1, then each VA corresponds to less than a watt of real power (i.e., all my numbers are overestimates in that case).

    So it's a good heater, but not as bad as you feared. The lights in the room are using as much power as the idling computer; the computer only edges them out during a good quakin' session :-)
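
Restating that arithmetic in one place (the measurements are the poster's, the breakdown is mine, under the same VA ≈ W assumption):

```c
/* Breakdown of the parent's UPS readings, assuming power factor ~= 1
 * so VA ~= W. All figures are the poster's measurements, not mine. */
#include <stdio.h>

int main(void)
{
    const int total_va   = 400;  /* dual MP1600 box + 17" monitor, CPUs loaded */
    const int monitor_va = 110;  /* drop observed when the monitor sleeps */
    const int cpus_va    = 80;   /* drop observed when both CPUs idle to C2 */

    int fixed_va = total_va - monitor_va - cpus_va;  /* PSU+HD+video+mobo+idle */
    printf("fixed base load : ~%d VA\n", fixed_va);   /* ~210, i.e. "about 200" */
    printf("per CPU (load)  : ~%d VA\n", cpus_va / 2);/* ~40 */
    printf("monitor         : ~%d VA\n", monitor_va); /* ~110 */
    return 0;
}
```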
