Linux Leads MS in Itanium Support 129

lizrd writes "The New York Times is reporting (yeah, yeah, you gotta sign up to read it) that several Linux distros will be shipping stable versions of Linux for Intel's new 64-bit Itanium chip on the day it is released to the public. Microsoft, however, will not be supplying a version of Windows for Itanium until sometime in the fall of next year, several months after the expected May release of the new processor."
  • by aussersterne ( 212916 ) on Friday December 22, 2000 @10:50AM (#543194) Homepage
    What if the first few batches of silicon are utter, buggy crap? If Linux is the only native OS out there for Itanium, will Linux perhaps get blamed for or at least associated with all of the problems that can occur in a new platform?

    What I'm saying is, what if Itanium+Linux=crap just gets shortened to Linux=crap in the minds of some folks, even though the shaky new Itanium platform was really at fault?

  • Think about it, guys. Since when has Microsoft led the OS market in CPU utilization?

    One of the funniest questions I ask people who think NT is great:

    If it's so great, then why does it only run on bottom-of-the-line hardware?

    Yeah, for a while NT did run on Alpha, but that was basically unsupported by both M$ and third-party vendors.

    Oops, a reply to my own message. Yes, M$ _DID_ try to utilize MMX when Intel first came out with it, and "exclusively" at that. However, I still think that my Win2K box with a 1GHz processor is SLOW! So I guess the bloat has drastically overcome any optimizations they have done. Also, Intel was a witness in the government case regarding this MMX scam.
  • Unfortunately, it's mostly irrelevant that Linux will be available first. How many hardware vendors are going to actually develop systems to use this chip and market them without Windows? Will Dell accept the overhead of carrying Linux-only hardware? I don't think so.

    On the other hand, this may be a chance for VA Linux to have a product that you can't buy elsewhere.
  • Being first to support a chip doesn't say much. Apple was first with a graphical OS, but Windows has trounced them. I don't know how stable the Itanium builds of Linux are, but the most important thing to consider is how much support the OS has. Sure, Linux has a lot of support on Slashdot, but the cold hard fact is that Windows has a lot of support for servers. Lots of people will still blow off Linux and wait for the Windows version to come out. To see whether Windows will flop with their Itanium configuration, you first have to wait and see how many Windows Itanium packages are sold when it arrives. I'd like to see Linux succeed and all, but don't make assumptions from a lack of information.
  • by Anonymous Coward
    I have an IA64 system as well, and have been working on the 64-bit FreeBSD OS for Itanium. The reason the GCC compiler is so far behind Microsoft's (in terms of stability) is that the new 64-bit technology is still a moving target. The people at Intel need to get their heads out of their asses and make up their minds about what they want to put in this processor (e.g., they have added and removed the new JK-111 turbo cycles for multimedia programs *three* times in the past six months because they didn't think it was ready). It wasn't until last week that they decided whether this would be a CISC-based or RISC-based chip! And they expect us to be ready when the chip rolls out next June! My God, they don't even have anything for us to work with!

    As for stability issues, I think most of those have been fixed in the CVS version of gcc, which is due to be released next month (you can cvs it now though, btw).
  • The 1/2/2001 issue of Red Herring magazine has a cover article titled "Limping Giant:
    The much-delayed Itanium chip, Intel's most ambitious project in 32 years, is frustrating the chip maker's plans for life after the PC," by Dean Takahashi.

    I summarized the article as follows:

    * Itanium is 2 years late
    * Itanium was not ready for its official October launch event
    * The Itanium delay has:
      o benefited Sun and frustrated HP and SGI
      o created investor doubts
      o reduced Intel's 4Q profit
      o resulted in a management shakeup (Albert Yu was "the first head to roll")
    * Intel has been diversifying, but microprocessors are still 75% of revenue
    * Itanium is key to maintaining high microprocessor margins
    * The Itanium delay has been attributed to:
      o loss of manic focus on chip design
      o loss of engineers to communication-chip makers
      o poor execution
      o frequent spec changes to keep pace with competitors
      o the departures of Lew Platt, Richard Sevcik, and David House
      o layout delays caused by an off-shore design group
      o an over-inflated design team (300 to 500 engineers)
      o overspecialization of circuit designers
      o difficulty in coordination
      o a high number of inexperienced engineers
      o a high engineer turnover rate (30%)
      o frequent abandonment of large blocks of finished work
      o a committee that demanded approval for everything
      o declining engineer morale
      o bad blood between Intel and HP engineers
      o changes in the target manufacturing process
      o failure to manage chip complexity

  • Are you stupid?
    No.
    MS supports what makes them money. NT on Alpha wasn't profitable, so it got dropped. Incredibly, they have chosen to support what is by far the most popular chip architecture. Amazing!

    Got a cite for the MMX bit? It sounds like bullshit. Using M$ rather than MS doesn't do your credibility any favors.

    Yup, this google search [google.com] picked up some 1,250 results. And this article [techweb.com] titled "DOJ Says Microsoft Bullied Intel" is a lame but existent example.
    Slow at 1GHz? Worthless anecdotal opinion, but a few possible explanations nonetheless: 1. You don't have a clue what you are doing with W2K, and probably fucked something up trying to 'tweak' it. 2. You are running loads of services and programs in the background. 3. You're confusing 'is running slow' with 'menus take a while to appear': 400ms is the default delay; search the registry and tweak (oooh) the delay. 4. You have 64MB or less RAM. It's cheap, buy some more.
    Is it? No, I don't have a clue why my box with 256MB of RAM (and a 1GB page file) runs out of memory when I click on the Microsoft IntelliMouse configuration. And the only "tweaks" I have done are stopping unneeded services, turning off window animation, speeding up menus to a 0ms delay, etc.
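The menu-delay tweak mentioned above can be written down as a .reg fragment (a minimal sketch, assuming a stock Win2K box; MenuShowDelay holds the delay in milliseconds as a string, default "400"):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Control Panel\Desktop]
; delay before menus open, in milliseconds; "0" makes them appear instantly
"MenuShowDelay"="0"
```

Log off and back on for the change to take effect.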
  • ...there are things besides the CPU we're forgetting about here. I'd like to know what kind of stuff is needed in order to even run a mobo that utilizes the Itanium. What features does it offer? Is it going to require the cursed Rambus? My hunch is that it will, since Intel stated that they're going to give Rambus another try, even though it's been proven to be no better than the cheaper alternatives. What other kinds of hardware will work with this? Will your current nVidia GeForce work? I think, big deal, the Itanium is coming out; are there going to be totally new requirements in order to run a totally new platform?
  • I got from the previous poster's comment, not that MS doesn't have Itanium boxes to test on, but that Linux would have been tested in real world cases by those that have taken the risk of implementing some system with new Linux/Itanium boxes.

    Real testing comes with real use, not in labs.

    Although MS could do real testing if they move something like HotMail onto Itanium machines. But I would doubt they would do such a thing before a release.

    Steven Rostedt
  • The fact that the /. slogan is "News for nerds. Stuff that matters" has NOTHING to do with groundbreaking news.
    This news isn't supposed to be "right out of the factory"...
    /. is a community to help geeks (LAZY GEEKS) find the news in one place, and one place only, then comment on that news, and finally go get more news so the wheel doesn't stop. IF YOU WANT "NEW" NEWS, then go somewhere else.
    Ppl come here to share ideas and to bring news, not to take it right out of the oven... although some ppl, like you, come here to whine.

    BURP!
  • BTW, I wasn't karma whoring!
  • emacs can do this. Meta-slash.

    There should be some kind of published document detailing the statistics of any mention of an IDE degenerating into a pro-emacs-flame thread. :p
  • I don't know how all those Linux zealots can disagree except by being zealots. I've got a box on which I've tried to install three different Linuxes and two BSDs. I would normally discount it as a WEIRD BOX, except Win95, WinNT 4.0, and Win98 all installed fine (yes, twenty reboots later; yes, a pain in the ass, but THEY INSTALLED!). In fact, I now install Win98 first so that I can find all my addresses/interrupts before I install Linux.
  • Sun has an Itanium strategy? News to me. Please clarify.
  • "Not if they are running Whistler."

    What part of "first Itanium boxes must have Linux to run" didn't you understand?

  • No, I'm a guy. And I'm straight, not that that really matters. But doing anything I set my mind to is a comment on any stereotype. In my case, it's about being a geek not stopping me.

    Back when I was in high school, people used to bother me about it. I'm starting to sound like Jon Katz now, so I'll stop, but I hope you get the point. I'm still a bit sensitive about it. (:

  • by mlong ( 160620 )
    I'm not sure how this is news. TurboLinux announced a beta [cnet.com] in March.

    I also find it amusing that Compaq had completed porting Tru64 over in April of 1999 [compaq.com] but decided to drop the port. It took them only 4 months to do the port.

  • Actually, WinNT on Alpha ran in 32-bit mode IIRC.

    Also, I've had a chance to use a Linux-powered Itanium system for a while. Not as impressive as it could have been, since GCC is not up to snuff so far in dealing with instruction-level parallelism (a very high percentage of the output instructions were noops). It can only get better though.
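To make the noop point concrete: IA-64 code is issued in 128-bit bundles of three instruction slots, and when the compiler can't find independent instructions to fill a bundle, it pads the empty slots with nops. A hand-written sketch of a poorly filled bundle (illustrative only, not actual GCC output; the register numbers are made up):

```
{ .mii                    // bundle template: one memory slot, two integer slots
      ld8   r14 = [r32]   // the only useful instruction in the bundle
      nop.i 0             // integer slot wasted: no independent work found
      nop.i 0             // another wasted slot
}
```

A smarter scheduler fills those slots with independent instructions hoisted from elsewhere in the code, which is exactly the part GCC was weak at.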
  • How will they be known as stable versions of Linux if they're shipped on day zero? Stability on a half dozen lab machines? That doesn't automatically translate into robustness on thousands of boxes after wide deployment.
  • It is my understanding that Linux currently supports the Itanium.
  • You were doing installs on machines where NT had no significant problems.

    The people who do run into problems are using hardware where there are problems.

    For example: I can pretty much state that you weren't trying to do a multi processor install on a machine with an embedded Adaptec SCSI chip. If you did that you would find out very quickly what people were complaining about.

  • It remains to be seen if the Itanium is really where personal computing is headed.

    Perhaps the PC industry has always remained stuck in the x86 rut not out of choice, but because it couldn't escape.

    The important software was from Microsoft: DOS, Win 3.1, Win 9x. It only runs on x86 processors. In recent years the very latest MS OSes could run on other processors; only this year does MS even have an OS (Win2K) that they could port to chips like Alpha, MIPS, and PPC, and AFAIK they haven't yet. So even with MS's capability to become processor-neutral (never mind the software packaging issues that arise), it will just take time, and demand, for things to slowly move out of the x86 rut.

    I've often thought (when not distracted by cute guys needing tech support) that if all major software were processor-neutral, and the over-the-counter software packaging issues were solved, then hardware designers would be truly free to innovate once again instead of trying to remain compatible with not just 20-year-old hardware, but misdesigned 20-year-old hardware.

    If non-x86 processors are initially expensive, it's just supply and demand. So we may have a serious inertia problem getting unstuck from the x86-only rut even if the other impediments are removed.
  • If you had actually read the article, you would know that several Linux distros are shipping RELEASED, STABLE versions for the Itanium, while what Microsoft has is an unstable (aren't they all?) version that won't be released to the public until long after the Itanium release.
  • Microsoft's compiler is a total joke.

    If this is true, and I'm not disputing it; then I find it simply amazing.

    If MS is keeping their eye on the ball then they should realize the importance of having a good compiler, development tools, etc. I don't use VC, but I know people who swear by it (not at it) and seem to think MS has invested huge resources into it. So why not good fundamental compiler technology?

    Well, geez, with MS's resources, couldn't they just hire some gorgeous compiler dudes?
  • Ohhh...extra tools and services - gee...then in addition to registry tweaks, they put a check box in the install saying "Do you want server tools and services"

    Big fuckin deal.
  • One discrepancy I'd like to point out: MS has "had" Windows NT ported to 4 different architectures since release 3.5, and NT was actually developed on a RISC processor (an Intel vintage; I forget the model). From there its underpinnings were easily moved to Alpha, PPC, and MIPS. FYI, Microsoft STILL does all their Win64 development on Alphas even though they don't support them anymore. I've been told by employees at Microsoft of labs FULL of XP1000s... doing nothing but Win64.

    Peter
    --
    www.alphalinux.org
  • Actually, in spite of what I wrote, there are cases in which Linux is an easier install than Windows. For instance, today I upgraded my dual boot Win 98/Mandrake 7.2 laptop to Windows Me/Mandrake 7.2.

    The Mandrake partition works as great as ever, but Windows Me now cannot see my CD-ROM. It says that the secondary IDE controller (dual FIFO) is defective. I used one of my two free support calls to MS on that one; I hope they send me the drivers I need.

    So in this instance Mandrake was less of a hassle than Windows. I doubt that my next Mandrake upgrade will break my CD-ROM access.

    Even so I still remain convinced, based on being a regular user of Windows and Linux, that Linux has more gotchas than does Windows, although Mandrake deserves credit for what I think is the easiest-to-use Linux installer yet.

  • And to whose credit do we owe this great lump of knowledge, Mr. AC?
  • Ahh, yes, I remember the last stable Microsoft processor. I believe it was something like MSMouse. Does anyone else remember that processor?

  • So MS would wisely wait a few weeks or months to make sure their OS installs 99+ percent of the time.

    But immediately upon the release of a new processor, just how great of a diversity of different types of hardware do you anticipate there to be?

    Me: not that many. Distro vendors could likely support them all well. It's not like trying to support the diversity of legacy x86 boxes. MS has a huge collection of knowledge about such boxes (often boxes that no longer even have documentation available), giving them a tremendous advantage in autodetecting older hardware. And they've had a long time to perfect their autodetection techniques. (Well, polish, not perfect.) But for the future, Linux gets to start off on a more level playing field. As time marches on and legacy boxes go to heaven (or hell; maybe only non-x86 boxes go to heaven?), this only works more and more to Linux's advantage.
  • Win2k isn't all that much better. It's an order of magnitude more stable and scalable than NT but installation is still a major pain, and despite what the M$ flacks will tell you there are still blue screens galore.

    What a complete pile of bullshit. It is remotely possible that you're running 2000 on a complete POS PC, in which case the culprit may be faulty hardware or crap drivers (for example, ATI still hasn't figured out how to make stable video drivers), but BSODs are incredibly rare on 2000, and on any variant of NT 4 as of SP4. While my job is software architecture, I incidentally run several NT4 machines that are heavily loaded with web, file, and database services, and they are only ever rebooted when serious security fixes come out (in which case it isn't a big deal to install and reboot in off hours; if it were a 24/7 machine I'd have it clustered and it still wouldn't be an issue). My development machine has never BSOD'd in 2000. EVER. And I guarantee I beat my machine as hard or harder than anyone out there. Again, if you run POS hardware, well...

    Is Windows 2000 perfect? Hardly. explorer.exe crashes on me every now and then. Actually, on a GeForce MX-equipped machine it crashes at a frequency I would term "often": maybe once a day. Ctrl-Alt-Del... find explorer.exe... kill process... run "explorer.exe". The only harm is that ICQ no longer has the cute little system tray icon.

  • Yes, the BSODs I've seen are mainly ATI.DLL related.

    I guess the problem is that ATI's shoddy work shouldn't be in a position to crash the machine. That would be a design decision, if I'm not mistaken.

    I've had good experiences with NT4.0 AND Windows 2000, as well as bad. I have a 2000 Server running on crappy (think i586) hardware and it doesn't complain in the slightest. The uptime has been better than I expected. In a previous life I also had good experience with an NT 4 Server that ran pretty much 24/7 with occasional prophylactic reboots. But even in those successful cases, occasionally restoring from backup has been much more than a theoretical exercise.

    But for every success story I've had, there have been the headaches. The installations that just wouldn't go right, or the machines that just don't seem to want to play nice with the operating systems. It's surprising to me that a commercial-grade product like Windows 2000 is so damned touchy.

    The primary point here is that the INSTALLATIONS are not very much fun. Linux installations, on the other hand, have been pretty good to me. Certainly it's anecdotal evidence and mileage varies, but that's how I have seen it.
  • I guess the problem is that ATI's shoddy work shouldn't be in a position to crash the machine. That would be a design decision, if I'm not mistaken.

    The same design decisions are made in Linux, where any number of device drivers are fully capable of taking the whole ship down with them. While it would be nice to have a fully microkernel architecture a la QNX, where nothing but a tiny segment of core code can take down the machine and things like video drivers can be killed and restarted, the performance hit was unwanted, so Microsoft made what I'd consider a pretty reasonable compromise and decided that as of NT 4 they'd move several outer-ring drivers into ring 0. The idea is that vendors would be expected to pursue a microkernel-type design in their drivers (i.e., not sticking everything in the DD), and that what they do put in there is super-ultra-mondo checked. I have found that Matrox, for example, does an awesome job; they have excellent video drivers. ATI, on the other hand, has single-handedly, IMHO, given Windows NT/2000 a bad reputation. I like ATI, but they need a serious shit-kicking for the crappy, unstable drivers they put out. This is all anecdotal and IMHO.

  • Aren't the ATI drivers certified by M$? They come on the CD...
  • ...on the proven technology front it can only be good for Linux to be ahead...

    It also helps refute the "chasing taillights" argument.

  • by Throw Away Account ( 240185 ) on Friday December 22, 2000 @09:09PM (#543229)
    Perhaps the PC industry has always remained stuck in the x86 rut not out of choice, but because it couldn't escape

    Er, no. Transitioning to NT for Alpha with FX!32 was no more technically difficult than the 68k-PPC transition. The reason that the PC industry has "remained stuck in the x86 rut" is because it didn't have a dictator like Apple to force a transition, and because RISC isn't magically superior.

    Software designed and optimized for a consumer-grade desktop RISC chip doesn't perform any better than software designed and optimized for a consumer-grade desktop x86 chip. Relative "elegance" is irrelevant -- there is no material benefit to a switch.

    Nor is it an effect of market position giving higher-volume consumer chipmakers an advantage -- otherwise a top-of-the-line Motorola chip would certainly outperform a top-of-the-line AMD chip, given the 1995 positions of each company. But the Athlon manages to kick the consumer-grade PPC's ass anyway.

    All the bullshit about RISC superiority is just that -- bullshit. Inertia is an excuse, not an explanation.
  • by Anonymous Coward
    (Posting as AC so Intel doesn't take my IA64 boxes away for talking too much.) I've been developing on/for IA64 hardware for several years now. The hardware is not ready for release. Linux is not either, no matter what TurboLinux claims. I wouldn't say Whistler/64 is behind Linux; it seems quite a bit more stable than Linux on multiproc machines. I'll probably get modded down for this, but the stability and maturity of the MS compiler is way ahead of GCC for IA64. C++ support is one area where GCC for IA64 is severely lacking. Check out the mailing lists on linuxia64.org to keep up to date and find out where you can help.
  • Actually, during one of the development cycles, MS threatened to stop working with Intel and work only with AMD.

    Intel does work with other OSes, but that assumes other companies actively work with Intel. With the greater public knowledge of and work on Linux, there are now people who work on that particular segment.


    yours,
  • by Bill Currie ( 487 ) on Friday December 22, 2000 @10:55AM (#543232) Homepage
    The beauty of Linux is that it doesn't have to target just the server market (what is Linux anyway? Surely not Red Hat? :). Those interested in developing Linux's server capabilities do so, as do those interested in the desktop. There is no conflict of interest/resources, because one side is more or less unavailable to the other anyway (i.e., someone interested in working on Linux's desktop prospects will not, in general, be well suited to working on the server aspects). And when you get kernel-specific, improvements on one side generally help the other.

    There are a lot of highly specialised Linux distros out there (check out the distros page on lwn.net [lwn.net]) with all sorts of uses (routers, terminals, servers, workstations, ...). There is no real shortage of resources for Linux development. When something new is needed, it's generally done by the group that needs it, which is often a new group, not an old one abandoning its previous project.

    Bill - aka taniwha
    --

  • by sheldon ( 2322 ) on Friday December 22, 2000 @10:56AM (#543233)
    How backwards compatible is the Itanium?

    The article mentions it contains a 32-bit section which allows it to run older x86 instructions.

    What I'm wondering is whether the reason for the new versions is just to take advantage of the new 64-bit world, or whether you could actually just install, say, Windows 95 onto it and live crippled.

    Just kind of curious how important this OS battle is to the adoption of the processor.

    I'm not sure what kind of market demand there will be anyway. I don't see most computers today as being CPU-bound.

    What a complete pile of bullshit. It is remotely possible that you're running 2000 on a complete POS PC, in which case the culprit may be faulty hardware or crap drivers, but BSODs are incredibly rare on 2000. My development machine has never BSOD'd in 2000. EVER.


    Well, just so you know, I'm running Windows 2000 Professional on a dual-CPU P2 box stuffed with goodies like UW-SCSI and redundant regulated power supplies. And still it crashes. You're right, however: there is no BSOD anymore, the box just freezes. What an improvement!

    And no, I'm not doing crazy things on the box, I'm just running Office and AutoCAD for engineering tasks.
  • by Pink Daisy ( 212796 ) on Friday December 22, 2000 @10:56AM (#543235) Homepage
    While having Linux running on Itanium is great, it isn't really surprising. Linux already runs on other 64-bit architectures, so the porting was probably easier than for Windows, which IIRC ditched Alpha some time ago, and has not supported anything other than x86 for some time.

    It's not really even significant. I doubt there are going to be a tremendous number of Itanium sales next year, anyway. It's nice that early adopters use Linux, but not Windows, but not very significant.

    The more interesting question is about gcc. How is gcc's support for Itanium coming? The EPIC architecture probably requires a lot from the compiler to make good use of it. I assume that gcc *does* support Itanium, since Linux is running on it, and porting Linux to another compiler would probably be more effort than porting it to another platform that gcc targets.

    If Microsoft has a significantly better compiler, Windows will probably be a much better system for Itanium. I've heard of Intel's involvement with gcc, so I doubt that MS will do much better, but still, support is just a baby step in the battle for the best system.

  • Nothing!

    The only thing of interest is whether Intel actually releases and pushes the chip in any way before MS is ready; if so, then perhaps we can safely say Wintel is dead. If they hold off with anything other than a "you can get it now" release, then it's another point for the split-MS argument.

  • Good lord man, what are you smoking? Can I have some? Have you ever tried to install Windows? It's the biggest pain in the ass in the world. Far, far more difficult than Debian or Red Hat, especially on machines with more than one OS.

    It took me over five tries, and I still don't know how it works, but I got NT4 Server, 98SE, and Debian all running on the same hard drive. NT4 and 98SE HATE each other!

    Even just installing NT4 gives me problems half the time. It will do its god-awful install process, fail to recognize my ethernet card, finish anyway, and tell you to reboot. Then you reboot and it hangs. If it doesn't hang, it installs for another five minutes and tells you to reboot again. Getting it installed with all my drivers took 8 reboots on NT4, which is no fast thing. It took roughly the same on 98SE. It took me ONE on Debian.

    Oh, and the eth card that NT4 has trouble with is a really off-the-wall card: a 3Com 3c905b. Oh, wait, that's one of the MOST COMMONLY USED CARDS EVER! Linux loves it. 98 loves it. Why not NT?

    Anyway, installing NT4 is the hardest part of my tech-support job too. It's really a pain. I am SURE that getting it to install on EVERY computer is NOT their big concern. NT4 is the most fickle install I have ever witnessed, well, aside from earlier versions of Linux... :-)

    Justin Dubs
  • Seems to me the person was reading 3 different Slashdot stories and wrote one response trying to fit all three.

    Hey, you never know with some people.
  • Actually, they are already being tested by companies in some form. Being first out of the gate is meaningless if it won't be adopted. Better to wait until it's out and ship something that ppl will use.
  • by Greyfox ( 87712 ) on Friday December 22, 2000 @11:27AM (#543240) Homepage Journal
    I foresee UNIX being the most popular Itanium platform, but I don't foresee Itanium being all that popular anytime soon. What it really buys us is an awareness on the part of the suits that 64-bit computing is popular. Once some Dilbertesque Mangeroid decides he needs one of those "newfangled 64-bit thingies," the engineers will have a much better chance of pitching (insert your favorite 64-bit chip here). So though Intel may not end up being the 64-bit chip of choice, they may end up being the catalyst that draws us all into the age of widespread 64-bit computing.

    Has anyone moved up to a 128-bit processor? DEC had a good long head start, having introduced a 16-bit processor back in 1970 (my DEC assembly language book hypothesizes that a 32-bit chip would enable faster processing but would be prohibitively expensive to make :-), but they haven't kept that margin, though I seem to recall that they have a nice wide memory bus.

  • by xmurf ( 236569 )
    What was that thing about yo' mama again?
  • OK, accepting what you say (which I do), the question is: has Intel made any active attempt to keep themselves from being bullied in a dominated market (which, as you say, the developers felt)? If two monopolies (and let's face it, Intel and MS have been undisputed monopolies for many years, though this may finally be breaking for both) work in completely dependent markets (Intel must have software and MS must have hardware) and never tread hard on each other's toes, then they should be viewed as one monopoly. Have we seen MS keep an OS in the wings on another chipset in case Intel shat on them? NO, they dropped NT on Alpha (and let's face it, if Intel dropped i386 chips there would be a long pause while the competitors picked up the slack). Does Intel actively support another major OS to ensure that if MS tries to move to Mac/ARM/Crusoe hardware their plans can still be in some way useful? NO.

    How many companies would allow themselves to be 90%+ dependent on one other company? Where is the insurance? It's this fact that makes Wintel "alive," whatever the PR speak and official lines say.

  • AMD has an upcoming 64-bit architecture which will be binary compatible with the x86 (much like the 386 could run 8086 software unchanged). I'd put my money on them over the Itanium. I haven't read much about it lately, other than a Slashdot announcement a while back. Does anyone have further news on that?
  • I think inertia is more of a reason than you give it credit for.

    I don't think RISC is a magic bullet, but I think that MHz for MHz it can result in better overall performance. I say this based on two things. First, the actual performance comparison between 68K Macs and PPC Macs (even when most of the system software was running in emulation!).

    Secondly, Apple's excellent technical explanation of why they were moving to RISC. It moves the complexity out of the hardware and into the compiler back end, which only has to happen once. The short explanation: if each CISC instruction were translated directly into a short sequence of RISC instructions, you would get to schedule the order of execution of those RISC instructions in the compiler. By rearranging the execution order, you effectively have multiple former CISC instructions executing overlapped (i.e., part of the next CISC instruction has already started in some of the functional units prior to completion of the previous one). Instead of putting extra hardware on the chip to do so much optimization, you end up putting the extra transistors into more functional units that do actual work. Without really good compiler technology, RISC is worthless.

    In the case of the x86 specifically, I think this applies even more so. You spend those transistors on functional units instead of on misdesigned 21 year old legacy instructions and modes.

    Finally, I think Intel has managed to squeeze so much extra life out of the x86 because they were able to pour buckets of money into its development, disproportionately so. Witness the heat problems x86s had compared to equivalently clocked PPCs.
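The scheduling argument above can be sketched with a toy pipeline model (a minimal sketch under assumed latencies, not any real chip: loads take 2 cycles, everything else takes 1, and the machine is single-issue and in-order). Interleaving two independent load/add pairs hides the load latency that a naive, one-CISC-op-at-a-time ordering pays twice:

```python
# Toy in-order, single-issue pipeline: a load's result is available
# LOAD_LATENCY cycles after it issues; all other ops take 1 cycle.
LOAD_LATENCY = 2

def cycles(schedule):
    """schedule: list of (op, dest, srcs) tuples. Returns cycles to drain."""
    ready = {}  # register name -> cycle its value becomes available
    clock = 0
    for op, dest, srcs in schedule:
        # stall until every source operand is ready
        clock = max([clock] + [ready.get(r, 0) for r in srcs])
        clock += 1  # issue takes one cycle
        ready[dest] = clock + (LOAD_LATENCY - 1 if op == "load" else 0)
    # account for a trailing load still in flight
    return max([clock] + list(ready.values()))

# Two memory-to-register adds, as a naive "one CISC op at a time" expansion...
naive = [("load", "r1", []), ("add", "r3", ["r1"]),
         ("load", "r2", []), ("add", "r4", ["r2"])]
# ...and the same four ops with the loads hoisted, as a scheduler would emit
hoisted = [("load", "r1", []), ("load", "r2", []),
           ("add", "r3", ["r1"]), ("add", "r4", ["r2"])]

print(cycles(naive), cycles(hoisted))  # the hoisted order finishes sooner
```

The absolute numbers are meaningless; the point is that exposing the RISC-level ops to the scheduler lets useful work fill the load-delay slots.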
  • Considering word size is different, there's obviously some sort of emulation going on.

    I couldn't give you the details, but if it were all that simple, Linux/MS would not NEED to release a new version for it, would they? They'd get around to an optimized version later.

  • What a complete pile of bullshit. It is remotely possible that you're running 2000 on a complete POS PC, in which case faulty hardware or crap drivers may be to blame (for example, ATI still hasn't figured out how to make stable video drivers); however, BSODs are incredibly rare on 2000, and on any variant of NT 4 as of SP4.

    I got a BSOD on my Dell PowerEdge running 2000 on the first day, with only the software that I got from Dell on it. It was a different blue screen from the one NT often serves up, but it was still there. Later I installed Mandrake 7.2 on it, and it hasn't crashed since.

    I will say one thing for it though, now when explorer crashes (all the time) it at least usually restarts it and not much harm is done. Finally MS has an OS that is actually usable, if not quality.

  • by jafac ( 1449 ) on Friday December 22, 2000 @12:48PM (#543247) Homepage
    The whole POINT of the "PC REVOLUTION" was in "commodity hardware". That is, get tons of manufacturers churning out parts that ran to the PC spec set forth by IBM (forth - um, open firmware joke, get it? nevermind).

    The point was to develop this cheap-ass piece-of-junk platform to the point where people didn't need to pay extortionate fees to Sun, DEC, SGI, HP, Intergraph, and the mother ship, IBM. Now DEC is gone, HP is just another Packard Bell, so is Intergraph, and SGI is acquisition fodder. Only Sun and IBM really remain as strong players. I'm guessing that has nothing at all to do with the PC revolution, and more to do with the Internet revolution and the need for bulletproof servers.

    Until Intel got a monopoly in chips (AMD was a nice try, but are they REALLY positioned to harm Intel? Last I checked, Intel was still dictating platform standards), it was an open platform and the dream was alive. Someday there was going to be a beefy and robust PC that could replace expensive minis at commodity hardware prices, and run an OS grandma could admin. Then Intel figured out that with a monopoly they wouldn't have to compete with any other players: they could set the standards and block this insanity from happening. Sure, they'll still be producing commodity hardware, but they'll be using the enterprise pricing model. And using their IA-32 market dominance to crowbar Itanium into the enterprise server market, no matter how technically inferior it is. If it runs Win32, it's golden. No matter how overpriced it is. No matter how much laughter it generates when placed next to REAL enterprise hardware.

    It's called market segmentation. The Celeron/Xeon thing was a small-scale application, and proof of concept. Look at the technical difference between Celeron and Xeon. Then look at the price difference. You could put a Xeon in a desktop machine, and benefit, but the price made it not worth it. Granted, Itanium will have a big technical difference - PCs DO need to go 64 bit to be serious in the enterprise server market. But they need MUCH more than that - in a practical sense, less performance for more $? Crazy. That's market segmentation. A tool designed to artificially constrain supply in the marketplace, to drive up prices, while not suffering from constrained supply (and high costs) on the manufacturing end. The results? Pure profit. Bring lots of vaseline.
  • IA-64 chips were to have a hardware emulation unit, not unlike how PPC chips run 68k code.

    This sounds very unlike how PPC chips run 68k code.

    The Mac OS is what runs the old Mac OS 68K code, not the PPC. And it is done entirely, transparently, and seamlessly in software emulation. It is so seamless and transparent that even experienced users never notice it or think about it. But it is all software. No hardware.

    Offtopicness follows....

    In fact, 68K code can call system routines written in PPC code, and vice versa. When Apple first introduced the PPC (March 1994), the vast majority of the system code was still written in 68K; they only rewrote the most performance-critical parts of the system (inner loops of QuickDraw graphics blitters, font rendering, etc.). Apple documented all of this quite well. As you look at newer Apple docs and software releases over time, you can see that they gradually moved the system to completely fat (i.e. both 68K and PPC) code, and now to a PPC-only world in the latest releases.
  • You didn't try it on the ASUS motherboard I did. It took me 6 hours and three retries before I figured out that the drivers on the CD were defective -- and that it was necessary to use the updated drivers on the diskettes to boot from. Nothing but blue screens when using the CDs.

    After I did the install an MCSE wiped the disk and spent 3 days trying to do the reinstall. (NT Server). He refused to believe me when I told him what was wrong. After he gave up a third party then spent 2 weeks trying to do the same install. After I wrote down the procedure (which involved doing more than just booting from the diskettes - you had to install even newer drivers from the ASUS web site - and yes I tried to install them with a CD boot - Nothing but Blue Screens) he was able to get it to work also.

    Nothing 'piece of cake' about it - it was a major hassle.

    As I said - you were working with equipment which didn't have any problems with NT - the other people weren't.

    And no - the instructions on the ASUS web site didn't mention using the diskettes. Or even that there were other drivers to install.

    When you are a young smart ass with little experience everything looks easy to you. You assume that other people are just stupid - when the fact is that you just haven't been presented with the same set of problems that they have seen.

    When you run into a rip roaring bitch of an install you won't be quite so much of a smart ass anymore. One of the great lessons of life is that it knocks the arrogance out of you.

  • -64 chips were to have a hardware emulation unit, not unlike how PPC chips run 68k code
    If I understand you correctly, you are saying that the PPCs have 68K emulation support built in? They don't. Apple ported about 10% of the OS (then 7.5.5) to native PPC; the rest ran in emulation. I know OS X will be fully native, but I think OS 9 is mostly (90%+) native.

  • Itanium is key to maintaining high uP margin

    If there is a single reason why Itanium may utterly fail -- this may be it.

    Especially if there is any decent competition.

    Monopolies are not acclimated to an environment of competition (fat, out of shape); they are also addicted to high margins, and are always looking for their next fix.

    It vaguely, but not exactly, reminds me of the PS/2: expecting the entire industry to just adopt your new product, which is engineered primarily to maintain your high margins and little else. The comparison here is not perfect -- so don't flame. Regardless, in that case the rest of the world said (in the lingo of the era) "just say no".
  • by Gryphon ( 28880 ) on Friday December 22, 2000 @10:35AM (#543252)
    No login required!

    http://news.cnet.com/news/0-1003-200-4236527.html
  • I forsee UNIX being the most popular Itanium platform

    If your crystal ball is working properly, then maybe Itanium could end up being more dependent on Unix than anything else will ever be dependent on Itanium? Perhaps a healthy change from the "Wintel" situation.

    What if hardware engineering were someday driven by the needs of the OS (and ultimately users), with Unix in the driver's seat, rather than driven by crap such as MMX and campaigns to convince people that they need it?

    Nahhhh. Won't happen. As long as Windows also runs on it, new "engineering" "innovations" will appear, that both MS and Intel will be happy to promote to support their mutual high margins. And Linux will look like it is playing catchup if it doesn't support these "innovations".
  • by Tin Weasil ( 246885 ) on Friday December 22, 2000 @10:35AM (#543254) Homepage Journal
    Okay, so Intel is about to ship its Itanium, and Microsoft doesn't have an OS to power the new architecture. But Linux is ready to support the new chip...

    ...hmmm. I just don't see a problem here.

  • (Not, however, first post.)
  • I wonder if this is at all a negative for the Wintel corporate structure

    Careful now! The judge could just decide to split the Wintel monopoly into two separate corporations.
  • Early on (8086,286) Intel licensed its designs for others to fabricate. They did this to keep up with growing demand for PCs since their fabs were already at full capacity and they made extra money from licensing. However, the relationship between Intel and the processor cloners turned from cooperative to adversarial once the cloners started to undercut Intel's prices.

    After releasing the 386, Intel tried to delay licensing the design in order to establish a lead in the high end. However, their licensing agreements (signed before the 386) limited what they could do to stall Cyrix and AMD, resulting in fierce price competition in the 386 market and the cloners gained quite a bit of market share. Intel's licensing agreement with Cyrix ended after the 386, forcing Cyrix to reverse engineer the 486 in order to clone it. Meanwhile, they delayed licensing the 486 to AMD. So, with the 486, Intel established a lock on the high end of the market. However, the 486 was the last processor to be cloned.

    From the Pentium on, Intel has not licensed its designs to any of its competitors, forcing AMD, Cyrix, et al. to pursue their own independent designs.


    :::
  • by Smitty825 ( 114634 ) on Friday December 22, 2000 @10:35AM (#543258) Homepage Journal
    Note the CNET in the URL. It means that the NY Times just reprinted CNET's Original Article [cnet.com]... at least you don't have to sign up for registration!
  • It's nice that Linux and NT have been ported to those processors. Many applications will benefit from that. Having a 64-bit address space available greatly changes how you can approach many problems in OS design and implementation. To take real advantage of 64-bit (and beyond) processors, we should have new approaches, not just old code widened.
  • Since GCC is an open-source project, Intel's work with the GCC maintainers also assists Microsoft indirectly.

    MS can observe the design of GCC and write their own code to mimic certain elements, if they desire to do so.

  • Guys, Microsoft has been working on an IA-64 version of Windows for a long time. Do you really think that Linux had a head start over Microsoft in porting to IA-64? You are deluding yourselves.

    Intel has been working on Merced/IA-64 for nearly a decade, since the early to mid-1990's. Microsoft has been kept abreast of IA-64 development since 1995 (at least!). Heck, Microsoft engineers even had input (as far back as '95/96) as to the design of the IA-64 instruction set.

    Think about it: It's 1995 and Intel is designing a new processor architecture, a very risky and expensive undertaking. For this design to be commercially successful, Intel is going to need support early on from the top supplier of OSes. Who is the dominant OS developer in 1995? Microsoft, duh. (Linux was not on Intel's radar screen in 1995; its rise to popularity was still a few years off.) Don't you think Intel would have called up Microsoft in the mid-1990s to evangelize this new architecture and solicit Microsoft's feedback on it? (And sign one helluva NDA too :)

    Now, Microsoft knows in 1995 that Merced silicon is a looong way off. But it needs to get started making the Windows NT kernel 64-bit clean. So Microsoft needs an existing 64-bit platform to port to. What do you think they did? (*cough* *cough* unreleased 64-bit Alpha port *cough*).

    Now, we know that Microsoft has an upcoming new release in 2001 called Whistler. Why release an IA-64 version of Windows 2000 in May 2001 (assuming that's when IA-64 ships), then follow that up with IA-64 Whistler just a few months later? That's too much work (and the market is going to be too small anyways). Why not just skip straight to IA-64 Whistler?

    So it may look like Microsoft is behind on IA-64 Windows, but in reality they're just trying to synchronize two major product schedules.

  • Don't worry. It already is shortened in the minds of many folks. If you saw a crew member on a Star Trek episode hand-editing text files in various scattered locations, you wouldn't think the future had much to offer.

    When will people realize that you can soup up an old muscle car that you bought for $500 as much as you want, but hour-for-hour, pound-for-pound, it still wasn't a good value, because you have to spend HOURS upon HOURS trying to get it to do what modern cars accomplish with ease. It's hard to believe that not every Linux zealot is into S and M.
  • So, like, will the first computers with the new chip have Linux by default? That's a change for once...
  • From PAiA's Theremax FAQ [http]:
    The cost of this super low end is that you have to tweak the Pitch Trim control from time to time. If any more than this is required (adjusting the coils, for instance) then something is wrong and the unit is not working up to spec. Maybe the Zener is whacked. Maybe NPO capacitors were not used for C8 and C10, which are the capacitors in the LC Tank circuits.
    (emphasis mine).
    The point is that NPO Capacitors, manufactured solely by NPO Technologies, employers of the famed, late Jon. E. Erikson, are absolutely necessary for the delicate tuning of a theremin. Without this level of pitch control, theremins absolutely will not be able to be taken seriously in the symphonies funded by the secret societies... leaving antithereminist freemasons as the obvious suspects in Erikson's cowardly midnight shooting as he left "Sylvester's Spa and Bath House."
  • by Kiss the Blade ( 238661 ) on Friday December 22, 2000 @10:36AM (#543265) Journal
    ...the Itanium won't be shipping in quantity until autumn 2001 anyway, and most IT managers will not consider using unproven technology before then.

    However, having said that, on the proven-technology front it can only be good for Linux to be ahead -- then it gets the label of being proven sooner than MS Windows, which will be uppermost in IT managers' minds.

    If you ask me, the battle on this front will not be decided next year, but the year after, when the Itanium is expected to start pushing into the mainstream server market.

    KTB:Lover, Poet, Artiste, Aesthete, Programmer.

  • Wait, doesn't RedHat distribute a "Personal" and a "Server" version of the same fucking product with a couple of config tweaks and a couple of extra daemons?
  • Similarly, you can't run Windows 98 on a PowerPC or Alpha. Does it matter?

    Yup, because the only OS you can run on all that crap is Linux.

    It remains to be seen if the Itanium is really where personal computing is headed. (...) Moving to an entirely new processor *family*, not just the next generation of what's currently available, is not to be taken lightly.

    I believe the future is totally different, precisely because of technologies like Java, Amiga's VM, .NET, emulation and also because of Linux.

    The best architecture for a single task will be chosen and software shall be ported for that. The choice won't be tied to software compatibility or portability issues because these will be minimal.

    One can understand exactly what I mean when one runs Linux on different architectures in the same network. All these machines behave exactly alike; you could only tell them apart because of some specific compatibility issues that still exist.

    The AmigaOS promises to remove architectural barriers that still exist, mainly regarding 3D graphics and high quality sound playback. One could have a DirectX-like API on every main architecture and porting software would be to compile it, providing there weren't any architecture-specific optimizations.

    I don't think the Itanium is out of the league just because it doesn't follow the x86 hierarchy. That will probably be its biggest strength.

    We should be discussing what the Itanium will actually be used for and how it will sell (oh, God, I feel like a marketing/business person) considering its powerful (and expensive) capabilities, but not compatibility issues.

    Flavio
  • There's one more reason this might not be a problem for Microsoft: when has Intel been able to ship a stable (in the server sense of the word) processor on deadline? On the other hand, when has Microsoft been able to?
  • Wrong... The 64 bit version of Whistler for the itanium chip is available for MSDN Universal subscribers to download NOW!

    Intel and Microsoft have developed the Itanium chip and Whistler hand in hand... Microsoft has had terminal servers running Whistler since early this year, available for coders to test samples of their code to help ready themselves for 64-bit Windows.

    This article isn't based on anything other than large amounts of ignorance.
  • I know Windows 2000 is out and all, but when I was forced to admin an NT network at my last job, NT was the biggest pain in the ass to install. I can probably count on one hand the number of first-try successful installs. Also, your 78-year-old father will most likely not install Windows either. That is why Windows comes preinstalled.
  • I wonder if this is at all a negative for the Wintel corporate structure...

    Just think, Linux boxes will be running Itanium right when it comes out. But, big surprise, Itanium turns out to be a little buggy. By the time Windows supports it, the bugs have been ironed out.

    Bravo for Windows! Suddenly running more stable than Linux!
  • And what's more.. If Linux has been running in the 'real world' on Itaniums for 6 months before quantity shipments, it means we'll have ironed out a lot of the bugs that come from point-oh releases :)

    OTOH, don't expect to see a Service Pack for any initial Itanium issues in the MS release till some time in 2002.
  • With all of Intel's delays in getting chips to market, I wouldn't be surprised if they waited until their good buddies in Redmond had something for Itanium before shipping, and blamed it on "production delays" or something else...


  • ...almost anything.

    If RH et al play their cards right, this could be a massive PR coup.

    Alternatively, MS could use the information to locate potential problems, and use that for their OWN PR coup.

    But the bottom line is thus - whoever gets to market first has a strategic advantage. And the longer they hold that, the more defendable it becomes.

    The Itanium is a dead processor already, but its strategic value, in terms of mindshare, is very significant.

  • Uh... how incompatible... insomuch as it runs any x86 code, as far back as the 8086 from the '70s?

    Really? So the Itanium isn't some new-fangled CPU that breaks with the past after all? It can run x86 code as is, natively, without emulation? I had no idea.

    IA64 is definitely not intended for the personal computing market - at least not for a long time yet. This is a server processor

    Ah, that brings back memories. Every new processor is always "initially targeted at the server market" going all the way back to the 386 at least. Makes me smile every time I see it!
  • LOL!

    I tried to set up a Windows NT gateway for someone and tried for HOURS to get two network cards working (a 3Com and a generic NE2K). After countless reboots, I gave up, installed a copy of 98, and came back the next day and installed Linux. Both 98 and Linux worked fine with the two network cards, but NT, a ``professional, workstation, server, etc.'' OS, would not use both cards (I tried switching the slots, everything, to no avail).


    He who knows not, and knows he knows not is a wise man
  • That's a bit optimistic. Try Spring 2002. Trust me.
  • I read a while back that this was the plan, though it may have changed, this was a couple years ago:

    IA-64 chips were to have a hardware emulation unit, not unlike how PPC chips run 68K code. This "option" would be the most expensive, and was projected to be closest to native speed; that is, a 1 GHz Itanium would run IA-32 instructions with an approximate performance equivalence of an 800 MHz Pentium (XXXX).
    Then Intel would segment the market and provide a "lower grade" of Itanium with the hardware emulation disabled (not missing), where emulation would be done through some kind of software, which I think was limited to Win32. Performance with Win32 code would be about half speed. But this chip would be cheaper.
    The cheapest option would be a chip that couldn't run either the slower software emulation or the hardware emulator (I thought it would be neat if hackers figured out how to activate the hardware emulator -- this was the golden age of the Celeron, after all). None of these CPUs was ever expected to turn cycles on a desktop machine.

    PC manufacturers were worried because, at the time, it looked like MS was WAY behind schedule with an IA-64 version of NT, and Intel would be shipping chips whose only OS would be NT, in slower emulation, and only if you paid through the nose.
    It wasn't too long after that announcement that every friggin' OS vendor came out of the woodwork in support of Itanium. I think IBM was the only vendor that had not pledged support. And Apple :). And IBM did change their tune later (thanks, Mr. Jobs).
    Then Intel came out and admitted that Itanium was WAY behind schedule, and things really looked bad -- it looked like Intel was on the verge of dumping IA-64 altogether. That gave Microsoft some breathing room, and apparently they caught up. Microsoft is good at catching up, aren't they?

    Since then, I haven't heard any mention of a software emulator - or one that ran only Win32.
  • Itanium would surely be rolled out targeting the advanced-server market, and that's where MS wanted to be. And not just MS -- add Sun and several others to the list too.

    If Linux could just target the server market and ignore everything else, the new distros running on Itanium servers would mean a lot to firms looking for solid servers.

    One up for Tux.
    Microsoft -- sorry guys, do better next time!
  • Linux supports every damn appliance on the market. You name it: if it has a processor, Linux runs on it. This is news only insofar as the first Itanium boxes MUST have Linux to run, but being first to market does NOT mean you will keep that market. Just ask Woz [woz.org] how powerful the company he co-founded became [apple.com].
  • by Junks Jerzey ( 54586 ) on Friday December 22, 2000 @10:40AM (#543294)
    ...because there's not exactly going to be an immediate, huge demand for a greatly overpriced, unproven processor that's incompatible with just about everything that's been built up in the PC-clone era of the last 19 years. Similarly, you can't run Windows 98 on a PowerPC or Alpha. Does it matter?

    It remains to be seen if the Itanium is really where personal computing is headed. After all, Intel has introduced other non-x86 processors in the past and had very high hopes for them. The RISC i860 processor, introduced in 1989, is a good example; the related i960 is still available. But these chips are outside the x86 realm, and there's reason to think that the Itanium could be as well. Moving to an entirely new processor *family*, not just the next generation of what's currently available, is not to be taken lightly. This is doubly true when the benefits of such a change are not at all obvious.
  • But then again, you could say that when the Windows technology comes around for Itanium it will still be untested, while Linux will already have a tested track record on the chip. Most decision makers I deal with place their trust not just in the chip, but in the chip/software combo.

    But then again, they are informed. You were talking about IT managers, right? Nevermind. :)

  • You are right: these are not going to go into production boxen on anything like the day they are shipped. In fact, last I looked, Intel was touting this as a development platform, with the next chip in line being for production boxen. But *many* companies will be testing with these, and as things stand now, most of the early testing will be on one OS. Therefore, when they do go into production, Linux will be better tested and will have been in use for months by those who are going to put it into production boxen. This is a good thing.
  • Wintel has never been "alive", so to speak. When I worked there, we took great pains to clarify that "Intel actively works with all developers"... so whether that be Microsoft or Linux, there are engineers at Intel who will work to ensure the next-generation chips have software for them. After all, what good is hardware if you can't do anything with it?

    Additionally, it was made clear to everyone that we were not to use the word "wintel". In fact, at the employee orientation, we were shown a video clip of how <b>not</b> to talk about competitors--a clip of Bill Gates.

    Unlike what many people think, my feeling was that there was actually rather a lot of animosity toward Microsoft; many of the developers felt that Microsoft was unfairly bullying them around.


    yours,
  • by barracg8 ( 61682 ) on Friday December 22, 2000 @11:14AM (#543304)
    When wasn't the first release of any Intel processor greatly overpriced?
    Anyways...
    • unproven processor that's incompatible with just about everything that's been built up in the PC-clone era of the last 19 years.
    Uh... how incompatible... insomuch as it runs any x86 code, as far back as the 8086 from the '70s? Hmmm...
    • It remains to be seen if the Itanium is really where personal computing is headed.
    IA64 is definitely not intended for the personal computing market -- at least not for a long time yet. This is a server processor, and Intel already have SGI & Sun lined up with designs based around IA64. MIPS & SPARC may not be dead yet, but there is more movement from Intel's most direct competitors toward this product than toward any processor Intel has previously released.
    • Moving to an entirely new processor *family*, not just the next generation of what's currently available, is not to be taken lightly.
    The IA64 architecture is a rare example of an easy transition between ISAs. The processor supports both the new VLIW/EPIC instruction set and the IA32 instruction set. OS writers can have a mix of 32- and 64-bit code running on the machine, to the extent that a 64-bit program can have its system calls serviced by 32-bit exception handlers, and vice versa. As for application software, a user can run new 64-bit applications alongside legacy applications that they cannot port to 64-bit. There are few easier ways to escape from the headache of the x86 instruction set.
    • This is doubly true when the benefits of such a change are not at all obvious.
    In the server market, the 32-bit address space is already becoming a problem. (You can buy yourself a Linux box with 4 GB of RAM today.)

    (Score:5, Informative) for the parent post? In reality it is probably a subtle Troll.

    cheers,
    G

  • I have had Windows 2000 for several months now and have yet to be able to install it on either my laptop or my desktop. It always tries to load drivers for hardware I don't have and then stops, requiring a reboot. On my dad's PC it installed first time without any hassles, and my PCs are newer than his. Contrast that with Linux, where I have never had a problem installing RH 5.2, 6.0, or 6.1, Mandrake 7.0 or 7.1, or SuSE 6.4. I've been told it's Compaq's fault that I can't install 2000, but then that's the MS way: blame everyone but themselves for the problem.
  • my bad on the 68k emu.

    Spank me Santa.
  • What the fuck? An honest opinion is a "troll" even if it's the fucking truth? I don't personally know for certain, but I've heard over and over that C++ support in GCC IS kinda rough. Why wouldn't an M$ product be better than a free one? After all, they have millions of dollars and thousands of people to throw at the problem. Some of these moderators are fools. (Go ahead and mod me down too... I give no fuck.)
  • by barracg8 ( 61682 ) on Friday December 22, 2000 @04:56PM (#543316)
    • Considering word size is different, there's obviously some sort of emulation going on.

    Fair assumption, but not really true.

    In short: your Pentium 2/3 is internally a VLIW processor. It contains multiple RISC execution units running in parallel, with a CISC decoder sitting in front of them to decode x86 instructions into internal microcode that runs on those units. The Itanium executes x86 code in exactly the same way; it just also has the ability to run VLIW instructions directly, skipping the decode stage.

    As for word size -- is the x86 (x >= 3) emulating a 286 when it runs 16-bit code? Not really; it just uses only 16 of the 32 bits available to it.

    • Linux/MS would not NEED to release a new version for it, would they?

    OS developers have two options:
    1. Do a full port -- more work, but the whole OS is compiled into 64-bit code, makes use of the larger memory space, etc.
    2. Do a partial port -- just write code to load 64-bit binaries, switch into 64-bit mode before executing, and set up task gates to switch the system back into 32-bit mode when the process calls the operating system (the exception-handling routines can stay 32-bit code).

    The other reason a port may be necessary is that although the processor supports the IA32 instruction set, there is nothing to say that the rest of the system architecture (e.g. motherboard, busses, BIOS) is backwards compatible with legacy systems. This would be invisible to user-space programs, but OS developers would have to deal with the new architecture.

    cheers,
    G

  • I've done it on a Dell with dual CPUs, embedded Adaptec SCSI, and Dell's PowerEdge RAID controllers.

    Piece-of-cake.
  • Why does MS insist on this lunacy? How different are the codebases? Or is it just a bunch of registry settings that change the version, as in NT 4.0 (according to O'Reilly, at least)?

    Why do they make such a big stink over something that should be a radio button at install time?

    The only reason I can think of is $$$. Kinda like the IDE cards that sold for $60, but put a resistor on them and they became the company's $200 IDE RAID card.

  • We have seen these kinds of stories over and over again on just about every Linux-friendly site out there. I love Linux, and I have the utmost respect and admiration for all of the people involved with this, especially Intel for taking such a pro-Linux stance the whole way.

    But I need to say something to all of the web admins/authors/etc. out there.

    STOP GLOATING YOU ASSHOLES!!!

    Story titles like "Linux leads MS in..." do nothing but make us look just as arrogant as Microsoft marketing. Stop talking shit about Windows, and just spread the good word. People don't need to be told that Windows sucks; they all learn it eventually.
  • Linux distros typically do not install perfectly everywhere. Even Mandrake, which I think is the best, cannot use my external CD-RW.

    Linux distros can ship a product faster because if it installs properly 95 percent of the time that is good enough. 95 percent may not be good enough for Windows, though.

    So MS would wisely wait a few weeks or months to make sure their OS installs 99+ percent of the time. Linux users are a forgiving bunch with respect to features that don't install or work. I installed half a dozen Linuxes until finally Mandrake 7.2 worked with my ATI graphics chipset.

    My 78 year old father is not so forgiving. If a piece of electronics doesn't work right out of the box he returns it and buys something else. That is the target market for Windows. (Maybe not quite, but I think that Windows aspires to more hardware compatibility than Linux does.)
