Red Hat Reveals Support For AMD's Hammer

Anonymous Coward writes "Red Hat had been rumored to be working on support for AMD's Hammer architecture, and now they have made it official. Now if I can get a hold of one of these my little site will finally be able to handle a good slashdotting with 16GB of DDR333! 'Red Hat will provide native 64-bit support for processors based on AMD's x86-64 technology, while providing support for existing 32-bit Linux-based applications.'" Combine this with Linus' feelings and Hammer is looking better and better.
  • by Anonymous Coward

    I can't help but feel that "real" managers will just say Itanium plus WinXP despite the advantages of Hammer and Red Hat.
    • Perhaps not (Score:3, Interesting)

      by thasmudyan ( 460603 )
      I can't help but feel that "real" managers will just say Itanium plus WinXP despite the advantages of Hammer and Red Hat

      AMD has almost consistently succeeded in delivering technically better hardware for a lower price. Given the current economic downturn (blabla) and the lessons learned in the dotcom meltdown (e.g. that image is not everything), even your average modestly intelligent manager type will perhaps choose the cheaper, better product. Besides, I don't think AMD is viewed as that kind of underdog anymore! And Linux on the server front looks good, too. So I think the chances are good.

      Another issue is of course whether a 64-bit addressing architecture is needed for mainstream PCs yet. But as we all know: it's not whether you need it - it's whether the industry thinks you need it!
      • Another issue is of course whether a 64-bit addressing architecture is needed for mainstream PCs yet.

        I don't know about everyone else, but my applications, including in-memory databases, can eat as much RAM as they can be given. We're already running 16 dual-processor 3GB RAM Linux boxen, and those 16GB NewSys boxes look spot on to me.
        • Wow, I must be a little behind that sort of mainstream! Maybe I'm too poor or something. And my company too. I mean, for workstations we still have a lot of single-processor 1GHz, 256MB boxen (servers are different of course). And we do lots of programming and graphics stuff, and still the hardware is all in all sufficient. Well, it depends on what you have to do with them. Speaking of that: spiro_killglance, what do you do with your killer boxen?
          • In-memory indexes for search engines, databases, etc. Servers need lots of memory anyway, but by keeping data normally stored on disk in memory you can boost your speed by up to a hundredfold. As for large memory for graphics or CAD work, yeah, maybe it isn't needed yet, but see here [aceshardware.com]
  • Linus wants Hammer to get popular so Intel gets off their asses. He was not talking about widespread adoption of Hammer within the Linux kernel; he was talking about Hammer paving the way for other 64-bit processors.
  • A shock? (Score:2, Insightful)

    Is this really a surprise? It seems only natural for Red Hat to support as much hardware as they can. While I'm not suggesting a Red Hat for toasters any time soon, supporting Hammer seems logical to me. Or am I missing something completely?
    • Re:A shock? (Score:2, Insightful)

      by klaus_g ( 99169 )
      AFAIK Red Hat earns money by selling server software (and support, I assume), so it makes sense, but only if AMD's Hammer actually 'hits' the market as planned.

      Supporting an additional hardware platform costs money, so they most definitely thought about it before committing man-hours to the porting/testing.
  • by NicolaiBSD ( 460297 ) <`spam' `at' `vandersmagt.nl'> on Tuesday August 13, 2002 @04:34AM (#4060252) Homepage
    Doh! Linus doesn't have warm fuzzy feelings towards the Hammer, or rather he's never expressed them. The poster is referring to a post on LKML about paging issues with Itanium.
    Linus didn't endorse one platform or the other; he only explained that if Hammer were to become dominant instead of Itanium, it would save the kernel developers the trouble of solving the Itanium paging problems.
    • by cperciva ( 102828 ) on Tuesday August 13, 2002 @04:44AM (#4060269) Homepage
      Linus didn't endorse one platform or the other; he only explained that if Hammer were to become dominant instead of Itanium, it would save the kernel developers the trouble of solving the Itanium paging problems.

      I think you mean Linux's paging problems. Specifically, the fact that gcc is broken means that Linux uses 32 bits for fields which should be 64 bits.
      • There seems to be some confusion here.

        Yes, gcc is not as good as it should be in its support of 64-bit integers on 32-bit platforms, but Itanium is a 64-bit platform, so none of that applies. Also, perfectionist that he is, Linus often tosses around words like "broken" where others would say "suboptimal".

        Here are some of GCC's issues with 64-bit ints on 32-bit platforms:

        • The register allocator wants to use specific register pairs for a given 64-bit object. This hurts on register-poor platforms like ix86; it is a much smaller problem on RISC platforms.
        • Some 64-bit operations will generate library calls (for example, 64-bit multiplication). The kernel folks don't like this (see the sketch after this list).
        • Older GCCs have had some other code generation problems, and even though these are largely fixed, the kernel hackers want to support older GCCs as well as the latest, inhibiting them from using 64-bit ints heavily.
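        To make the library-call point concrete, here is a small sketch (not from the original post; the function name is made up, and while the post mentions multiplication, 64-bit division is the textbook case of a hidden libgcc helper on 32-bit x86):

            /* Illustrative only: compiled with "gcc -m32 -O2 -S", the 64-bit
             * division below becomes a call to the libgcc helper __udivdi3
             * rather than inline instructions - exactly the kind of hidden
             * library call kernel developers dislike in hot paths. */
            #include <stdio.h>

            unsigned long long bytes_to_blocks(unsigned long long bytes,
                                               unsigned long long block_size)
            {
                return bytes / block_size;   /* 64-bit divide: libcall on 32-bit x86 */
            }

            int main(void)
            {
                printf("%llu\n", bytes_to_blocks(1ULL << 40, 4096ULL));
                return 0;
            }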
    • Itanium has nothing to do with it. The problem is 32-bit platforms. Itanium is 64-bit. This is Hammer vs. 32-bit x86, not Hammer vs. Itanium.
    • I just had a discussion with a guy I know who works at Intel about this. His argument was, "Intel won, AMD should stop being in the consumer market soon." I told him, "Intel is just a giant, but AMD will beat them very soon if I see things right."

      I explained to him the problems with Itanium and development (problematic BSD and Linux support, which is still a huge market on the server line) even with Windows. AMD's Hammer solved a lot of the hardware-based problems (granted, a lot of work still needs to be done). His argument was, "AMD just rips off Intel's designs and puts their name on them."

      Makes me wonder what the Intel PR team tells their employees..
      • It doesn't matter what their PR department says. An Intel employee who doesn't believe in Intel is an idiot. Can you imagine a Linux advocate working for Microsoft? What would be the point?
        • It doesn't matter what their PR department says. An Intel employee who doesn't believe in Intel is an idiot. Can you imagine a Linux advocate working for Microsoft? What would be the point?

          What is more stupid is denying your company's competitor is a valid competitor.
  • Now, the obvious question when dealing with a corp and linux hardware support. Will they put an effort into coding it (the compiler in this case), or will they wait for the lusers to finish coding it and then take the credit?

    Seriously, is Redhat good about this? I know some hardware manufacturers are like this about "their" drivers.

    *sniff* *sniff* *sniff* something's burning. Oh, it's just my karma...
    • Re:Now... (Score:2, Informative)

      Will they put an effort into coding it (the compiler in this case), or will they wait for the lusers to finish coding it and then take the credit?
      Seriously, is Redhat good about this?


      Redhat was one of the first corporations, as far as I know, to subsidize kernel development, i.e. Alan Cox was collecting a check for his efforts. Red Hat is a very productive member of the open source community IMHO.
    • Red Hat bought Cygnus some time ago. Cygnus is a company that specializes in porting gcc to new platforms, among other things. Red Hat is likely to be one of several major forces behind bringing the GNU toolchain to Hammer.

      -Paul Komarek
  • by GodlikeDoglike ( 600594 ) on Tuesday August 13, 2002 @04:36AM (#4060255)
    Now if I can get a hold of one of these my little site will finally be able to handle a good slashdotting with 16GB of DDR333!

    Just as long as you lose that 14.4 modem you are hosting off.

    Replacing your commodore 64 is just a start ;)
    • Replacing your commodore 64 is just a start

      That's all some people have got - they couldn't fit a bigger computer under the chicken coop, so the Taliban will have confiscated them.

    • You want a hardware upgrade?
      I just bought an exercise bike and mounted my wireless keyboard and trackball on it.

      I'm finally gonna be able to exercise and compute at the same time without having to buy a wearable computer.( 8km so far ;)
    • Yeah, I was thinking about using this thing as a replacement for my desktop pc. Then it dawned upon me that it didn't have an AGP slot.

      Oh well...
  • IA-64 anyone? (Score:4, Interesting)

    by Critical_ ( 25211 ) on Tuesday August 13, 2002 @04:51AM (#4060275) Homepage
    Moderate this as a flame if you want, 'cause I am sick and tired of this x86-64 bullcrap.

    Let's see, the history of slashdot and most of computer-geekdom has always ribbed Intel for maintaining backwards compatibility with processors more than a decade old. Sure, x86 is great due to all the applications out for it, but in all honesty why can't we move away from it?

    With Slashdot, Linus and most of the online review sites pushing for x86-64, one has to wonder if AMD is slipping cash under the table to all these parties. If not, then what happened to those people who wanted innovation in the realm of processors and not just cheap hacks upon hacks upon hacks? It's kinda funny, but the way AMD is going is sorta the way Microsoft is: maintain backward compatibility at all costs.

    My guess is that most people pushing x86-64 have yet to write a program more complicated than "hello world!". Let's stick to our desire for innovation and truly stand behind the company willing to shed the baggage: Intel.
    • Yes we want to move away from old legacy architectures, but at the same time we don't want to pay thousands of dollars for a chip that has poor performance running current code, and even native code isn't all that hot (Itanium). IA64 is hardly a proven technology, and it remains to be seen whether the approach they've taken will pay off. We also like to see competition between companies. Hardly anyone really has any loyalty toward either AMD or Intel, but we all like to see low prices and fast chips, and AMD coming out with a 64 bit chip that's affordable is still cool, even though it's still a lot of the same old architecture.
    • Linus and most of the online review sites pushing for x86-64

      My guess is that most people pushing x86-64 have yet to write a program more complicated than "hello world!"

      I know at this point Linus isn't THE author of the entire kernel, but I think his contribution to the Linux kernel was a little more complex than "Hello World!"
    • Re:IA-64 anyone? (Score:3, Insightful)

      by div_2n ( 525075 )
      For good or for ill, backwards compatibility is usually necessary in order to ensure rapid acceptance and usage.

      Imagine you are a software company. "We have to retrain all our programmers, buy new compilers AND ditch our old codebase? Can we still write for the old stuff for now? Good . . ."

      I would bet dollars to doughnuts that if DVD players were incapable of playing CD's, there would be quite a few unhappy campers. The relation is the same -- slowly phase out the old while promoting the new.
      • Some observations:

        - gcc is free - granted, probably not very optimised
        - you don't really need to retrain all your programmers to port the apps, unless you write low-level code

    • Re:IA-64 anyone? (Score:5, Insightful)

      by rchatterjee ( 211000 ) on Tuesday August 13, 2002 @06:17AM (#4060419) Homepage
      Let's stick to our desire for innovation and truly stand behind the company willing to shed the baggage: Intel.
      So where were you when the 64-bit Alpha came out ten years ago?

      It had nothing to do with x86 compatibility and whooped all the x86 chips on every benchmark that was out there, but it's all but dead now.

      And now you're saying an architecture that doesn't beat the current crop of x86 chips in performance, breaks compatibility with the x86 architecture, and costs 10 times as much for similar capabilities will somehow succeed?

      Once you break compatibility with the vast amount of software that is out there for x86, you're suddenly no better than all the other 64-bit chips that have been out there.

      Why go with a relatively untested IA-64 arch when I could go with a Sun, IBM, or SGI box, all of which have been 64-bit for years and have no x86 baggage at all? I'm certainly not saving any money going with Intel's chip, plus the other 64-bit architectures have much more software support in comparison to IA-64.

      As a customer, if I buy IA-64 and it fails in the marketplace and support dries up, I'm left with a fairly useless box that can only run the few programs made specifically for IA-64; but if I buy x86-64 and it fails in the market, I still have a very usable x86 machine and tons of 32-bit software to work with.
      • Why go with a relatively untested IA-64 arch when I could go with a Sun, IBM, or SGI box, all of which have been 64-bit for years and have no x86 baggage at all? I'm certainly not saving any money going with Intel's chip, plus the other 64-bit architectures have much more software support in comparison to IA-64.

        We all know that 64bit is going to replace 32bit. AMD and Intel are important because huge volumes and low costs are what will finally make 64bit machines ubiquitous, i.e. aunt Edna will be able to buy one at Walmart for a couple of hundred bucks. Like it or not one of these architectures will be the "new x86" and nearly all software will be written for it, displacing 32bit machines as well as all the 64bit niche architectures on the market now.

        As for Sparc, Alpha, etc. being "better": Since when was the best solution guaranteed victory?
      • There was a point in time several years ago (1996, IIRC) when Alpha-based PCs seemed poised to break into the market.

        Digital's FX!32 was a wonderful product, translating Intel Win32 API calls into Alpha ones on the fly, with caching; the translated code ran at an estimated 70% of native speed. Considering Alpha CPUs had a large performance advantage over Intel CPUs at that time (the Pentium Pro had just come out), the future of Alpha looked very promising.

        In the end, it was probably a combination of higher cost, lesser mindshare and a lack of 64-bit killer apps that did them in. And the Compaq buy-out of Digital, of course.

        I'm personally hoping that the Hammer competition forces Intel to start releasing IA-64 CPUs targeted at the workstation/power desktop market. Having had to program in assembly for CISC x86-like CPUs (Z80/Z180) any clean alternative is preferable.
      • Ten years ago I was unable to afford 2**32 bits of memory let alone 2**64. Heck, I was excited to see the Buck & Meg ads in Computer Shopper (300 meg HDD for $300.)

        Now is the time for 64-bit machines. The early innovator is often not the winner in the market because they are innovating before the need exists. Right now, when the world "needs" 64-bit machines (in the way we "need" X-box, HDTV and USB2), is when the winners will be determined.

        As for AMD vs IA64, AMD will take the Aunt Edna market because of its fast procs and cheap prices. IA64 will take the serious business server market because of its superior proc design for handling *huge* apps like MS SQL. People buying IA64's probably don't care if they can play Death Match III when the IA64 market dries up.
        • Heck, I was excited to see the Buck & Meg ads in Computer Shopper (300 meg HDD for $300.)

          You whippersnapper! I remember when a 30 meg drive could cost $3,000.
      • I think another reason why Itanium CPUs haven't been accepted is their stratospheric prices.

        With the price of the Itanium 2 CPU somewhere between US$1,000 and US$3,000, no wonder there's not much interest nowadays. My guess is that AMD's x86-compatible chips using the Hammer core design will probably be at most US$550 to US$600 in price for the fastest versions.
    • I don't know why everyone is talking like IA-64 and x86-64 are the only 64-bit games in town.

      If you want the best performance money can buy, get an IBM POWER4 system.

      If you want tried and tested reliability, get a Sun UltraSPARC system.

      If you want the most bang for the buck, get an Opteron system (AMD Hammer systems are still going to be cheaper than Pentium and Xeon systems). (Of course, you'll still have to wait 5+ months for that.)

      And Itanium 2, err, I can't actually think of a business or server application that would be best on an Itanium 2, compared to the others, at this point. On the plus side, Itanium 2 has got good FP performance, though not as good as POWER4 or the latest Alphas, but its real-world integer performance is still too weak for servers.
    • You realize that shedding the baggage means a massive workload on lots of volunteers don't you?

      Maybe not all developers are that keen on doing lots of tedious and possibly not that exciting work?

      This was probably Linus' real message. He is not really that excited about the extra work. If you want to do it, then he probably doesn't care that much about what technology wins.

      Gaute
    • Let's stick to our desire for innovation and truly stand behind the company willing to shed the baggage: Intel.

      Or rather trade it for even more ugly baggage called "EPIC". Explicit parallelism is soooo '80s. Putting all the intelligence in the compiler and still requiring enormous amounts of silicon isn't really that great.

      Now I'm all for a neater instruction set, but IMHO "EPIC" is anything but neat. It's having software developers (compiler backend writers) do all the work. Since I'm a software developer (albeit not of compilers) I don't find this idea good.

      Oh, and the x86 is actually more than two decades old, not just one, and I agree it's quite ugly and should be replaced by something more beautiful - emphasis on beautiful.

      My guess is that most people pushing x86-64 have yet to write a program more complicated than "hello world!".

      That's worth a +1, Funny.
    • I'm with you 100%. But I don't understand how this translates into support for IA-64. What it *does* translate into is an aching desire for Alpha to make a comeback. Or for Power 4 to become cheaply available. Because--to borrow an old campaign slogan--"It's about the floating point performance, stupid."
    • Linus wasn't criticising Itanium, nor siding with Hammer against Itanium. The whole topic was x86-32 vs x86-64. IA-64 didn't figure into it.

      IA-64 won't get into consumer-level equipment for a long, long time yet. Intel isn't marketing it that way; they're pushing it as a replacement for Sparc and Alpha, probably wanting to create a new high-margin segment in the industry.

      AMD's aspirations appear to be more consumer-level with x86-64, and as a fast way to get 64-bit it might be an alternative. It sure as hell beats extending memory in other ways on a 32-bit architecture.
    • Let's see, the history of slashdot and most of computer-geekdom has always ribbed Intel for maintaining backwards compatibility with processors more than a decade old.

      They're not backward compatible with processors a decade old. They're backward compatible with a processor two years old; which itself was backward compatible with the one two years before that; etc.

      Sure, x86 is great due to all the applications out for it, but in all honesty why can't we move away from it?

      You just answered your own question.

  • by Roger Whittaker ( 134499 ) on Tuesday August 13, 2002 @04:57AM (#4060279) Homepage
    Press releases detailing SuSE's work on Linux for the Hammer can be found here (20th March 2002) [suse.com], here (28th February 2002) [suse.com] and here (31st January 2001) [suse.com].

    Roger Whittaker (SuSE Linux Ltd)
  • I don't know what Intel was smoking, but you can't just expect the market to completely adopt a new CPU standard such as Itanium. Who's going to buy it without a large set of code that will run on it?

    This is why the Hammer will come out first; AMD is smart enough to realize that for the first few years of Hammer, people will need to run some things for which there are no 64-bit programs yet.

    Mad props to AMD for not having stuck their heads where the sun doesn't shine this time around!
    • Well, there's only one problem with your assertion... the Itanium already is out, and the Itanium 2 is close to release. OEMs are already building Itanium 2 boxes.

      And for that matter, those Itanium 2 boxes are fast. On the SPEC CPU2000 benchmarks, the two fastest boxes are 1GHz I2s, and the next six spots are held by boxes running POWER4s (all running at >1GHz), Alphas, and a couple of SGIs. And there are a large number of vendors who have already committed to creating IA64 versions of their software, from Microsoft to Oracle. Pretty much all of the big names have signed on.

      Is anybody even planning on selling a server with Hammers yet? Has AMD even given anybody any silicon to play with? Intel was giving out development samples of the Itanium over two years ago. Intel might not have the reputation or experience of Sun or IBM with high-end servers, but they've certainly got more than AMD, who have never had a successful server line before. It's obvious you're a fan of AMD, but don't let your biases get in the way of reality.


      • Actually AMD has managed to take 10% of the low-end server market with their MP chips.

        It's not much of a foot in the door, but it's something that should allow them to push Hammer into the market.

        Besides, if all of your 32-bit code still works on the Hammer (and runs fast), alongside 64-bit code, then what's to prevent this chip from making a good entry into this market?
      • > On the SPEC CPU2000 benchmarks, the two fastest boxes are 1GHz I2s

        That's only true because of Itanium 2's floating-point performance. Real server workloads don't use floating point. For a slightly more realistic workload, look at the SPEC CINT numbers. There, the 1GHz Itanium 2 falls behind 2.4GHz Pentium 4s and Xeons.

        Furthermore, none of these SPEC benchmarks are nearly as memory-intensive as real server workloads. That's where Itanium really gets Hammered.
  • by shic ( 309152 ) on Tuesday August 13, 2002 @05:14AM (#4060302)
    my little site will finally be able to handle a good slashdotting with 16GB of DDR333

    Hmmm. I'm probably more interested than most in the prospects of large address spaces, however I don't imagine typical web sites are where this technology will be best exploited. Think seriously, moving to 8 byte addresses has the following effects:

    1. Massively expanding address space and hence (for the first time - IMHO) making the holy grail of direct manipulation of persistent data structures a realistic proposition.
    2. Expanding the size of today's simple data structures. Consider, for example, a simple bi-directional linked list of 32-bit integers using a forwards and backwards pointer. A 32-bit arch has a 200% overhead, but a 64-bit arch has 400%, which should somewhat diminish expectations for magical performance!
    Don't get me wrong. I think 64 bit is likely to be at least as important a step as 32 bit was c. 20 years ago, however I don't expect more than a small niche for such systems until resource allocation is re-thought.
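    To put rough numbers on point 2, here is a minimal sketch (the struct name is made up, and the sizes assume typical ILP32 and LP64 C data models with default alignment):

        #include <stdio.h>
        #include <stdint.h>

        /* Doubly linked list node holding one 32-bit integer. */
        struct node {
            struct node *prev;
            struct node *next;
            int32_t value;
        };

        int main(void)
        {
            /* Typical 32-bit (ILP32) target: 4 + 4 + 4 = 12 bytes, i.e. 8 bytes
             * of pointer overhead per 4 bytes of payload (200%).
             * Typical 64-bit (LP64) target: 8 + 8 + 4 = 20 bytes, usually padded
             * to 24 for alignment, so the overhead is 400% or more. */
            printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));
            return 0;
        }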
    • The question that begs asking is whether we actually need 64 bits. While the point of oodles of memory, etc. can be made, consider Google. Google has oodles of CPUs working together. 64 bits will buy them nothing. And somehow I tend to think this is how computing will go.

      And also consider the fact that quite a few companies have 64 bits, Digital, Sun. And did the world change? Not really...
      • Distributing data across different computers is not easy; in fact, it can be quite hard (at least doing it correctly).

        That's probably why distributed computing is a discipline in itself.

        > And also consider the fact that quite a few companies have 64 bits, Digital, Sun. And did the world change? Not really...

        1960: And quite some companies have computers, and did the world change? Not really...
        Same with Internet, 3D-graphics, cars...

        There is a great difference between being available and being commodity.

        Nonetheless, you are certainly right that switching from 32bit to 64bit won't revolutionise the world.

      • Google has oodles of CPUs working together. 64 bits will buy them nothing.

        Wrong.
        Google's DB needs RAM. Lots of it. More than 3GB if possible, with fast access. They don't need a lot of processing power (for their search stuff; I don't know exactly what other voodoo they are working on).
        There are postings to lkml from Google programmers which show that.
        If they can get native 64-bit addressing on a cheap platform with just a recompile, they will do it.
      • Would I rather have 64 bits, or 8-way SMP with decent thread and process management and reasonable security in the CPU instruction set?

        Mainstream SMP systems will change the world; mainstream 64 bits won't (unless they add all of the above).

    • My thoughts were the same when 32-bit came in, until I realised that the new instructions and architecture improvements in 32-bit x86 made for a good performance gain.

      The memory bus was twice as wide on a 32-bit system, so the pointers on the linked list may have been twice the size, but because of the wider bus there was no performance hit.

      One of the great benefits of 32-bit was that you could hold numbers beyond the 16-bit range in one register, giving the greatest performance increase.

      The extra-wide bus is going to give some performance gains on 64-bit systems, but I don't see the extra address space or larger numbers being that beneficial... Well, maybe the extra address space will help with threading and process management, and mean that bloatware can be even more bloated.

      • The difference is that the price/GB ratio has changed since then. This means that you can today build a quite cheap machine which hits the 32bit memory barrier.

        If you need it is another question, but anybody who really could profit from big memory has to shell out quite a buck. Hammer (Itanium) will change that picture dramatically.

        If Hammer performs as it sounds, and is not too expensive, it will rapidly enter the mainstream, because 64-bit is an even better marketing buzzword than XXXX MHz, especially since Hammer promises both.
        • My system at home has 512MB and never hits swap space, even with a load of server applications running.

          A 64-bit system might be great for enterprise servers, but for the home? There would have to be some major software bloating going on for my home machine to use more than 4GB or so; a typical home user could probably live in 128MB at the moment without any performance problems. The current 4GB limit allows them to have 40 times as much bumph; hell, you can even fit a DVD in 4GB of memory.
          • I have 4GB of RAM in my home machine and I don't hit the swap either. It really didn't cost that much to do it (compared to the SCA-2 15k RPM SCSI drives). But what makes me mad is that I did pay for 4GB, but I can only see about 3.6GB because of the PCI address space.

            That is why I can't wait for the dual Opteron boards to come out.
            • The point I was getting at is that the amount of data you can fit into 4GB is so large that people would have to start using 3D data arrays to need any more.

              It works like this...
              A DOS PC has 640K, enough for a text document, some vector graphics and a small personal database. At that point in time Bill probably couldn't imagine computers getting fast enough to need more memory.

              A CD (640MB) can hold x books of text (as promoted).

              A DVD (4GB) can hold x books of text, but as scanned images and with full audio.

              Unless you start holding molecular or biological information on your PC, or want pointless resolution on the images, it will take a while before you fill 4GB (you probably wouldn't run that kind of stuff on a mainstream PC anyhow).
              That amount of data requirement is probably 5-10 years off, and there will have to be major SMP improvements in PCs for it to be practical.

              AMD and Intel should be pumping money into SMP developments instead of GHz and bit wars; that's where the future lies.
              • Right now, I don't really need more than 512MB of physical RAM, BUT I do want the simplicity of memory mapping exabyte-scale data in a single process. I want a different kind of software:-)
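                For what it's worth, here is a minimal sketch of that style of programming (the file name is hypothetical; assumes a 64-bit POSIX system where the whole file fits in the virtual address space, and error handling is kept minimal):

                    #include <stdio.h>
                    #include <fcntl.h>
                    #include <unistd.h>
                    #include <sys/mman.h>
                    #include <sys/stat.h>

                    int main(void)
                    {
                        /* Hypothetical multi-gigabyte data file; on a 32-bit machine a
                         * mapping this large simply does not fit in the address space. */
                        int fd = open("huge_dataset.bin", O_RDONLY);
                        if (fd < 0) { perror("open"); return 1; }

                        struct stat st;
                        if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return 1; }

                        /* Map the whole file; the kernel pages data in on demand, so the
                         * program can treat the file as one big read-only in-memory array. */
                        const unsigned char *data = mmap(NULL, st.st_size, PROT_READ,
                                                         MAP_PRIVATE, fd, 0);
                        if (data == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

                        printf("first byte: %u, file size: %lld\n",
                               (unsigned)data[0], (long long)st.st_size);

                        munmap((void *)data, st.st_size);
                        close(fd);
                        return 0;
                    }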
          • PPPPht! I've got 512MB, and it's nowhere near enough.

            Photoshop chews RAM. And god help anybody who does video editing. Video editing could eat 4GB very easily.
      • The memory bus was twice as wide on a 32-bit system, so the pointers on the linked list may have been twice the size, but because of the wider bus there was no performance hit.

        Of course, chars are still 8 bits on 64-bit systems (and many applications do operate on chars). Also ints are often still 32 bits on 64 bit systems (at least that's the convention used for IA-64, use long for a 64-bit integer), and many applications do operate on ints.

        • You get 8 chars to a 64-bit bus.
          I believe (at least in the 16-bit/32-bit days) an int is a machine word, be it 8, 16, 32 or 64 bits;
          a long long is 64 bits,
          a long (in C) is 32 bits,
          a short is 16 bits,
          and a char 8 bits.

          Either that or I'm going to have to put a load of #ifdefs in my code!!!

          You should use ints for general parameters because they're faster (generally), and the correct data type when needed.

          • I believe (at least in the 16-bit/32-bit days) an int is a machine word, be it 8, 16, 32 or 64 bits.

            Well, you shouldn't assume anything about the size of int. For a quick overview of the int sizes, take a look at the gcc/config/arch/arch.h files in gcc and grep for INT_TYPE_SIZE. Many architectures (IA-64, PA-RISC, SPARC, etc.) use 32 bit ints even if the word size is 64 bits.
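            A quick way to check what data model your compiler actually uses (a trivial sketch; the outputs noted in the comment are the common ILP32 and LP64 conventions, not guarantees):

                #include <stdio.h>

                int main(void)
                {
                    /* Typical LP64 target (e.g. 64-bit Unix): char=1 short=2 int=4 long=8 long long=8 ptr=8
                     * Typical ILP32 target (e.g. 32-bit x86): char=1 short=2 int=4 long=4 long long=8 ptr=4 */
                    printf("char=%zu short=%zu int=%zu long=%zu long long=%zu ptr=%zu\n",
                           sizeof(char), sizeof(short), sizeof(int),
                           sizeof(long), sizeof(long long), sizeof(void *));
                    return 0;
                }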


    • Expanding the size of today's simple data structures. Consider, for example, a simple bi-directional linked list of 32-bit integers using a forwards and backwards pointer. A 32-bit arch has a 200% overhead, but a 64-bit arch has 400%, which should somewhat diminish expectations for magical performance!


      This is a non-problem - memory is cheap, and if it is not cheap enough to store your linked list of a bazillion ints then you need to change your data structure or algorithms.

      However, the biggest performance disbenefit from going 32->64 bits will be cache misses. The major performance bottleneck on CPU-intensive apps these days is how often you get a cache miss as opposed to a cache hit; by making things twice as big, you're cache will only hold half as many of them.

      On those platforms which let you choose whether an application is 32-bit or 64-bit on a per-application basis, I'm using this rule of thumb: "32 bits unless benchmarks prove me wrong, or I need the address space".
      • Firstly, let me clarify - I am not worried about a handful of linked lists, but rather the more general assumption that we can assume no locality of reference when we implement data structures.


        Yes, we can argue that RAM is cheap... but as you eloquently point out, buying more RAM doesn't overcome all of the implications. Other bottlenecks exist, and I can think of several:

        • Level 1 and Level 2 cache (as you suggest)
        • Network packets (when transmitting pointers as tokens, for example.) - this one even has a double whammy if you cause packets to exceed the frame size for your network protocol.
        • Hard Disk IO and cache issues are affected by larger data structures should applications store in native (or near-native) formats.

          • And, I'm sure that there are more:-)
      • I let most of them go, but you've just tipped me over my annoyance limit for today. The word you wanted was your, not you're. You're is short for you are, and the phrase "you are cache will only hold half as many of them" clearly makes no sense. Here endeth the lesson.
    • Well, there are a few points where I would see 64-bit computing making a difference. First of all the I/O part: using native 64-bit data types for your 64-bit PCI slots, moving over 64-bit wide I/O channels. This will save you quite a bit when using gigabit network cards and high-end I/O controllers such as RAID. Also the graphics / 3D market can benefit from this: first, the graphics industry is moving to higher requirements for precision of colors and coordinates (so using native 64-bit numbers for them won't impact performance as much, but will allow a much higher precision), and it will also be able to use the 64-bit I/O busses (the first mobos I've seen for these CPUs don't have 64-bit AGP yet, but I am sure it will happen).

      Last, all this creates a nice new tech platform: 64-bit PCI slots (running at 133 or 66MHz), and DDR333 RAM.

      All in all it will make more sense in the beginning to use all this goodness for I/O-demanding applications (servers), but I am sure it will break through in the professional graphics market soon enough, with the consumer market lagging behind only a bit.

      Also remember, Linux _needs_ 64-bit computing: while Linux wasn't that sensitive to the Y2K problem, the 32-bit time value used is going to run out in the next 30-odd years. Native 64-bit integers would mean you can use 64 bits for your seconds since 1/1/1970, keeping Linux running for a while longer ;-)
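      For the curious, the rollover being referred to (a small sketch, not from the original post; it assumes the host's time_t is wide enough to hold the value):

          #include <stdio.h>
          #include <time.h>

          int main(void)
          {
              /* A signed 32-bit time_t counts seconds since 1970-01-01 00:00:00 UTC
               * and tops out at 2^31 - 1 = 2147483647, i.e. 2038-01-19 03:14:07 UTC. */
              time_t last = 2147483647;
              char buf[64];
              strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S UTC", gmtime(&last));
              printf("A signed 32-bit time_t rolls over after %s\n", buf);
              return 0;
          }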
    • 1. Massively expanding address space and hence (for the first time - IMHO) making the holy grail of direct manipulation of persistent data structures a realistic proposition.

      Well, EROS [eros-os.org] does just this on 32 bit systems. (Thankfully), I haven't had to touch EROS much yet, so I don't really know how it handles it, though.

      Of course, given there is no driver for hard drives, etc (and last I heard booting the kernel didn't work on systems with more than 256 megs of memory), the fact that it supports persistent state is not particularly useful. But someday...

      2. Expanding the size of today's simple data structures. Consider, for example, a simple bi-directional linked list of 32-bit integers using a forwards and backwards pointer. A 32-bit arch has a 200% overhead, but a 64-bit arch has 400%, which should somewhat diminish expectations for magical performance!

      That's just a bad data structure. What you want is to have each node of the linked list hold a fixed-size array (say 1-16K, depending on local circumstances), and a couple of extra integers telling you where the start and stop of the array are. This is much, much faster, and the memory overhead for the extra pointers (be they 32 or 64 bits) is quite small. It's also quite trivial to program.

      Excuse me, the data structure I just described is for queues, not general lists (queues tend to come up more often than lists for me, so that's what my mind jumped to, I guess). But you see my point, I hope.

      I can't really think of a case where a 400% overhead is too much, but 200% is OK.
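      A minimal sketch of the chunked layout described above (the names and chunk size are made up; it is shown as a queue, per the correction, and allocation-failure handling is omitted):

          #include <stdlib.h>

          #define CHUNK_CAP 1024   /* elements per chunk; tune to cache/page size */

          /* Each chunk carries a fixed-size array plus head/tail indexes, so the
           * per-node pointer overhead is amortized over up to CHUNK_CAP elements
           * instead of being paid once per element. */
          struct chunk {
              struct chunk *next;
              int head;                 /* index of the first valid element */
              int tail;                 /* index one past the last valid element */
              int data[CHUNK_CAP];
          };

          struct queue {
              struct chunk *front;
              struct chunk *back;
          };

          void queue_push(struct queue *q, int value)
          {
              if (q->back == NULL || q->back->tail == CHUNK_CAP) {
                  struct chunk *c = calloc(1, sizeof(*c));   /* head = tail = 0 */
                  if (q->back) q->back->next = c; else q->front = c;
                  q->back = c;
              }
              q->back->data[q->back->tail++] = value;
          }

          int queue_pop(struct queue *q, int *out)
          {
              if (q->front == NULL) return 0;               /* queue is empty */
              *out = q->front->data[q->front->head++];
              if (q->front->head == q->front->tail) {       /* chunk drained */
                  struct chunk *old = q->front;
                  q->front = old->next;
                  if (q->front == NULL) q->back = NULL;
                  free(old);
              }
              return 1;
          }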
    • making the holy grail of direct manipulation of persistent data structures a realistic proposition.
      IIRC, this *was* done in Multics.
      Large pointers into small blocks of storage seem wasteful somehow.
    • Don't get me wrong. I think 64 bit is likely to be at least as important a step as 32 bit was c. 20 years ago, however I don't expect more than a small niche for such systems until resource allocation is re-thought.

      Don't forget that the 64-bit data will be coming from a wide memory bus, so there is essentially no extra overhead for getting 64 bits at a time. For many data structures (for instance an array of 32 bit integers) there is no additional overhead.

      However, your basic point is right, just as it was for a doubly linked list of shorts on a 32-bit architecture. Larger pointers, and some level of data bloat (ints will now be 64 bits, for instance) are to be expected.

      So, it is not so much that "resource allocation must be re-thought"; it is simply that many applications don't yet need 64-bit power. The immediate adopters will be areas like scientific processing/visualization, CAD/CAM/CAE and large databases (this is the enterprise server role AMD is hoping Opteron nails). CAD users have been hard up against current addressing limits, and will welcome the ability to handle larger models. A little extra bloat is in the noise, especially since the whole point is to address massive amounts of RAM. The SuSE implementation allows 512 GB of virtual address space per process, for instance. Hammer's SMP capabilities and scalable memory architecture are just more icing on the cake.

      "Normal" users can buy Hammer systems and run 32-bit software/OSes just fine (faster than any P4), then upgrade to 64 bits when they need it. People will find ways to use all that power, natural speech interfaces come to mind. Games will probably push the 4 GB barrier sooner than you'd think as well.

      By the way, the claim is that 64-bit code for the Hammer will run 30% faster than the equivalent 32-bit code. This is due to x86-64 having more general purpose registers among other things.

      I think that x86-64 is a brilliant move on the part of AMD, and if Hammer performs as advertised AMD will take major marketshare and profits from Intel. I can't wait to get my hands on a system, myself. :-)

  • After all, it's a very simple matter to recode your source to run on a new architecture, once the preliminary work has been done.

    You can only get anywhere if you have backward compatibility. Whilst Windows software will have to be rewritten for 64-bit execution, much of what exists on Linux should just recompile. AMD's decision to implement backward compatibility means that they will certainly be the choice of the home user, even if they don't make it big in the world of the office.
  • by jukal ( 523582 ) on Tuesday August 13, 2002 @05:28AM (#4060326) Journal
    Remember this, from 1992?
    "Digital Equipment unveils the 150-MHz Alpha 21064 64-bit microprocessor." That was one kind of checkpoint; this year, I believe, might be another.
  • OK, I'll probably buy AMD, but if Intel wins then Windows is stuffed, because most of the Linux software is open source so I can recompile for Intel; on the other hand, most of the Windows software I use is closed source.

  • Stating that they are supporting a 64-bit CPU should mean that all the code that makes up Red Hat properly compiles on this particular 64-bit CPU (i.e. is 64-bit pure) and that the compiler handles the CPU properly. Since we are dealing with open source, all the distributions could put together a 64-bit version easily by simply upgrading to the latest 64-bit friendly source and doing a build.

    What I am wondering about is how much of the code is not 64-bit pure, and who will take care of making it 64-bit pure in time for Hammer to be released. It is a real problem after all.
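    A classic example of code that is not 64-bit clean (a hypothetical snippet; it assumes an LP64 target where pointers are 64 bits but int is still 32):

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            int x = 42;

            /* Not 64-bit clean: code written when sizeof(int) == sizeof(void *)
             * often stashed pointers in ints.  On an LP64 platform this silently
             * drops the upper 32 bits of the address. */
            int bad = (int)(intptr_t)&x;

            /* 64-bit clean: use a pointer-sized integer type instead. */
            uintptr_t good = (uintptr_t)&x;

            printf("truncated: 0x%x  full: 0x%lx\n", (unsigned)bad, (unsigned long)good);
            return 0;
        }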

    Everybody and their cat will support the Hammer when it finally arrives, linux, windows, (MacOS X?), bsd, (solaris??).

    And about 16GB of memory: either you put 4GB DDR333 DIMMs in your four slots, or you have a mobo with a lot of DIMM slots. Having a larger address range is great, but I wouldn't want it if I don't get more memory as well ;) Let's hope memory prices can fall a little again.
    • Well, most of the distributions already have.

      But it is not just recompiling; there is a lot of work to be done on the compiler, binutils and the kernel.

      For AMD Hammer most of this has been done by SuSE already, so there is not much work left.

      Ciao, Marcus
    • Most of the 64-bit pureness work has already been done, as has the endianness work.
      If the code runs on Sun, Mac OS X, x86 and Alpha then there's a good chance it will run on Hammer or IA-64 without any significant changes (if any).

  • by back@slash ( 176564 ) on Tuesday August 13, 2002 @06:03AM (#4060383)
    Anonymous Coward writes: Now if I can get a hold of one of these my little site will finally be able to handle a good slashdotting with 16GB of DDR333!

    Not only that Anonymous Coward, but with the amount of posts you make to /. you need one of those just to keep up!

  • by AppyPappy ( 64817 ) on Tuesday August 13, 2002 @06:45AM (#4060467)
    16 gig???

    I remember when 48K was considered overkill because you couldn't fill it


    I remember when 360k was enough for software and data

    I remember when I got a 20 meg hd for my XT "just in case"

    I remember when I didn't wear these damn Depends. NURSE!!!

  • This has "laptop case-mod of the year" (actually that's more than one word) written all over it, for someone with the right state of mind and a little too much money, or just enough, as you could put it.
  • At least given my experience with 64 bit SPARC chips, and the 64 bit Solaris operating system, 64 bits hardly made a difference either way. And I'm not slamming Sun, either.

    IANAKE. (I am not a kernel expert, but this is my understanding of the situation.)

    Sun incrementally worked its way up to 64 bits in the operating system. I believe first they offered 64 bit OS calls, then later moved the OS itself to 64 bits. Solaris 7 was, at least, the most visible transition, when you had a choice of installing a 32 bit OS, or a 64 bit OS.

    What will surprise some people (and be intuitive to others) is that many applications actually ran a bit SLOWER with the OS in 64 bit mode. What? Yup. And for good reason, too.

    The problem was that you had the overhead of a 64 bit operating system to run 32 bit applications. More overhead means less application performance. More work was required to do the same tasks.

    And many applications are hard pressed to take advantage of 64-bit features. It's like putting a hot-rod engine into your daddy's Oldsmobile and keeping the original tranny. But yes, it works.

    Mind you, there are applications which can take some more advantage of 64 bits, and the future in operating systems isn't 32 bits. So it is still good to have an operating system go that direction. It is just that for most people, there isn't a big WOW FACTOR when you go 64.
    • > It's like putting a hot-rod engine into your daddy's Oldsmobile and keeping the original tranny. But yes, it works.

      That would probably be just fine. They had pretty good trannies back then [musclecarplanet.com]

    • Sun incrementally worked its way up to 64 bits in the operating system. I believe first they offered 64 bit OS calls, then later moved the OS itself to 64 bits. Solaris 7 was, at least, the most visible transition, when you had a choice of installing a 32 bit OS, or a 64 bit OS.

      What will surprise some people (and be intuitive to others) is that many applications actually ran a bit SLOWER with the OS in 64 bit mode. What? Yup. And for good reason, too.

      The problem was that you had the overhead of a 64 bit operating system to run 32 bit applications. More overhead means less application performance. More work was required to do the same tasks.


      I guess that's a bit like running Win9x applications under WinNT/2K/XP - every string in every API call gets converted back and forth between ANSI and Unicode.
  • by vandan ( 151516 ) on Tuesday August 13, 2002 @08:25AM (#4060876) Homepage
    AMD have already stated their intention [extremetech.com] to make Palladium-ready [geeknewz.com] chips [theregister.co.uk].
    Here's what AMD is really thinking ...
    We'll take advantage of Linux Losers' programming ability now (we could sure use all the help we can get there). And then we'll turn around and dictate the conditions under which these 'customers' can use their computers, and provide a big-brother service to keep the ol' boys in the White House happy. It makes no difference to us that Palladium [theregister.co.uk] will destroy Linux and Open Source software. There's more money in it for us if they have to upgrade every 18 months to an ever-more-inefficient Microsoft Piece of Shit.

    Come on, people, really. Don't support AMD. They are not the noble David against the nasty Goliath. They are just as much a nasty Goliath themselves, except for the fact that they don't have much market share... But they sure are acting like they do. If AMD and Intel keep pushing their 'Trusted Computing' wheelbarrow, I swear I will buy an underpowered Transmeta or even a fucking Macintosh just to avoid Palladium.
    • Just because the chip is Palladium-ready doesn't mean that the OS will have to use those features.

      I think we should be mad at AMD because they want to make money. Let's also forget the effort to make x86-64 accessible to open source.

      AMD realizes that the vast majority of the processors they sell end up running Windows. It really wouldn't make sense for them to make something that would not support future versions of Windows. It can still run a non-TCPA OS (Palladium is just the Microsoft version of TCPA). But damn them for wanting to stay in business.

      So you run Intel now? That's hardly taking a stand.

      It might also be that people like the features of the upcoming Opterons. They can always just load whatever OS they want.

      I realize this is just troll food, but let's try to have fewer meaningless boycotts.
