Red Hat Reveals Support For AMD's Hammer 170
Anonymous Coward writes "Red Hat had been rumored to be working on support for AMD's Hammer architecture, and now they have made it official. Now if I can get hold of one of these, my little site will finally be able to handle a good slashdotting with 16GB of DDR333! 'Red Hat will provide native 64-bit support for processors based on AMD's x86-64 technology, while providing support for existing 32-bit Linux-based applications.'" Combine this with Linus' feelings and Hammer is looking better and better.
vs Itanium (Score:1)
I can't help but feel that "real" managers will just say Itanium plus WinXP, despite the advantages of Hammer and Red Hat.
Perhaps not (Score:3, Interesting)
AMD has consistently managed to deliver technically better hardware for a lower price. Given the current economic downturn (blah blah) and the lessons learned in the dotcom meltdown (e.g. that image is not everything), even your average modestly intelligent manager type will perhaps choose the cheaper, better product. Besides, I don't think AMD is viewed as that kind of underdog anymore! And Linux on the server front looks good, too. So I think the chances are good.
Another issue is of course whether a 64-bit addressing architecture is needed for mainstream PCs yet. But as we all know: it's not whether you need it - it's whether the industry thinks you need it!
Re:Perhaps not (Score:2)
I don't know about everyone else, but my applications, including in-memory databases, can eat as much RAM as they can be given. We're already running 16 dual-processor 3GB RAM Linux boxen, and those 16GB NewSys boxes look spot on to me.
Re:Perhaps not (Score:1)
Re:Perhaps not (Score:2)
but by keeping data normally stored on disk in memory you can boost your speed by up to a hundredfold. As for large memory for graphics or CAD work, yeah, maybe it isn't needed yet, but see here [aceshardware.com]
what does linux have to do with this? (Score:2, Informative)
Re:what does linux have to do with this? (Score:2)
I spent $120 on a 64 bit workstation years ago. I don't know why everyone is still using these crappy Intel boxes. Get yourself at least a Multia. Geez.
Re:what does linux have to do with this? (Score:3, Insightful)
Re:what does linux have to do with this? (Score:2)
Re:what does linux have to do with this? (Score:1)
We're a lot more likely to make PAGE_SIZE bigger, while generally praying that AMD's x86-64 succeeds in the market, forcing Intel to make Yamhill their standard platform. At which point we _could_ make things truly 64 bits.
posted here [google.com]
Re:what does linux have to do with this? (Score:1)
Re:what does linux have to do with this? (Score:1)
idiot moderators.
oh well, i guess i will watch my karma go down the tubes
A shock? (Score:2, Insightful)
Re:A shock? (Score:2, Insightful)
so it makes sense, but only if AMD's Hammer actually 'hits' the market as planned.
Supporting an additional hardware platform costs money, so they most definitely thought about it before committing some man-hours to the porting/testing.
Linus's hammer support?! (Score:5, Informative)
Linus didn't endorse one platform or the other; he only explained that if Hammer were to become dominant instead of Itanium, it would spare the kernel developers the trouble of solving the Itanium paging problems.
Re:Linus's hammer support?! (Score:5, Informative)
I think you mean Linux's paging problems. Specifically, the fact that gcc being broken means that Linux uses 32 bits for fields which should be 64 bits.
Re:Linus's hammer support?! (Score:2)
There seems to be some confusion here.
Yes, gcc is not as good as it should be in its support of 64-bit integers on 32-bit platforms, but Itanium is a 64-bit platform, so none of that applies. Also, perfectionist that he is, Linus often tosses around words like "broken" where others would say "suboptimal".
Here are some of GCC's issues with 64-bit ints on 32-bit platforms:
Re:Linus's hammer support?! (Score:2)
Re:Linus's hammer support?! (Score:2)
I explained to him the problems with Itanium and development (problematic BSD and Linux support, which is still a huge market on the server side), even with Windows. AMD's Hammer solved a lot of the hardware-based problems (granted, a lot of work still needs to be done). His argument was, "AMD just rips off Intel's designs and puts their name on them."
Makes me wonder what the Intel PR team tells their employees..
Re:Linus's hammer support?! (Score:2)
Re:Linus's hammer support?! (Score:2)
What is more stupid is denying that your company's competitor is a valid competitor.
Now... (Score:2)
Seriously, is Red Hat good about this? I know some hardware manufacturers are like this about "their" drivers.
*sniff* *sniff* *sniff* something's burning. Oh, it's just my karma...
Re:Now... (Score:2, Informative)
Seriously, is Redhat good about this?
Red Hat was, as far as I know, one of the first corporations to subsidize kernel development, i.e. Alan Cox was collecting a check for his efforts. Red Hat is a very productive member of the open source community, IMHO.
Re:Now... (Score:2)
-Paul Komarek
Hardware upgrade (Score:3, Funny)
Just as long as you lose that 14.4 modem you are hosting off.
Replacing your commodore 64 is just a start
Re:Hardware upgrade (Score:3, Funny)
That's all some people have got - they couldn't fit a bigger computer under the chicken coop, so the Taliban will have confiscated them.
Re:Hardware upgrade (Score:2)
I just bought an exercise bike and mounted my wireless keyboard and trackball on it.
I'm finally gonna be able to exercise and compute at the same time without having to buy a wearable computer. (8 km so far.)
Re:Hardware upgrade (Score:1)
Oh well...
IA-64 anyone? (Score:4, Interesting)
Let's see: the history of Slashdot and most of computer-geekdom has always ribbed Intel for maintaining backwards compatibility with processors more than a decade old. Sure, x86 is great due to all the applications out for it, but in all honesty why can't we move away from it?
With Slashdot, Linus and most of the online review sites pushing for x86-64, one has to wonder if AMD is slipping cash under the table to all these parties. If not, then what happened to those people who wanted innovation in the realm of processors and not just cheap hacks upon hacks upon hacks? It's kinda funny, but the way AMD is going is sorta the way Microsoft is: maintain backward compatibility at all costs.
My guess is that most people pushing x86-64 have yet to write a program more complicated than "hello world!". Let's stick to our desire for innovation and truly stand behind the company willing to shed the baggage: Intel.
Re:IA-64 anyone? (Score:1)
Re:IA-64 anyone? (Score:1)
My guess is that most people pushing x86-64 have yet to write a program more complicated than "hello world!"
I know at this point Linus isn't THE author of the entire kernel, but I think his contribution to the Linux kernel was a little more complex than "Hello World!"
Re:IA-64 anyone? (Score:3, Insightful)
Imagine you are a software company. "We have to retrain all our programmers, buy new compilers AND ditch our old codebase? Can we still write for the old stuff for now? Good . . ."
I would bet dollars to doughnuts that if DVD players were incapable of playing CD's, there would be quite a few unhappy campers. The relation is the same -- slowly phase out the old while promoting the new.
Re:IA-64 anyone? (Score:2)
- gcc is free - granted, probably not very optimised
- you don't really need to retrain all your programmers to port the apps, unless you write low-level code
Re:IA-64 anyone? (Score:5, Insightful)
It had nothing to do with x86 compatibility and whooped on all x86 chips on every benchmark that was out there, but it's all but dead now.
And now you're saying an architecture that doesn't beat the current crop of x86 chips in performance, breaks compatibility with the x86 architecture, and costs 10 times as much for similar capabilities will somehow succeed?
Once you break compatibility with the vast amount of software that is out there for x86, you're suddenly no better than all the other 64-bit chips that have been out there.
Why go with a relatively untested IA-64 arch when I could go with a Sun, IBM, or SGI box, all of which have been 64-bit for years and have no x86 baggage at all? I'm certainly not saving any money going with Intel's chip, plus the other 64-bit architectures have much more software support in comparison to IA-64.
As a customer, if I buy IA-64 and it fails in the marketplace and support dries up, I'm left with a fairly useless box that can only run the few programs made specifically for IA-64; but if I buy x86-64 and it fails in the market, I still have a very usable x86 machine and tons of 32-bit software to work with.
it's all about volume (Score:2, Informative)
We all know that 64bit is going to replace 32bit. AMD and Intel are important because huge volumes and low costs are what will finally make 64bit machines ubiquitous, i.e. aunt Edna will be able to buy one at Walmart for a couple of hundred bucks. Like it or not one of these architectures will be the "new x86" and nearly all software will be written for it, displacing 32bit machines as well as all the 64bit niche architectures on the market now.
As for Sparc, Alpha, etc. being "better": Since when was the best solution guaranteed victory?
Re:IA-64 anyone? (Score:2)
Digital's FX!32 was a wonderful product, translating Intel Win32 API calls into Alpha ones on the fly with caching; the translated code ran at an estimated 70% of native speed. Considering Alpha CPUs had a large performance advantage over Intel CPUs at that time (the Pentium Pro had just come out), the future of Alpha looked very promising.
In the end, it is probably a combination of higher cost, lesser mindshare and lack of 64-bit killer apps that did them in. And the Compaq buy-out of Digital, of course.
I'm personally hoping that the Hammer competition forces Intel to start releasing IA-64 CPUs targeted at the workstation/power desktop market. Having had to program in assembly for CISC x86-like CPUs (Z80/Z180) any clean alternative is preferable.
Re:IA-64 anyone? (Score:1)
Now is the time for 64-bit machines. The early innovator is often not the winner in the market, because they are innovating before the need exists. Right now, when the world "needs" 64-bit machines (in the way we "need" the Xbox, HDTV and USB 2.0), is what will determine the winners.
As for AMD vs IA64, AMD will take the Aunt Edna market because of its fast procs and cheap prices. IA64 will take the serious business server market because of its superior proc design for handling *huge* apps like MS SQL. People buying IA64's probably don't care if they can play Death Match III when the IA64 market dries up.
Re:IA-64 anyone? (Score:1)
You whippersnapper! I remember when a 30 meg drive could cost $3,000.
Itanium CPU's cost WAY too much money (Score:2)
When the price of the Itanium 2 CPU is somewhere between US$1,000 and US$3,000, no wonder there's not much interest nowadays. My guess is that AMD's x86-compatible chips using the Hammer core design will probably be at most US$550 to US$600 for the fastest versions.
Re:IA-64 anyone? (Score:2)
x86-64 isn't the only 64-bit game in town.
If you want the best performance money can buy, get an IBM POWER4 system.
If you want tried and tested reliability, get a Sun UltraSPARC system.
If you want the most bang for the buck, get an Opteron system (AMD Hammer systems are still going to be cheaper than Pentium and Xeon systems). (Of course, you'll still have to wait 5+ months for that.)
And Itanium 2, err, I can't actually think of a business or server application that would be best on an Itanium 2 compared to the others at this point. On the plus side, Itanium 2 has got good FP performance, though not as good as POWER4 or the latest Alphas, but its real-world integer performance is still too weak for servers.
Re:IA-64 anyone? (Score:1)
Maybe not all developers are that keen on doing lots of tedious and possibly not that exciting work?
This was probably Linus' real message. He is not really that excited about the extra work. If you want to do it, then he probably doesn't care that much about what technology wins.
Gaute
Re:IA-64 anyone? (Score:2)
Or rather trade it for even more ugly baggage called "EPIC". Explicit parallelism is soooo '80s. Putting all the intelligence in the compiler and still requiring enormous amounts of silicon isn't really that great.
Now I'm all for a neater instruction set, but IMHO "EPIC" is anything but neat. It's having software developers (compiler backend writers) do all the work. Since I'm a software developer (albeit not of compilers), I don't find this idea good.
Oh, and the x86 is actually more than two decades old, not just one, and I agree it's quite ugly and should be replaced by something more beautiful - emphasis on beautiful.
My guess is that most people pushing x86-64 have yet to write a program more complicated than "hello world!".
That's worth a +1, Funny.
Re:IA-64 anyone? (Score:1)
Re:IA-64 anyone? (Score:2)
IA-64 won't get into consumer-level equipment for a long, long time yet. Intel isn't marketing it that way; they're pushing it as a replacement for SPARC and Alpha, probably wanting to create a new high-margin segment in the industry.
AMD's aspirations appear to be more consumer level with the x86-64, and as a fast way to get 64-bit it might be an alternative. It sure as hell beats extending memory in other ways on a 32 bit architecture.
Re:IA-64 anyone? (Score:2)
They're not backward compatible with processors a decade old. They're backward compatible with a processor two years old; which itself was backward compatible with the one two years before that; etc.
Sure, x86 is great due to all the applications out for it, but in all honesty why can't we move away from it?
You just answered your own question.
SuSE's work on supporting the Hammer (Score:5, Informative)
Roger Whittaker (SuSE Linux Ltd)
MandrakeSoft as well (Score:3, Informative)
The joint Press release (MandrakeSoft/AMD- June 27th) is available here [mandrakesoft.com].
Why the Hammer will come out first... (Score:1)
This is why the Hammer will come out first: AMD is smart enough to realize that in the first few years of Hammer, people will need to run some things for which there are no 64-bit programs yet.
Mad props to AMD for not having stuck their heads where the sun doesn't shine this time around!
Re:Why the Hammer will come out first... (Score:2)
Well, there's only one problem with your assertion... the Itanium already is out, and the Itanium 2 is close to release. OEMs are already building Itanium 2 boxes.
And for that matter, those Itanium 2 boxes are fast. On the SPEC CPU2000 benchmarks, the two fastest boxes are 1GHz I2s, and the next six spots are held by boxes running POWER4s (all running at >1GHz), Alphas, and a couple of SGIs. And there are a large number of vendors, from Microsoft to Oracle, who have already committed to creating IA-64 versions of their software. Pretty much all of the big names have signed on.
Is anybody even planning on selling a server with Hammers yet? Has AMD even given anybody any silicon to play with? Intel was giving out development samples of the Itanium over two years ago. Intel might not have the reputation or experience of Sun or IBM with high-end servers, but they've certainly got more than AMD, who have never had a successful server line before. It's obvious you're a fan of AMD, but don't let your biases get in the way of reality.
Re:Why the Hammer will come out first... (Score:2)
Actually, AMD has managed to take 10% of the low-end server market with their MP chips.
It's not much of a foot in the door, but it's something that should allow them to push Hammer into the market.
Besides, if all of your 32-bit code still works on the Hammer (and runs fast), alongside 64-bit code, then what's to prevent this chip from making a good entry into this market?
Re:Why the Hammer will come out first... (Score:3, Interesting)
> boxes are 1ghz I2s
That's only true because of Itanium 2's floating-point performance. Real server workloads don't use floating point. For a slightly more realistic workload, look at the SPEC CINT numbers. There, the 1GHz Itanium 2 falls behind 2.4GHz Pentium 4s and Xeons.
Furthermore, none of these SPEC benchmarks are nearly as memory-intensive as real server workloads. That's where Itanium really gets Hammered.
64-bit versions of current tech miss the point? (Score:5, Insightful)
Hmmm. I'm probably more interested than most in the prospects of large address spaces, however I don't imagine typical web sites are where this technology will be best exploited. Think seriously, moving to 8 byte addresses has the following effects:
Do we need 64 bit (Score:2)
And also consider the fact that quite a few companies have 64 bits, Digital, Sun. And did the world change? Not really...
Re:Do we need 64 bit (Score:2)
Probably; that's why distributed computing is a discipline in itself.
> And also consider the fact that quite a few companies have 64 bits, Digital, Sun. And did the world change? Not really...
1960: And quite some companies have computers, and did the world change? Not really...
Same with Internet, 3D-graphics, cars...
There is a great difference between being available and being commodity.
Nonetheless, you are certainly right that switching from 32bit to 64bit won't revolutionise the world.
Re:Do we need 64 bit (Score:2)
Wrong.
Google's DB needs RAM. Lots of it. More than 3GB if possible, with fast access. They don't need a lot of processing power (for their search stuff; I don't know exactly what other voodoo they are working on).
There are postings to lkml from Google programmers which show that.
If they can get native 64-bit addressing on a cheap platform with just a recompile, they will do it.
good point (Score:2)
8-way SMP with decent thread and process management, and reasonable security in the CPU instruction set.
Mainstream SMP systems will change the world; mainstream 64 bits won't (unless they add all of the above).
Remember when 32-bit came in (Score:3, Interesting)
The new instructions and architecture improvements in 32-bit x86 made for a good performance gain.
The memory bus was twice as wide on a 32-bit system, so the pointers on the linked list may have been twice the size, but because of the wider bus there was no performance hit.
One of the great benefits of 32-bit was that you could have numbers beyond ±32K in one register, giving the greatest performance increase.
The extra-wide bus is going to give some performance gains on 64-bit systems, but I don't see the extra address space or larger numbers being that beneficial. Well, maybe the extra address space will help with threading and process management, and mean that bloatware can be even more bloated.
Re:Remember when 32-bit came in (Score:2)
Whether you need it is another question, but anybody who really could profit from big memory currently has to shell out quite a few bucks. Hammer (and Itanium) will change that picture dramatically.
If Hammer performs as it sounds, and is not too expensive, it will rapidly enter the mainstream, because 64-bit is an even better marketing buzzword than XXXX MHz, esp. since Hammer promises both.
Re:Remember when 32-bit came in (Score:2)
A 64-bit system might be great for enterprise servers, but for the home? There would have to be some major software bloating going on for my home machine to use more than 4GB or so; a typical home user could probably live in 128MB at the moment without any performance problems. The current 4GB limit allows them 40 times as much bumph; hell, you can even fit a DVD in 4GB of memory.
Re:Remember when 32-bit came in (Score:1)
That is why I can't wait for the dual Opteron boards to come out.
Re:Remember when 32-bit came in (Score:2)
The amount of data that you can fit into 4GB is so large that people would have to start using 3D data arrays to need any more.
It works like this....
A DOS PC has 640K: enough for a text document, some vector graphics and a small personal database. At this point in time Bill probably couldn't imagine computers getting fast enough to need more memory.
A CD (640MB) can hold x books of text (as promoted).
A DVD (4GB) can hold x books of text, but as scanned images and with full audio.
Unless you start holding molecular or biological information on your PC, or want pointless resolution on the images, it will take a while before you fill 4GB. (You probably wouldn't run that kind of stuff on a mainstream PC anyhow.)
That level of data requirement is probably 5-10 years off, and there will have to be major SMP improvements in PCs for it to be practical.
AMD and Intel should be pumping money into SMP development instead of GHz and bit wars; that's where the future lies.
Re:Remember when 32-bit came in (Score:1)
in every way by a single processor ? (Score:2)
1: A single CPU has to sit in wait states at some point, holding up everything; in a 2-CPU system one CPU can frequently continue (try running Windows NT with 2 CPUs: there's a big difference in smoothness!). You can set the affinity of processes so that one CPU is always left doing the dirty work and the system doesn't get clogged up.
2: A single CPU has a single cache shared by everything; when it page faults, it page faults.
In theory you should have fewer problems with page faults and cache misses on a 2-CPU system.
A very cut-down example: here thread 1 has code with a lot of page faults; thread 2 is completely page-aligned and doesn't page fault.
Thread 1 on CPU 1 page faults, causing a slowdown on CPU 1.
Thread 2 doesn't, with no CPU slowdown.
On a 1-CPU system:
Thread 1 page faults, swap to thread 2, no page fault, swap to thread 1, page faults.
3: You can have separate data buses &c. for each CPU, giving you twice the memory bandwidth. Etc.
4: If investment were placed in the area (see Cray, or Mosix!), 2 CPUs would be a hell of a lot faster than 1; in fact 2 CPUs would be a kid's toy, and home machines might have tens of CPUs/GPUs, with decent process/thread management and all the stuff that PCs should do. Why not max out SMP and parallel processing? There'd be a hell of a lot better AIs and smoother-running machines out there if they did.
In 20 years' time, if my PC isn't running at least 20,000 threads and at least 1,000 concurrent threads, then I'm going to be upset.
BTW.
Why are you storing all your data in RAM? you should be using a fast HDD.
Re:Remember when 32-bit came in (Score:2)
Photoshop chews RAM. And god help anybody who does video editing. Video editing could eat 4GB very easily.
Re:And god help anybody who does video editing. (Score:2)
1) PCs are the fastest machines you can get until you get to workstations costing several times as much. None of the low end Sun or SGI machines can touch a good dual proc PC for performance.
2) PCs are pervasive. If you aren't rich and want to use your machine for some prosumer video editing, what are you to do? PCs allow people who otherwise could not enter this field to enter it.
3) RAM is dirt cheap.
Thus, if the only thing stopping PCs from being good low end video editing workstations is an artificial RAM limit, then why not move to 64 bits and get rid of it?
Re:And god help anybody who does video editing. (Score:2)
I don't know if PCs *aren't* proper hardware. PC platforms today are very powerful. Let's compare a dual-processor Hammer setup to a dual-processor Sun Blade 2000. The Hammer clearly wins in the CPU speed department; the UltraSPARCs in the Blade already lag behind a high-end P4. The Hammer wins in memory bandwidth (5.4 GB/sec vs 2.4 to 4.8 GB/sec). The Hammer wins in bus bandwidth (6.4 GB/sec per processor vs 4.8 GB/sec). The Hammer, if equipped with a relatively cheap Quadro4 graphics card, is extremely competitive (within 10-20%) in the graphics department. So, for something like $6000 you can have a digital video workstation that rivals a Sun costing many times more. If Hammer were 32-bit, it would be a huge lost opportunity for the digital video market.
Re:Remember when 32-bit came in (Score:2)
Of course, chars are still 8 bits on 64-bit systems (and many applications do operate on chars). Also ints are often still 32 bits on 64 bit systems (at least that's the convention used for IA-64, use long for a 64-bit integer), and many applications do operate on ints.
Re:Remember when 32-bit came in (Score:2)
I believe (at least in the 16-bit/32-bit days) an int is a machine word, be it 8, 16, 32 or 64 bits;
a long long is 64 bits,
a long (in C) is 32 bits,
a short is 16 bits,
and a char 8 bits.
Either that, or I'm going to have to put a load of #ifdefs in my code!!!
You should use ints for general parameters because they're (generally) faster,
and the correct data type when needed.
Re:Remember when 32-bit came in (Score:2)
Well, you shouldn't assume anything about the size of int. For a quick overview of the int sizes, take a look at the gcc/config/arch/arch.h files in gcc and grep for INT_TYPE_SIZE. Many architectures (IA-64, PA-RISC, SPARC, etc.) use 32 bit ints even if the word size is 64 bits.
Re:64-bit versions of current tech miss the point (Score:1)
Expanding the size of today's simple data structures. Consider, for example, a simple bidirectional linked list of 32-bit integers using a forwards and a backwards pointer. A 32-bit arch has a 200% overhead, but a 64-bit arch has 400%, which should somewhat diminish expectations of magical performance!
This is a non-problem: memory is cheap, and if it is not cheap enough to store your linked list of a bazillion ints, then you need to change your data structures or algorithms.
However, the biggest performance drawback from going 32->64 bits will be cache misses. The major performance bottleneck on CPU-intensive apps these days is how often you get a cache miss as opposed to a cache hit; by making things twice as big, your cache will only hold half as many of them.
On those platforms which let you choose if you want to be 32 bit or 64 bit app on a per-application basis, I'm using this rule of thumb: "32 bits unless benchmarks prove me wrong, or I need the address space".
Re:64-bit versions of current tech miss the point (Score:2)
Yes, we can argue that RAM is cheap... but as you eloquently point out, buying more RAM doesn't overcome all of the implications. Other bottlenecks exist, and I can think of several:
And, I'm sure that there are more:-)
Re:64-bit versions of current tech miss the point (Score:2)
Re:64-bit versions of current tech miss the point (Score:2)
Last, all this creates a nice new tech platform: 64-bit PCI slots (running at 133 or 66 MHz), and DDR333 RAM.
All in all, it will make more sense in the beginning to use all this goodness for I/O-demanding applications (servers), but I am sure it will break through in the professional graphics market soon enough, with the consumer market lagging behind only a bit.
Also remember, Linux _needs_ 64-bit computing: while Linux wasn't that sensitive to the Y2K problem, the 32-bit time value used is going to run out within the next 40 years. Native 64-bit integers would mean you can use 64 bits for your seconds since 1/1/1970, and keep Linux running for a while longer.
Re:64-bit versions of current tech miss the point (Score:1)
Well, EROS [eros-os.org] does just this on 32 bit systems. (Thankfully), I haven't had to touch EROS much yet, so I don't really know how it handles it, though.
Of course, given there is no driver for hard drives, etc (and last I heard booting the kernel didn't work on systems with more than 256 megs of memory), the fact that it supports persistent state is not particularly useful. But someday...
2. Expanding the size of today's simple data structures. Consider, for example, a simple bidirectional linked list of 32-bit integers using a forwards and a backwards pointer. A 32-bit arch has a 200% overhead, but a 64-bit arch has 400%, which should somewhat diminish expectations of magical performance!
That's just a bad data structure. What you want is for each node of the linked list to hold a fixed-size array (say 1-16K, depending on local circumstances), and a couple of extra integers telling you where the start and stop of the array are. This is much, much faster, and the memory overhead for the extra pointers (be they 32 or 64 bits) is quite small. It's also quite trivial to program.
Excuse me, the data structure I just described is for queues, not general lists (queues tend to come up more often than lists for me, so that's what my mind jumped to, I guess). But you see my point, I hope.
I can't really think of a case where a 400% overhead is too much, but 200% is OK.
Re:64-bit versions of current tech miss the point (Score:2)
I feel there is a need to re-think the way in which resources are allocated (from a holistic perspective) before we can reap big benefits from a 64-bit architecture. To a large extent 32 bit programmers had it easy - any memory address is a single register value - which was far easier to manage than the previous generation of baroque memory models where programmers had to consider system level minutia in order to ensure their programs were efficient. (Read Bentley's "Programming Pearls" if you want a superb example.) This simplicity was, in my opinion, central to the overwhelming success of 32 bit processors.
In order to exploit 64 bit address spaces it is imperative that the conceptual model within which application developers wave their cabalist wands doesn't become polluted. At the moment, I feel the future lies in the widespread adoption of generics where additional settings (say specified maximum sizes for linked lists) would allow compilation to use short representations of memory offsets (in place of pointers) with the added advantage that this should force locality of reference and pave the way to "page-miss prediction" which promises still further performance advantages.
P.S. Thanks for the hint about EROS; I was aware of its existence. Another one worth a mention is POST (which I played about with for a bit but then discarded as a nice idea with a flaky implementation).
Re:64-bit versions of current tech miss the point (Score:2)
IIRC, this *was* done in multics.
Large pointers into small blocks of storage seems wasteful somehow.
Re:64-bit versions of current tech miss the point (Score:2)
Don't forget that the 64-bit data will be coming from a wide memory bus, so there is essentially no extra overhead for getting 64 bits at a time. For many data structures (for instance an array of 32 bit integers) there is no additional overhead.
However, your basic point is right, just as it was for a doubly linked list of shorts on a 32-bit architecture. Larger pointers, and some level of data bloat (ints will now be 64 bits, for instance) are to be expected.
So, it is not so much that "resource allocation must be rethought"; it is simply that many applications don't yet need 64-bit power. The immediate adopters will be areas like scientific processing/visualization, CAD/CAM/CAE and large databases (this is the enterprise server role AMD is hoping Opteron nails). CAD users have been hard up against current addressing limits, and will welcome the ability to handle larger models. A little extra bloat is in the noise, especially since the whole point is to address massive amounts of RAM. The SuSE implementation allows 512 GB of virtual address space per process, for instance. Hammer's SMP capabilities and scalable memory architecture are just more icing on the cake.
"Normal" users can buy Hammer systems and run 32-bit software/OSes just fine (faster than any P4), then upgrade to 64 bits when they need it. People will find ways to use all that power, natural speech interfaces come to mind. Games will probably push the 4 GB barrier sooner than you'd think as well.
By the way, the claim is that 64-bit code for the Hammer will run 30% faster than the equivalent 32-bit code. This is due to x86-64 having more general purpose registers among other things.
I think that x86-64 is a brilliant move on the part of AMD, and if Hammer performs as advertised AMD will take major marketshare and profits from Intel. I can't wait to get my hands on a system, myself. :-)
Re:64 bit versions of current tech misses the point (Score:1)
In my opinion, in order for 64 bit architectures to reach full potential, a change of software structure is required.
This is probably what Hammer needs to succeed. (Score:2, Insightful)
You can only get anywhere if you have backward compatibility. Whilst Windows software will have to be rewritten for 64-bit execution, much of what exists on Linux should just recompile. AMD's decision to implement backward compatibility means that they will certainly be the choice of the home user, even if they don't make it big in the world of the office.
Re:This is probably what Hammer needs to succeed. (Score:2)
Of course it is, assuming your source is one 100 line C program.
-Kevin
Re:This is probably what Hammer needs to succeed. (Score:1)
No shit. I was disagreeing.
-Kevin
Re:This is probably what Hammer needs to succeed. (Score:1)
Re:This is probably what Hammer needs to succeed. (Score:2)
So all development was halted when they quit support for it years ago.
I am looking at my first copy of BackOffice now. Six discs, with one set for Intel and one set for Alpha.
10 grand for 25 users.
Yikes
Puto
It only took 10 years :) (Score:3, Informative)
"Digital Equipment unveils the 150-MHz Alpha 21064 64-bit microprocessor". That was one checkpoint; this year, I believe, might be another.
Love linux, back Intel (Score:2)
If Intel wins, then Windows is stuffed,
because most of the Linux software I use is open source, so I can recompile it for Intel; on the other hand, most of the Windows software I use is closed source.
gcc / linux / etc (Score:2)
What I am wondering about is how much of the code is not 64-bit pure, and who will take care of making it 64-bit pure in time for Hammer to be released. It is a real problem after all.
Everybody and their cat will support the Hammer when it finally arrives: Linux, Windows, (Mac OS X?), BSD, (Solaris??).
And about 16GB of memory: either you put 4GB DDR333 DIMMs in your four slots, or you have a mobo with a lot of DIMM slots. Having a larger address range is great, but I wouldn't want it if I don't get more memory as well ;) Let's hope memory prices can fall a little again.
Re:gcc / linux / etc (Score:1)
But it is not just recompiling; there is a lot of work to be done on the compiler, binutils and the kernel.
For AMD Hammer most of this has been done by SuSE already, so there is not much work left.
Ciao, Marcus
Sun, Alpha Mac &co (Score:2)
If the code runs on Sun, Mac OS X, x86 and Alpha, then there's a good chance it will run on Hammer or IA-64 without any significant changes (if any).
You're missing the real need (Score:3, Funny)
Not only that Anonymous Coward, but with the amount of posts you make to
I remember... (Score:5, Funny)
I remember when 48K was considered overkill because you couldn't fill it
I remember when 360k was enough for software and data
I remember when I got a 20 meg hd for my XT "just in case"
I remember when I didn't wear these damn Depends. NURSE!!!
that 1U Newisys system (Score:2)
Don't expect a 64 bit OS to be 2x faster! ;) (Score:2)
IANAKE. (I am not a kernel expert, but this is my understanding of the situation.)
Sun incrementally worked its way up to 64 bits in the operating system. I believe first they offered 64 bit OS calls, then later moved the OS itself to 64 bits. Solaris 7 was, at least, the most visible transition, when you had a choice of installing a 32 bit OS, or a 64 bit OS.
What will surprise some people (and be intuitive to others) is that many applications actually ran a bit SLOWER with the OS in 64 bit mode. What? Yup. And for good reason, too.
The problem was that you had the overhead of a 64 bit operating system to run 32 bit applications. More overhead means less application performance. More work was required to do the same tasks.
And many applications are hard pressed to take advantage of 64 bit features. It's like putting a hot-rod engine into your daddy's Oldsmobile and keeping the original tranny. But yes, it works.
Mind you, there are applications which can take some more advantage of 64 bits, and the future in operating systems isn't 32 bits. So it is still good to have an operating system go that direction. It is just that for most people, there isn't a big WOW FACTOR when you go 64.
don't diss oldsmobile (Score:1)
> daddy's Oldsmobile and keeping the original
> tranny. But yes, it works.
That would probably be just fine. They made pretty good trannies back then [musclecarplanet.com]
Re:Don't expect a 64 bit OS to be 2x faster! ;) (Score:1)
I guess that's a bit like running Win9x applications under WinNT/2K/XP - every string in every API call gets converted back and forth between Unicode.
Why should we support AMD again? (Score:3, Informative)
Here's what AMD is really thinking
Come on, people, really. Don't support AMD. They are not the noble David against the nasty Goliath. They are just as much a nasty Goliath themselves, except for the fact that they don't have much market share... But they sure are acting like they do. If AMD and Intel keep pushing their 'Trusted Computing' wheelbarrow, I swear I will buy an underpowered Transmeta or even a fucking Macintosh just to avoid Palladium.
Re:Why should we support AMD again? (Score:2)
i think we should be mad at AMD because they want to make money. let's also forget the effort to make x86-64 accessible to open-source.
AMD realizes that the vast majority of the processors they sell end up running windows. it really wouldn't make sense for them to make something that would not support future versions of windows. it can still run a non-TCPA OS (palladium is just the microsoft version of TCPA). but damn them for wanting to stay in business.
so you run intel now? that's hardly taking a stand.
it might also be that people like the features of the upcoming opterons. they can always just load whatever OS they want.
i realize this is just troll food, but let's try to have fewer meaningless boycotts
Re:SuSE vs RedHat (Score:1)
1. is a story about the redhat distribution
2. is a story about AMD's processor
IMHO the icon choice is correct!
Re:SuSE vs RedHat (Score:1)
Re:The irrelevant chasing the uneatable (Score:1, Offtopic)
Re:proprietart (Score:2, Interesting)
we're stuck in 32-bit hell for the rest of our days, and redhat turns into the new M$ -- evil and aggressive...
oh... wait... it's linux... move along people... nothing to read here...