North America's Fastest Linux Cluster Constructed
SeanAhern writes "LinuxWorld reports that 'A Linux cluster deployed at Lawrence Livermore National Laboratory and codenamed 'Thunder' yesterday delivered 19.94 teraflops of sustained performance, making it the most powerful computer in North America - and the second fastest on Earth.'" Thunder sports 4,096 Itanium 2 processors in 1,024 nodes, some big iron by any standard.
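As a back-of-the-envelope check on that figure, here is a minimal sketch in C. The CPU count and the sustained number come from the article; the 1.4 GHz clock and four floating-point operations per cycle are assumptions (the widely reported specs for Thunder's Itanium 2 parts), under which the sustained result works out to roughly 87% of theoretical peak.

/* Back-of-the-envelope peak-vs-sustained check for Thunder.
 * Assumptions (not from the article): 1.4 GHz Itanium 2,
 * 4 floating-point ops per cycle (2 FMA units x 2 flops each). */
#include <stdio.h>

int main(void)
{
    const double cpus            = 4096;    /* from the article */
    const double clock_hz        = 1.4e9;   /* assumed */
    const double flops_per_cycle = 4;       /* assumed */
    const double sustained_tf    = 19.94;   /* from the article */

    double peak_tf = cpus * clock_hz * flops_per_cycle / 1e12;
    printf("theoretical peak: %.2f TF\n", peak_tf);
    printf("sustained:        %.2f TF (%.0f%% of peak)\n",
           sustained_tf, 100.0 * sustained_tf / peak_tf);
    return 0;
}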
Google Cache (Score:2, Informative)
Re:Whoa. (Score:4, Informative)
Re:vs google (Score:5, Informative)
The GFS article that appeared a while back said they used standard 100 Mbit Ethernet; that is not going to get you a good score in any supercomputer benchmark.
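A rough illustration of why, in C; all the latency and bandwidth figures below are ballpark assumptions for the two classes of interconnect, not measurements. On a benchmark-style message exchange, Fast Ethernet loses by more than an order of magnitude.

/* Rough cost of one 1 MB message exchange on 100 Mbit Ethernet
 * vs a low-latency cluster interconnect. All figures are
 * ballpark assumptions for illustration. */
#include <stdio.h>

static double xfer_us(double bytes, double latency_us, double bw_bytes_per_s)
{
    return latency_us + 1e6 * bytes / bw_bytes_per_s;
}

int main(void)
{
    double msg = 1 << 20;  /* 1 MB message */
    /* Fast Ethernet: ~100 us latency, ~12.5 MB/s (assumed) */
    printf("100 Mbit Ethernet: %8.0f us\n", xfer_us(msg, 100.0, 12.5e6));
    /* Quadrics-class link: ~5 us latency, ~300 MB/s (assumed) */
    printf("Quadrics-class:    %8.0f us\n", xfer_us(msg, 5.0, 300e6));
    return 0;
}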
Wow (Score:2, Informative)
Re:The way I see it... (Score:1, Informative)
Re:It's all about sticking it to the mac. (Score:3, Informative)
Re:The way I see it... (Score:2, Informative)
That's a wildly inaccurate summary of the landscape of RDBMS clustering technology.
Problem is, that's not what we are talking about here.
So the answer to your question at this end is almost certainly "none of the above", or probably more correctly "some bits of all of the above". Functionally, most of the kind of stuff you do here doesn't need shared concurrent access to the same data files; however, for simplicity of implementation, they probably run GPFS anyway so that all nodes can see all files.
Re:The way I see it... (Score:2, Informative)
Re:Very great and all... (Score:5, Informative)
Compared to a Xeon or Athlon MP cluster, the Itanium fared poorly in price/performance. The only reasons to use Itaniums were if you needed 64 bits for more than 4GB of memory, or needed high single-CPU performance for a poorly parallelized application. (Of course, if your application parallelizes poorly, a cluster is probably a bad choice to begin with.) Then the Opteron came out and changed all that. It's 64 bits, it's fast, and it's a fraction of the price of the Itanium 2.
I just purchased a new Beowulf cluster. The decision was between Xeons and Opterons. The Opterons had better price/performance, but the Xeons would fit in better with our existing Pentium 3 Beowulf, other ia32 servers, and existing software. In the end, we went with Opterons. Itanium 2 was never even in contention; just one look at the price and performance of an Itanium 2 system was all it took to cross it off the list.
Re:"Most" powerful (Score:5, Informative)
According to Quadrics' latest price list, the cards are $1200 each, $913 per port for a 64-node switch, and $185-$265 for a cable. That's about $2300/node.
Myrinet cards are $595, the switch is $400 per port for 64 nodes, and the cables are ~$50. That's $1050/node.
Quadrics' price for a 1024-node interconnect is $4,176,094. That's hardly chump change. The bandwidth is about 10x higher than gigabit Ethernet, and the latency about 100x lower.
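Working through the poster's numbers in C (the only assumption added is taking the midpoint of the quoted cable range), the per-node figures check out, and the 1024-node quote shows the per-node cost rising well above the 64-node case, as you'd expect when a big fabric needs extra switch stages:

/* Reproducing the poster's interconnect cost arithmetic. */
#include <stdio.h>

int main(void)
{
    /* Quadrics, 64-node switch (prices from the post) */
    double q_card = 1200, q_port = 913, q_cable = 225; /* midpoint of $185-$265 */
    printf("Quadrics, 64 nodes:   $%.0f/node\n", q_card + q_port + q_cable);

    /* Myrinet, 64-node switch */
    double m_card = 595, m_port = 400, m_cable = 50;
    printf("Myrinet, 64 nodes:    $%.0f/node\n", m_card + m_port + m_cable);

    /* Quadrics quote for a full 1024-node fabric */
    printf("Quadrics, 1024 nodes: $%.0f/node\n", 4176094.0 / 1024);
    return 0;
}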
Re:Very great and all... (Score:2, Informative)
Nope. But they can't do what google does either. (Score:3, Informative)
You have several types of clusters, each designed to do a specific task, although you can easily mix and match for different purposes.
1. Server clusters. Bunches of machines running together, providing services that complement each other.
For example, you have a file server that is mirrored to another that is hooked up to a different part of a LAN/WAN backbone in order to improve service. Lots of databases are clustered like this.
2. High availability clusters.
You have machines that are backups of other machines. If one machine fails, a backup is activated instantly and replaces the failed machine without ANY loss of service.
Sort of like a RAID hard drive setup. Hot-swappable computers, that sort of thing.
Google is the first two types. It has several clusters with nodes, each node made up of a few computers; if a node fails, another can back it up instantly, giving the techs time to fix the issue properly. The computers each take some of the burden, too, so it seems as though they must be running mega-machines to provide the performance, when in reality they just run a bunch of PC-style computers.
3. Computational clusters. Clusters that are designed to pool their resources into a single big computer that is used to process large amounts of data and intense mathematical functions.
Two types of these are Beowulf clusters and OpenMosix clusters.
An OpenMosix cluster is easy to set up if you're a little bit familiar with Linux; there are even Knoppix cluster CD-ROMs that let you build one quickly and easily.
Beowulf is used for big number crunching, and programs that use it are generally written to run on a specific cluster, although the libraries and tools (MPI, for instance) are portable; there's a minimal sketch of this style of programming after this list.
Used a lot in astronomy, for example. 10-12 PCs in a college lab can make a nice number-crunching machine.
There are some clusters that do all three; lots can do only one or two of the types easily. Different types can complement each other.
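To make type 3 concrete, here is a minimal sketch of the work-splitting a Beowulf-style code does. The MPI calls are the standard ones; the problem itself (summing a range of integers) is just a stand-in for real work.

/* Minimal Beowulf-style work splitting with MPI: each node sums
 * its slice of a range, and rank 0 collects the total.
 * Build with mpicc, run with mpirun. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Split 0..N-1 across the nodes; N is arbitrary for the example. */
    const long N = 100000000L;
    long lo = rank * (N / size);
    long hi = (rank == size - 1) ? N : lo + N / size;

    double local = 0.0, total = 0.0;
    for (long i = lo; i < hi; i++)
        local += (double)i;

    /* Combine the partial sums on rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %.0f (on %d nodes)\n", total, size);

    MPI_Finalize();
    return 0;
}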
Re:Very great and all... (Score:3, Informative)
Re:Very great and all... (Score:3, Informative)
Some reps from SGI came to my LUG [golum.org] the other day and talked about their clusters and supercomputers. The guy doing the Q&A said that he personally liked the Opterons and x86-64, and that the Opterons were fast, but for what SGI does they preferred Itanium. The Opteron has its memory controller embedded in the chip itself, which is great for 1, 2, or even 8 processors. However, when you go up to a 512-processor single-system-image supercomputer like SGI's Altix, a lot of the memory controller work is done in the switches or otherwise off-chip. Itanium allowed for more flexibility in how they did memory controllers, because it doesn't have an on-chip one.
There were some other reasons too, like having more registers, etc., that made SGI choose Itanium over Opteron. I don't know how applicable they are to this situation, as this doesn't seem to be an SSI supercomputer.
Re:Very great and all... (Score:4, Informative)
Re:before everyone starts shouting at once... (Score:5, Informative)
I just coded some IA-64 assembly and from what I've seen, this comment is dead-on. They've got a lot of interesting features (wide issue, lots of execution units, rotating registers, and so on).
If you just have a simple sequence of operations, each dependent on the one before, you can't really take advantage of these capabilities. (My code was like this. Even though performance wasn't my reason for writing assembly, it was a little disappointing that I couldn't play with the new toys.) If you're expecting these features to make Word start faster, you'll probably be disappointed.
But if you're doing intensive computations in a tight loop, you can do amazing things. If you can get all the execution units working simultaneously, it will fly, and features like rotating registers are designed to make that possible. You need a very good compiler or a very smart person to hand-tune it. You may need to recompile to retune if your memory latency changes (affecting how many iterations to run at once) or if they come out with a new chip with more execution units. But in a situation like this, none of that is a problem. They'll have applications designed to run as fast as possible on this machine; they may never be run anywhere else.
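Two compilable C functions that illustrate the distinction (the instruction bundling itself happens in the compiler, so this only shows the shape of the source code): the first loop has independent iterations that a good IA-64 compiler can software-pipeline across the functional units; the second is the kind of dependent chain described above, where no amount of issue width helps.

/* Two loops with similar operation counts but very different
 * behavior on a wide EPIC core like the Itanium 2. */
#include <stddef.h>

/* Independent iterations: the compiler can software-pipeline this,
 * using rotating registers to keep several iterations in flight
 * and all the FP units busy. */
void axpy(double *y, const double *x, double a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Each step depends on the previous result: issue width and
 * rotating registers can't hide the serial dependency chain. */
double chain(double x0, double a, double b, size_t n)
{
    double x = x0;
    for (size_t i = 0; i < n; i++)
        x = a * x + b;   /* must finish before the next step starts */
    return x;
}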
Re:LLNL's usefulness (Score:3, Informative)
That's thermodynamics. It's true for any fuel. It's even true for oil and nuclear energy - the difference being only that the energy wasn't put in during our lifetime. (And in the case of nuclear, that the pre-existing energy is all but inexhaustible.)
Re:Very great and all... (Score:5, Informative)
The Opteron 248 is $670 on pricewatch, while the 1.5 GHz It2 is $5200! The motherboards are like $1400 vs $400.
You have to keep in mind that this isn't a single machine, it's a cluster. You could take the money spent on an Itanium 2 cluster and buy an Opteron cluster with five times as many processors. I am well aware that one does not get perfect scaling, but if you are running something on a cluster in the first place, I have a hard time imagining something that is faster with one fifth as many processors, each 27% faster. Yes, there are codes that would be faster on 1000 Itanium 2s than on 5000 Opterons, but you would never run those on a cluster, because they would be faster still on a shared-memory system.
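To put a number on that argument, here is a sketch under Amdahl's law in C, taking the poster's 27% per-CPU advantage at face value and treating the code's serial fraction as the free parameter; everything else is hypothetical.

/* Amdahl's-law sanity check of the 1024-Itanium vs 5120-Opteron
 * argument. Speeds and serial fractions are assumptions. */
#include <stdio.h>

static double speedup(double serial_frac, double nprocs, double per_cpu_speed)
{
    /* Throughput relative to one Itanium 2, for a code whose
     * serial_frac portion cannot be parallelized. */
    return per_cpu_speed / (serial_frac + (1.0 - serial_frac) / nprocs);
}

int main(void)
{
    double fracs[] = { 0.0, 0.001, 0.01 };  /* serial fraction of the code */
    for (int i = 0; i < 3; i++) {
        double f = fracs[i];
        double itanium = speedup(f, 1024, 1.00);       /* 27% faster per CPU */
        double opteron = speedup(f, 5120, 1.0 / 1.27);
        printf("serial %.1f%%: 1024xIt2 = %6.0f, 5120xOpteron = %6.0f\n",
               100 * f, itanium, opteron);
    }
    return 0;
}

Under these assumptions the bigger Opteron cluster wins easily for well-parallelized codes, but at a 1% serial fraction the smaller Itanium cluster pulls ahead, which is exactly the kind of code the poster says belongs on a shared-memory machine rather than a cluster anyway.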
Re:Very great and all... (Score:4, Informative)
There is a limit to how much you can effectively parallelize many problems. If that limit is 1, then you need a Cray or something.
Well, Crays are also parallel computers, so they won't help you much in this situation. Some Crays do have vector processors, but that is also a sort of parallelism. It's just that you use that parallelism through tuned BLAS libraries or with a vectorizing compiler (e.g., Fortran 95, HPF and such things), instead of doing it manually with MPI or threads or something like that. So if your problem is totally serial, a vector processor won't help you either.
(Or you can just take the Google route and let it fail and replace the whole box. But that really requires your whole application to be written to accommodate it.)
Not necessarily. Most supercomputers are not used to run a single job taking months; rather, they run lots of smaller and shorter jobs. On the p690 cluster where I do my stuff, I (and apparently most users) mostly run jobs using about 8-16 CPUs, with a runtime of a few hours to a day. If one node fails, the jobs executing on that node also fail. It's no big deal; just resubmit the job to the queue when you get around to it.
Of course, if you're programming one of the very few and far between applications that has a runtime of months, you certainly want to save intermediate results once in a while. Not only to guard against hardware failure, but also so that the user can check the intermediate results and see if the app is still on the right track. It would be quite a bummer to use months of CPU time only to realize the entire thing is wasted because you specified the initial values wrong.
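A minimal checkpoint/restart sketch in C; the state layout, filename, and interval are all made up for illustration, but the pattern (save periodically, resume from the last checkpoint on startup) is the standard one, and the checkpoint file doubles as the intermediate result the user can inspect.

/* Minimal checkpoint/restart sketch. The state struct, filename,
 * and interval are hypothetical. */
#include <stdio.h>

struct state { long iter; double value; };

static void save(const struct state *s)
{
    FILE *f = fopen("checkpoint.dat", "wb");
    if (f) { fwrite(s, sizeof *s, 1, f); fclose(f); }
}

static int load(struct state *s)
{
    FILE *f = fopen("checkpoint.dat", "rb");
    if (!f) return 0;
    int ok = fread(s, sizeof *s, 1, f) == 1;
    fclose(f);
    return ok;
}

int main(void)
{
    struct state s = { 0, 1.0 };
    if (load(&s))                      /* resume after a failure */
        printf("resuming at iteration %ld\n", s.iter);

    const long total = 1000000, interval = 10000;
    for (; s.iter < total; s.iter++) {
        s.value *= 1.0000001;          /* stand-in for real work */
        if (s.iter % interval == 0)
            save(&s);                  /* periodic checkpoint */
    }
    save(&s);
    printf("done: %f\n", s.value);
    return 0;
}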