Linux Clustering
An anonymous reader writes "Beowulf clustering turns 10 years old, and, in this interview, creator Donald Becker talks about how Beowulf can handle high-end computing on a par with supercomputers."
On par? Yes and no (Score:5, Informative)
Quote from the article: *snip!*
Re:On par? Yes and no (Score:2, Informative)
It's not just about speed and massively parallel (Score:5, Informative)
To be considered a "supercomputer," it also needs enough CONTIGUOUS MEMORY SPACE to hold the massive amounts of data associated with true "supercomputing." So far, no cluster has met that requirement.
Cluster Schedulers (Score:2, Informative)
http://gridengine.sunsource.net/
Otherwise known for... (Score:3, Informative)
Re:Just out of curiosity... (Score:1, Informative)
openMosix (Score:3, Informative)
Re:It's not just about speed and massively parallel (Score:2, Informative)
However, problems that are embarrassingly parallel can be handled by a cluster very adequately, for a fraction of the cost of a traditional supercomputer. I don't know that you can ignore this class of problems and say that clusters aren't "true 'supercomputing'".
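For the skeptics, here's a toy sketch of what "embarrassingly parallel" means in practice (my own example in C++/MPI, nothing from the article): every node crunches its own samples independently, and the only communication is a single reduction at the very end.

// Toy embarrassingly parallel job: Monte Carlo estimate of pi.
// Each rank works on its own samples; the only communication is
// one MPI_Reduce at the end. All sizes here are arbitrary.
#include <mpi.h>
#include <cstdio>
#include <random>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long samples_per_rank = 1000000;   // arbitrary choice
    std::mt19937 rng(12345 + rank);          // per-rank seed
    std::uniform_real_distribution<double> u(0.0, 1.0);

    long local_hits = 0;
    for (long i = 0; i < samples_per_rank; ++i) {
        double x = u(rng), y = u(rng);
        if (x * x + y * y <= 1.0) ++local_hits;
    }

    long total_hits = 0;
    MPI_Reduce(&local_hits, &total_hits, 1, MPI_LONG, MPI_SUM, 0,
               MPI_COMM_WORLD);
    if (rank == 0)
        std::printf("pi ~= %f\n",
                    4.0 * total_hits / (double)(samples_per_rank * size));
    MPI_Finalize();
    return 0;
}

Double the nodes and you roughly double the throughput, with no supercomputer-grade interconnect required.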
FUD, FUD, and wrong. Cray stockholder, eh? (Score:5, Informative)
Let's address this first: there are two common memory architectures, distributed memory (a cluster) and shared memory (a 'traditional' supercomputer). Each can emulate the other. Saying a cluster doesn't have enough memory, presumably at each node, is really saying: "I don't really understand message passing."
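To make the emulation point concrete, here's a minimal sketch (mine, nothing official) of distributed memory looking like one big shared array: each node owns a slice, and an MPI-2 one-sided MPI_Get reads any element regardless of which node physically holds it. The aggregate memory of the cluster, not any single node, is what bounds the dataset.

// Distributed memory emulating shared memory: one big logical
// array partitioned across ranks, readable from anywhere via
// MPI one-sided communication. Sizes are made up for illustration.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int slice = 4;                    // elements owned per rank
    std::vector<double> local(slice);
    for (int i = 0; i < slice; ++i)         // store the global index
        local[i] = rank * slice + i;

    MPI_Win win;
    MPI_Win_create(local.data(), slice * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    // Every rank reads global element 5, wherever it happens to live.
    int global = 5 % (slice * size);
    int owner = global / slice;
    MPI_Aint disp = global % slice;
    double value = 0.0;

    MPI_Win_fence(0, win);
    MPI_Get(&value, 1, MPI_DOUBLE, owner, disp, 1, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    std::printf("rank %d sees element %d = %g\n", rank, global, value);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}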
This would be more important if datasets were actually large. Unfortunately for your argument, they aren't. A handful of nodes will hold the whole simulation in memory easily (though it'd take years to run, because there are so few CPUs at work).
How would I know? Well, I work with the Center for Simulation of Advanced Rockets (aka CSAR) at UIUC, one of five DOE ASCI sites in the country. I manage their supercomputer, which is being upgraded from 200 P3-class dual-proc PCs to 640 dual-proc Xserve G5s. Before that I was a grad student working with them, albeit not on the CSAR simulation itself but on a related grant, the CPSD.
Now, there are computing problems which clusters aren't good at (or at least that's the traditional claim; my master's thesis and my advisor would seem to dispute that it's actually the case). However, most problems, as the interview says, run just fine on clusters. Physical simulations (which covers everything from CSAR's rockets to the national labs' nuclear weapons research to hurricane/weather simulation, all the way down to protein folding and atomic- and sub-atomic-scale crystal formation) need to know about what's in the area you're working on and what's in nearby areas.
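Here's a toy version of that nearest-neighbor pattern (my own illustration, with made-up sizes): a 1-D domain split across ranks, one ghost cell swapped with each neighbor, then a purely local stencil sweep. Communication scales with the surface of each piece, not its volume, which is exactly why these codes run so well on clusters.

// Toy halo exchange: each rank owns n interior cells plus two
// ghost cells, trades boundary values with its neighbors, then
// does a local 3-point relaxation sweep.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 8;                        // interior cells per rank
    std::vector<double> u(n + 2, rank);     // u[0] and u[n+1] are ghosts
    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    // Exchange boundary cells with neighbors; MPI_PROC_NULL at the
    // domain edges turns those transfers into no-ops.
    MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                 &u[n + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[n], 1, MPI_DOUBLE, right, 1,
                 &u[0], 1, MPI_DOUBLE, left, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // One relaxation sweep: each cell only needs its near neighbors.
    std::vector<double> next(u);
    for (int i = 1; i <= n; ++i)
        next[i] = (u[i - 1] + u[i] + u[i + 1]) / 3.0;

    if (rank == 0) std::printf("sweep done on %d ranks\n", size);
    MPI_Finalize();
    return 0;
}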
Occasionally you'll find an oddball like galactic simulation (or molecular dynamics) that needs to compute gravity across the whole universe. Fortunately we have multigrid methods and a friendly gravity equation to solve this problem: use real data from the bodies near you, and for those far away, average them and use that instead.
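A toy version of the near/far split, again my own sketch rather than real multigrid or tree code: exact 1/r^2 terms for particles within a cutoff, and one lumped center-of-mass term for everything beyond it. (Real codes split the far field into cells or octree nodes instead of a single lump; this just shows the idea.)

// Near particles contribute exactly; far particles get collapsed
// into a single averaged mass at their center of mass.
#include <cstdio>
#include <cmath>
#include <vector>

struct Particle { double x, m; };

double force_on(double x, const std::vector<Particle>& ps, double cutoff) {
    double f = 0.0, far_m = 0.0, far_mx = 0.0;
    for (const auto& p : ps) {
        double d = p.x - x;
        if (d == 0.0) continue;             // skip self-interaction
        if (std::fabs(d) < cutoff) {
            f += p.m / (d * std::fabs(d));  // exact signed 1/d^2 term
        } else {
            far_m  += p.m;                  // accumulate the far group
            far_mx += p.m * p.x;
        }
    }
    if (far_m > 0.0) {                      // one averaged far-field term
        double d = far_mx / far_m - x;
        f += far_m / (d * std::fabs(d));
    }
    return f;
}

int main() {
    // Two near bodies, two distant ones clustered around x = 50.
    std::vector<Particle> ps = {{0.0, 1}, {1.0, 1}, {50.0, 2}, {51.0, 2}};
    std::printf("force at x=0: %g\n", force_on(0.0, ps, 10.0));
    return 0;
}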
Then of course there's the idea that even "traditional" supercomputer problems that don't run well on clusters can be run efficiently on clusters IF you move beyond 1 process per CPU. Load up 10, 20, 100, 1000 little workers on a processor. Get fast context switching between them (not OS level!). Use message passing rather than shared memory (locking, ick!) to communicate. One worker blocked waiting for network data? Process the next one! If you've tuned things right you'll find you always have work to do.
Sounds crazy? Supercomputing '02 didn't think so: http://charm.cs.uiuc.edu/research/moldyn/
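For the curious, here's roughly what that looks like (my own sketch; this is not the Charm++ API): a pile of tiny workers per CPU and a scheduler that only runs a worker when a message for it has actually arrived, so nothing ever blocks on the network.

// Message-driven overdecomposition in miniature: far more workers
// than CPUs, advanced one delivered message at a time by a
// user-level scheduler loop instead of OS threads and locks.
#include <cstdio>
#include <deque>
#include <vector>

struct Message { int worker; double payload; };

struct Worker {
    double sum = 0.0;
    int received = 0;
    void handle(const Message& m) {          // a cheap, non-blocking step
        sum += m.payload;
        ++received;
    }
};

int main() {
    const int nworkers = 100;                // many workers per CPU
    std::vector<Worker> workers(nworkers);
    std::deque<Message> inbox;               // stands in for the network

    for (int i = 0; i < nworkers * 10; ++i)  // fake incoming traffic
        inbox.push_back({i % nworkers, 1.0});

    // Scheduler loop: pop whatever message arrived next and run the
    // worker it belongs to. A worker still waiting on data simply
    // isn't scheduled; someone else's message keeps the CPU busy.
    while (!inbox.empty()) {
        Message m = inbox.front();
        inbox.pop_front();
        workers[m.worker].handle(m);
    }

    std::printf("worker 0 handled %d messages, sum=%g\n",
                workers[0].received, workers[0].sum);
    return 0;
}

In a real system the inbox is fed by the network and the handle() steps are your numerical kernels; the point is that the scheduler, not the OS, decides who runs next.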
clusterknoppix (Score:2, Informative)
Re:Just out of curiosity... (Score:3, Informative)
http://www.rocksclusters.org/
Quite a few people have built Rocks clusters out of a bunch of old computers.
Disclaimer: I work with the folks who created this.
SearchEnterpriseLinux (Score:2, Informative)