Linux Clustering

An anonymous reader writes "Beowulf clustering turns 10 years old, and, in this interview, creator Donald Becker talks about how Beowulf can handle high-end computing on a par with supercomputers."
This discussion has been archived. No new comments can be posted.

  • news? (Score:5, Insightful)

    by dan2550 ( 663103 ) on Monday September 13, 2004 @03:48PM (#10238962) Homepage
    I don't mean to sound like a troll or anything, but is this really news? Over the last year or so, (nearly) all of the articles on /. about fast computers have been about clusters.
  • by corvair2k1 ( 658439 ) on Monday September 13, 2004 @03:54PM (#10239051)
    ...can be simple. The more complex a problem gets, the more likely you need one supercomputer as opposed to a cluster. It's not elitism, it's just that the problem will probably require a lot of communication between processors.

    Any networking solution between computers will never be as fast as a hard-wired bus. If a lot of communication between nodes is required, you will spend more time waiting than computing, which shoots efficiency to hell.
  • by monoi ( 811392 ) on Monday September 13, 2004 @04:06PM (#10239193)

    The more complex a problem gets, the more likely you need one supercomputer as opposed to a cluster.

    I'm not sure it is that simple. For some problems (e.g. Monte Carlo [wikipedia.org] simulations), a more complex problem means more individual nodes are required, with very little inter-node communication. For other kinds of problems (finite element methods, maybe?), you're probably right.

    In other words, the physical structure of the solution depends on the kinds of algorithms that you intend to run: there's not just one 'correct' answer. [A Monte Carlo sketch illustrating the low-communication case appears after the comments.]

  • Re:BlueGene (Score:5, Insightful)

    by jamesdood ( 468240 ) on Monday September 13, 2004 @04:17PM (#10239303)
    Since I administer a fairly large cluster, I can say that the answer is "it depends" (of course, that is ALWAYS the answer!). It depends on the codes being run, and it depends on the interconnect optimization (yes, Myrinet is fast, but the real key is that it has much lower latency, and this has to be engineered carefully if you are using more than one switch). My cluster runs both Myrinet and Gig/E; some codes run well on the Ethernet interfaces (take codes like mpiBLAST, for instance) while others (NAMD comes to mind) run faster on the Myrinet. [A ping-pong latency sketch appears after the comments.] This machine may be fast, but I have some large SMP boxes (IBM p-series) that, cycle for cycle, SMOKE the performance of the x86 boxes. But you have to remember that the cluster computers cost about $3,000 per node, while the SMP boxes with a similar config cost about $13,000 apiece, and even more if you want a box that supports more than 8 CPUs (think $1 million and up).
    So once again, it comes down to the types of jobs, and how much you are willing to pay to get those jobs done in a hurry! A cluster is still great: I have just completed some jobs that consumed over 12 years of CPU time in one week of wall-clock time!
  • Re:BlueGene (Score:3, Insightful)

    by Christopher Thomas ( 11717 ) on Monday September 13, 2004 @04:24PM (#10239374)
    All this sounds good and interesting, and Becker did a tremendous amount of development in this field. But I was just wondering: what about supercomputers like BlueGene/L, which have very fast interconnects? Many supercomputers/distributed systems run MPI-based programs, and such programs need a lot of interprocess communication. Does anyone know how well these run on a Beowulf cluster?

    Anywhere from "terrible" to "almost not bad", depending on how much you're willing to pay for the interconnect network. The point of Beowulf-style clustering is low cost/node, allowing scientific computing to be done with commodity hardware (unheard-of at the time). While using something like Myrinet instead of Ethernet, and careful topology layout, can bring you to the "almost doesn't suck" stage, you'll still suffer heavily in communications-bound problems.

    Fortunately, there are many interesting problems with low enough communications load to make commodity technology based clusters very, very useful.
  • Why Beowulf? (Score:3, Insightful)

    by Trogre ( 513942 ) on Monday September 13, 2004 @05:24PM (#10240022) Homepage
    If you maintain a group of networked but otherwise independent computers, for example a student lab or an office farm, consider deploying something like PVM or MPI. It's a great way to get some use out of those idle cycles.

    PVM at least scales incredibly well: 25 machines rendering a povray scene take just a fraction over 1/25 of the time taken to render it on one machine. I haven't tested MPI yet. [A frame-farm sketch appears after the comments.]
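
As a footnote to monoi's comment above, here is a minimal sketch of the kind of embarrassingly parallel Monte Carlo job that clusters handle well. It is an illustration only (the sample count and seeding are made up, not anything from the interview), written against the standard MPI C API: each rank works independently, and the single MPI_Reduce at the end is the only inter-node communication.

    /* Hypothetical Monte Carlo pi estimate: each rank throws darts at the
     * unit square on its own, then one MPI_Reduce sums the hit counts.
     * Compile with mpicc, run with e.g. "mpirun -np 8 ./mcpi". */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        long i, local_n = 1000000, local_hits = 0, total_hits = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        srand(rank + 1);                  /* a different stream per rank */
        for (i = 0; i < local_n; i++) {
            double x = rand() / (double)RAND_MAX;
            double y = rand() / (double)RAND_MAX;
            if (x * x + y * y <= 1.0)
                local_hits++;
        }

        /* the only communication step in the whole job */
        MPI_Reduce(&local_hits, &total_hits, 1, MPI_LONG, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("pi is roughly %f\n",
                   4.0 * total_hits / (double)(local_n * size));

        MPI_Finalize();
        return 0;
    }

Because the nodes never talk to each other until the final reduction, plain Ethernet is more than fast enough here; this is the low-communication end of the spectrum the comments describe.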
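
For the interconnect latency point raised by jamesdood and Christopher Thomas, the usual quick check is a ping-pong benchmark. The sketch below is not from the interview or the comments; it is a simple MPI C illustration that times small messages bounced between two ranks, which is roughly how one would compare a Gig/E interface against Myrinet on the same pair of nodes.

    /* Hypothetical ping-pong latency test between rank 0 and rank 1.
     * Run with exactly two ranks, once per interconnect, and compare. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, i, iters = 10000;
        char byte = 0;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("average one-way latency: %.1f us\n",
                   (t1 - t0) / (2.0 * iters) * 1e6);

        MPI_Finalize();
        return 0;
    }

If the code you care about spends most of its time inside calls like these, the low-latency interconnect or the SMP box wins; if it looks like the Monte Carlo sketch above, cheap Ethernet nodes are fine.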
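
Finally, for Trogre's render-farm example, the sketch below is one hypothetical way to spread POV-Ray frames across lab machines with MPI instead of PVM. The scene file name, frame count, and command-line options are made up for illustration; each rank simply renders every size-th frame, so there is no communication at all while rendering, which is why the speedup comes out nearly linear.

    /* Hypothetical frame farm: rank r of N renders frames r, r+N, r+2N, ...
     * by shelling out to povray.  File names and options are illustrative. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size, frame, nframes = 250;
        char cmd[256];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* static round-robin split: no inter-node traffic while rendering */
        for (frame = rank; frame < nframes; frame += size) {
            snprintf(cmd, sizeof(cmd),
                     "povray +Iscene.pov +Oframe%03d.png +K%f",
                     frame, frame / (double)nframes);
            system(cmd);
        }

        MPI_Finalize();
        return 0;
    }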
