Linux Software Technology

Linux Clustering 154

An anonymous reader writes "Beowulf clustering turns 10 years old, and, in this interview, creator Donald Becker talks about how Beowulf can handle high-end computing on a par with supercomputers."
This discussion has been archived. No new comments can be posted.

  • by FortKnox ( 169099 ) on Monday September 13, 2004 @02:48PM (#10238959) Homepage Journal
    ... the number of replies that will start with the same subject header as mine and not be funny at all?

    I sure can!
    • by Anonymous Coward
      Not really (well apart from your post of course), tell me, how would it go?

      "Hey, can you imagine a beowulf cluster of er... beowulf cluters?"

      Doesn't quite work, you see.
    • In honor of the Beatles/Apple topic a while ago, I was inspired to alter the heading slightly (sorry).

      Imagine all the clusters
      It's easy if you try
      With Linux on them
      Computing Particles in Sky

      Imagine all Beowulf
      Crunching in harmony (AhIahahah...)
    • i want you to know that i seriously thought about doing that and having had the same thought, decided against it.

      now, imagine a beowulf cluster of slashdotters having that same thought... :>

      ed
    • And I can already see the beowulf cluster of responses like yours ;)
    • by Sean Johnson ( 66456 ) on Monday September 13, 2004 @04:39PM (#10240210)
      - Can you imagine a Beowulf cluster of these?
      - How long until the RIAA sues them into oblivion once they find out how many MP3s you can put on one?
      - "Can you put Linux on it?" or "Yes, but will it run Linux?"
      - "Yeah, but does it run Doom3?" or "And it still won't run Doom3."
      - Any comment regarding "Duke Nukem Forever" taking literally 'forever' and being termed 'vaporware'.
      - I am not buying one until they support ".ogg".
      - I for one welcome our new (insert name of company mentioned in post or story) overlords.
      - "George Lucas raped my childhood" or "Greedo shoots first" comments on any story incorporating the Star Wars franchise.
      - A comment including these 3 components in any order: Natalie Portman, naked and petrified, hot grits, one's pants.
      - Microsoft = Evil, MPAA = Evil, RIAA = Evil; with anything else incorporated to try and fit those equations into the topic at hand
      - Some type of reference to the size of one's ProN collection, the amount of ProN that can be stored on the gadget or technology in question, or the ProN industry itself being the first to make good use of the new technology or gadget in question (ergo: the ProN industry drives technology)
      - The posted cliché being self-described as an "obligatory" post in the heading area if that particular cliché had not been addressed yet by previous slashdotters. (e.g. "obligatory Beowulf cluster comment")
      - Post revealing the fact that the story's homepage had been slashdotted already, culminating in another post later on with the homepage story itself copied & pasted verbatim (often with a subsequent post purporting that this is karma whoring, even though the poster admits it is indeed helpful anyway).
      - Remark on how many LOCs (Libraries of Congress) can fit on some new storage advancement, or any other remark noting that this can be an actual valid unit of data storage measurement.
      - A variation on the Zero Wing video game intro dialogue and its broken English translation: "Someone set up us the base....we have every ZIG, make your time".....blah, blah, blah.
      - Very soon lists such as this will be clichés as well.
      - Similar and additional clichés may be found here: http://en.wikipedia.org/wiki/Slashdot_subculture
      • - Can you imagine a Beowulf cluster of these?
        - How long until the RIAA sues them into oblivion once they find out how many MP3s you can put on one?
        - "Can you put Linux on it?" or "Yes, but will it run Linux?"
        - "Yeah, but does it run Doom3?" or "And it still won't run Doom3."
        - Any comment regarding "Duke Nukem Forever" taking literally 'forever' and being termed 'vaporware'.
        - I am not buying one until they support ".ogg".
        - I for one welcome our new (insert name of company mentioned in post or story) overlord
      • - A comment including these 3 components in any order: Natalie Portman, naked and petrified, hot grits, one's pants.
        Actually, the Natalie Portman and Hot Grits memes were not originally connected (although they did seem to arise at about the same time...)
      • ...and dont forget:
        BSD is dying!!
      • It's p-r-ZERO-n.

        pr0n.
  • news? (Score:5, Insightful)

    by dan2550 ( 663103 ) on Monday September 13, 2004 @02:48PM (#10238962) Homepage
    I don't mean to sound like a troll or anything, but is this really news? Over the last year or so, (nearly) all of the articles on /. about fast computers have been about clusters.
    • Re:news? (Score:3, Funny)

      by strictfoo ( 805322 )
      Great news too: creator says creation is really good!

      In other news: "Ford says their cars are just as good as BMW's and Emachine states their computers rival Apple's"
      • Re:news? (Score:1, Offtopic)

        by EvilAlien ( 133134 )
        I'm sure Emachines do rival Apple, at least in terms of desktop penetration.

        I'm also sure that Ford kicks BMW's ass in terms of popularity, at least in the USA.

        I'm going to have to work something about the prevalence of bad taste into my Stupid People Theory.

        • And quality? (Score:1, Offtopic)

          by Gordonjcp ( 186804 )
          Although to be honest, any BMW made since the mid-90s has been an utter piece of crap. Not even BMW garages will touch them. The 735i with that fucking stupid iDrive thing just takes the Chocolate Homewheat, it really does. What a bag of shit.
          • Re:And quality? (Score:1, Offtopic)

            by strictfoo ( 805322 )
            The 735i with that fucking stupid iDrive thing

            that's the 745i, not the 735i. I agree with you that it's a piece

            Although to be honest, any BMW made since the mid-90s has been an utter piece of crap.

            I know the new thing is to bash BMW and Mercedes (well, not really new, but gaining popularity). That's fine. I never was a big BMW fan either until I got to drive one a lot.

            And, having spent a lot of time driving both a 1999 and a 2003 BMW 540i, I would have to wholeheartedly disagree with you.

            0-60 in 5.9
            • The ride quality is abysmal. The least little bump or ripple in the road is transmitted through the rock-hard suspension direct to your arse, making driving at any speed above a walking pace feel like a trip on a malfunctioning rollercoaster. No sense having that kind of speed if you can't use it because the car won't stay on the road when you hit bumps.

              Maybe I'm just biased, because I've only really driven big old Citroens with the hydraulic suspension for the past five years. You can run over speed bu
    • Clusters are the slowest computers available...

      If your metric is moving around data, as opposed to how many no-ops you can do a second while waiting for your data to get there.
    • Clusters are power for the people.
      It's something that simple people, without a white coat, a microscope, and a 500 million budget, can work on and make better.
      Undoubtedly, many of the advances made for clustering will be used in many other aspects of computing. Even supercomputers will benefit from them.
  • imagine (Score:3, Funny)

    by Anonymous Coward on Monday September 13, 2004 @02:49PM (#10238972)
    awww, fuggit
  • by KrackHouse ( 628313 ) on Monday September 13, 2004 @02:50PM (#10238983) Homepage
    I'm picturing the ten candles on the Wolf-cake in close proximity with frosting interconnects and one big flame in the middle.
  • by crawdaddy ( 344241 ) on Monday September 13, 2004 @02:50PM (#10238996)
    Happy Anniversary to the most over-used joke on Slashdot. I'll be wearing my tin-foil hat all day to commemorate it. (The 10th anniversary is the aluminum/tin anniversary)
  • BlueGene (Score:4, Interesting)

    by a3217055 ( 768293 ) on Monday September 13, 2004 @02:52PM (#10239021)
    All this sounds good and interesting, and Becker did a tremendous amount of development in this field. But I was just wondering: what about supercomputers like BlueGene/L, which have very fast interconnects? Many supercomputers/distributed systems run MPI-based programs, and such programs need a lot of interprocess communication. Does anyone know how good these are on a Beowulf cluster? Thanks, a3217055. They said that of all the kings upon the earth he was the man most gracious and fair-minded, kindest to his people and keenest to win fame. - The Geats' tribute to Beowulf after his death.
    • Re:BlueGene (Score:5, Insightful)

      by jamesdood ( 468240 ) on Monday September 13, 2004 @03:17PM (#10239303)
      Since I administer a fairly large cluster, I can say that the answer is "It depends" (of course, that is ALWAYS the answer!). It depends on the codes being run, and it depends on the interconnect optimization (yes, Myrinet is fast, but the real key is that it has much lower latency, and this has to be engineered carefully if you're using more than one switch). My cluster runs both Myrinet and Gig/E; some codes run well on the ethernet interfaces (take codes like mpiBLAST, for instance) while others (NAMD comes to mind) run faster on the Myrinet. This machine may be fast, but I have some large SMP boxes (IBM P-series) that, cycle for cycle, SMOKE the performance of the x86 boxes. But you have to remember that the cluster computers cost about $3,000/node while the SMP boxes with a similar config cost about $13,000 apiece, and even more if you want a box that supports more than 8 CPUs (think 1 million and up).
      So once again, it comes down to the types of jobs, and how much you are willing to pay to get those jobs done in a hurry! A cluster is still great; I have just completed some jobs that consumed over 12 years of CPU time in 1 week of wall-clock time!
      • I'd mod the parent up if it weren't at 5 already :)

        He's pretty much dead on the money there. "Beowulf" in the strictest sense doesn't have Myrinet, though, only commodity parts like 100 or 1000 Mb Ethernet. In these configurations, any latency-bound application will be horrible (typically fine-grained parallelism: lots of messages, typically small, being transferred). The latency of 1GbE vs 100MbE isn't that much different, and both are an order of magnitude or more slower than Myrinet or any of the high
    • Re:BlueGene (Score:3, Insightful)

      All this sounds good and interesting, and Becker did a tremendous amount of development in this field. But I was just wondering: what about supercomputers like BlueGene/L, which have very fast interconnects? Many supercomputers/distributed systems run MPI-based programs, and such programs need a lot of interprocess communication. Does anyone know how good these are on a Beowulf cluster?

      Anywhere from "terrible" to "almost not bad", depending on how much you're willing to pay for the interconnect netwo
    • Blue Gene is an extremely clever design in that it uses several interconnect networks all at once. The main memory-memory interconnect is a packetized load-store interconnect arranged in a 3D mesh. Each node also has an ethernet tap for the management network, and a very wide tree network for all-reduce calls. They built their networks with MPI in mind.

      The difference between a commodity cluster and something like blue-gene is only a half-step. The codes that run well on blue-gene are MORE like the codes ru
      • The codes that run well on blue-gene are MORE like the codes run on clusters than those on a traditional vector super.

        And if you code your application for MPI you can debug/test/optimize it on a cheap cluster. THEN when you start running into communication latency and problems too large to be solved on commodity hardware you can recompile your code on big(ger) iron, like Blue Gene/L.

        Paul B.
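
    A minimal sketch of the point above, in C with MPI (a hypothetical example, not code from the interview or this thread): each rank computes a partial result and a single collective combines them. Nothing in the source is tied to the machine it runs on, which is why the same program can be debugged on a few Beowulf nodes and recompiled unchanged for bigger iron.

      /* Hypothetical illustration: sum per-rank partial results with MPI_Allreduce. */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          int rank, size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          double local = (double)rank + 1.0;   /* stand-in for a real per-node result */
          double total = 0.0;

          /* One collective call combines results across however many nodes exist. */
          MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

          if (rank == 0)
              printf("%d ranks, total = %g\n", size, total);

          MPI_Finalize();
          return 0;
      }

    Built with mpicc and launched with mpirun, the node count is a runtime choice rather than a source change, which is the portability argument made in the comment above.
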
  • by Weaselmancer ( 533834 ) on Monday September 13, 2004 @02:53PM (#10239028)

    ...doesn't it to you? I mean how long have you been sick of the "imagine a beowulf cluster of those" comments? Doesn't seem like only 10 years would make me that sick of it.

    • I mean how long have you been sick of the "imagine a beowulf cluster of those" comments? Doesn't seem like only 10 years would make me that sick of it.

      Depends... it only took 4 weeks for Floridians to get real sick of hurricanes. :-)

      • Depends... it only took 4 weeks for Floridians to get real sick of hurricanes. :-)

        About 24 hours, actually. The first night was ok because the power had only gone out a few hours previously and it was still really windy out. As a result, the house remained cool.

        By the following night, the winds were gone, the house had been without an air conditioner for 24 hours, and it was really, really humid. After 7 days, there are no words to describe how thoroughly fed up I was with hurricanes :)

        Then, two week
  • Passé? (Score:2, Interesting)

    by Kurt Wall ( 677000 )
    Could it be that Beowulf clusters, however cost-effective and powerful they have become, are passé now that most universities and research institutions have some sort of COTS-based high-performance computing solutions? Not that Beowulf isn't cool - it is - it just doesn't seem as cool as it used to.
    • You mean, Beowulf is dying?
    • Cool. Hmm. Let me see: 200-300W per node * how many nodes. Hot. Very hot.

      It's possible (given how powerful GPUs in graphics cards have become) that one day we will get to see
      *smaller* clusters as all of that "wasted" power in the GPU gets reused for crunching.

      But Don Becker didn't invent this stuff. If anything,
      I'm more grateful that he was masochistic enough to practically be a one-man code engine creating all of the ethernet support for Linux...

      The first "cluster" I read about was one in Byte
      a long
  • by corvair2k1 ( 658439 ) on Monday September 13, 2004 @02:54PM (#10239051)
    ...can be simple. The more complex a problem gets, the more likely you need one supercomputer as opposed to a cluster. It's not elitism, it's just that the problem will probably require a lot of communication between processors.

    Any kind of networking solution between computers will never be as fast as a hard-wired bus can be. If a lot of communication between nodes is required, you will spend more time waiting than computing, which shoots efficiency to hell.
    • by monoi ( 811392 ) on Monday September 13, 2004 @03:06PM (#10239193)

      The more complex a problem gets, the more likely you need one supercomputer as opposed to a cluster.

      I'm not sure it is that simple. For some problems (e.g. Monte Carlo [wikipedia.org] simulations), a more complex problem means more individual nodes are required, with very little inter-node communication. For other kinds of problem (finite element methods, maybe?), you're probably right.

      In other words, the physical structure of the solution depends on the kinds of algorithms that you intend to run: there's not just one `correct' answer.
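
    To make the Monte Carlo point concrete, here is a hedged sketch in C with MPI (illustrative only, not tied to any particular simulation): every rank samples independently, and the only inter-node traffic is a single reduction at the end, which is why this class of problem scales out so well on a cluster.

      /* Hypothetical illustration: estimate pi by sampling the unit square. */
      #include <mpi.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char **argv)
      {
          int rank, size;
          long local_hits = 0, total_hits = 0;
          const long samples_per_rank = 1000000;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          srand(rank + 1);                     /* crude per-node streams, fine for a sketch */
          for (long i = 0; i < samples_per_rank; i++) {
              double x = rand() / (double)RAND_MAX;
              double y = rand() / (double)RAND_MAX;
              if (x * x + y * y <= 1.0)
                  local_hits++;
          }

          /* The single communication step: combine per-node counts on rank 0. */
          MPI_Reduce(&local_hits, &total_hits, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

          if (rank == 0)
              printf("pi ~= %f\n", 4.0 * total_hits / (double)(samples_per_rank * size));

          MPI_Finalize();
          return 0;
      }

    A finite-element code, by contrast, would need neighbouring ranks to exchange boundary data every step, which is where interconnect latency starts to matter.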

  • imagine (Score:5, Funny)

    by justforaday ( 560408 ) on Monday September 13, 2004 @02:55PM (#10239059)
    imagine a lone computer sitting by itself not connected to anything...
  • Winterware (Score:2, Interesting)

    by Eberlin ( 570874 )
    If you could imagine a...ok, well maybe after ten years, we all could. The horse has been so beaten and tenderized that even takko vell wants a piece of the action.

    I've never seen a beowulf cluster personally. I've never run anything on one. However, I do know that it made "supercomputing" more affordable. That in itself is a feat -- and a primary goal of most Open Source software. A proverbial "Hats off" to the open source hackers out there. Thanks... and keep hacking.

    Now if I can gather enough old
    • While with a lot of 486s you would have a fair amount of processing power, if a program attempts to use instructions introduced in later CPUs, chances are it won't work.
  • In Soviet Rus-- oh damn. I pulled out the wrong dead horse.
  • On par? Yes and no (Score:5, Informative)

    by grape jelly ( 193168 ) on Monday September 13, 2004 @02:59PM (#10239099)
    Beowulf clusters have never been the fix-all solution to pricey supercomputer needs. Traditional mainframe supercomputers will forever have their niche in computing that can't just be muscled through sheer volume of vector processes (i.e., processes in which good latency is essential). Even the creator of the Beowulf cluster agrees:

    Quote from the article: *snip!*
    Supercomputer vendor Cray has created a new product that is designed to compete with some Linux clusters. Cray Canada CTO Paul Terry said that Linux clusters really can't compare to a supercomputer. What is your take on Cray's moves against Linux?

    Becker: They are simultaneously saying that Linux clusters are not high-performance computing systems while introducing a product to compete with Linux clusters. They clearly saw that a large part of their customer base was moving toward commodity clusters, Beowulf-class clusters, to do high-end computing.

    Clusters can't replace all of the workload being done by supercomputers today, but it can replace the bulk of the traditional vector supercomputers. There is always that 10% of the market that won't run well on clusters, and that is the market that Cray is in. We are happy to solve most of the problems of the world and run most of the applications and play in our part of the marketplace.
    • yeh - sometimes you actually need the pure performance that a vector processor will give you, without the initial overhead of parallelizing a process to run loosely coupled on a beowulf.

    • Cray has sold Linux clusters before, and now has 2 products that use Linux in some way (Red Storm and the XD1). They have even done some experiments running Linux on the X1 vector supercomputer. Cray certainly isn't making moves against Linux. They would just prefer you to run Linux on their MPP box, rather than a rack full of Dells.

      Mr. Becker has an interest in you using a penguin computing setup, rather than either Dell, or Cray. I must, however, admire the way he didn't get sucked into the interviewer
    • For about the last five years, it didn't matter whether clusters were the best or even a reasonable solution to a number of problems, which I'm not sure was only 10% of the market at the time.

      AFAIK, the cluster proponents sold the NSF and the DoD's HPC office on the idea that they would solve the limitations of "pile of PC" systems in software, the result being that both organizations basically mandated clusters for all new projects. Imagine the CIO of an aerospace firm requiring WinNT henceforth for any a

    • Cray's new product IS a Linux cluster!

      The only advantage it (currently) has is a custom HyperTransport-to-InfiniBand bridge.

      And yes, Cray is trying to claim it's different than a "Linux cluster".
  • I was wondering if it is possible to make some sort of cluster out of old computers I have lying around? Nothing spectacular, just hooking up 3-4 old P2's to make a game server or something of the sort. Is there software out there to do this?

    Has anyone had any experience with this?

    Just a thought...

    • by Anonymous Coward
      ClusterKnoppix [bofh.be] may be just what you're looking for...
    • ClusterKnoppix [bofh.be] may be just what you're looking for...

      Cool... thanks. That's exactly the kind of thing I was looking for. Does anyone have any experience with this? I was wondering what kinds of applications would benefit from a small cluster of relatively slow processors. For example, what single processor would be equivalent to a cluster of 3 P2 300MHz machines? It sounds like it could be a fun, cheap project and a cool way to see how this stuff works on a small scale.

      • What single processor would be equivalent to a cluster of 3 P2 300MHz?

        Maybe a 500-600 MHz CPU. Tops. Seriously. Mainly because of the following: that old computer probably has a 10Mbit NIC, maybe 100Mbit. Can you say latency?

        If the game server you want to run is multi-threaded then you _might_ be able to run different threads on different nodes (using, say, OpenMosix). It'll probably be slow as hell because of the latency. Probably slower than running it on one machine.

        Look, clusters are good for running paral
    • You might take a look at Rocks:
      http://www.rocksclusters.org/ [rocksclusters.org].

      Quite a few people have built Rocks clusters out of a bunch of old computers.

      Disclaimer: I work with the folks who created this.
  • by Wizzy Wig ( 618399 ) on Monday September 13, 2004 @03:09PM (#10239217)
    processing...

    To be considered a "supercomputer," it also needs enough CONTIGUOUS MEMORY SPACE to hold the massive amounts of data associated with true "supercomputing." So far, no cluster has met that requirement.
    • There are certain classes of problems that clusters don't map to well. Applications with a very high cost of inter-processor communication or that demand a huge piece of contiguous memory are probably always going to be outside the realm of clusters.

      However, problems that are embarrassingly parallel can be handled by a cluster very adequately for a fraction of the cost of a traditional supercomputer. I don't know that you can ignore this class of problems and say that clusters aren't "true 'supercomputi

    • To be considered a "supercomputer," it also needs enough CONTIGUOUS MEMORY SPACE to hold the massive amounts of data associated with true "supercomputing."


      Well, that's one way of seeing it I guess. A way not shared by most people in supercomputing, I might add.
    • Huh? Contiguous in what sense? Attached to the same motherboard? In one DIMM? Addressable in one chunk by the OS?

      I've only been to one supercomputing conference, but when I was there most all of the people there ran clusters and the top500 [top500.org] site (although this list is produced by the same supercomputing conference people) lists many clusters there.

      In other words, where does this contiguous memory requirement come from?
    • by Rhys ( 96510 ) on Monday September 13, 2004 @04:42PM (#10240247)
      Sad to see this little knowledge about parallel computing on slashdot: blatantly wrong information marked as informative. +5 no less.

      Let's address this first: there are two common memory architectures, distributed memory (a cluster) and shared memory (a 'traditional' supercomputer). Each can emulate the other. Saying a cluster doesn't have enough memory, presumably at each node, is really saying: "I don't really understand message passing."

      This would be more important if datasets were actually large. Unfortunately for your argument, they aren't. A handful of nodes and they'll hold the whole simulation easily in memory (albeit it'd take years to run because there are so few CPUs at work).

      How would I know? Well, I work with the Center for Simulation of Advanced Rockets, aka CSAR, at UIUC, one of five DoE ASCI sites in the country. I manage their supercomputer, which is getting upgraded from 200 P3-class dual-proc PCs to 640 dual-proc Xserve G5s. Before that I was a grad student working with them, albeit not on the CSAR simulation but instead on a related grant, the CPSD.

      Now, there are computing problems which clusters aren't good at (or at least that's the traditional claim; my master's thesis and advisor would seem to dispute that this is actually the case). However, most problems, as the interview says, run just fine on clusters. Physical simulations (which covers everything from CSAR's rockets to the national labs' nuclear weapon research to hurricane/weather simulation, all the way down to protein folding and atomic- and sub-atomic-scale crystal formation simulation) need to know about what's in the area you're working on, and what's in nearby areas.

      Occasionally you'll find an oddball like galactic simulation (or molecular dynamics) that needs to compute gravity across the whole universe. Fortunately we have multigrid methods and a friendly gravity equation to solve this problem: get real data from those near you; average those far from you and use that instead.

      Then of course there's the idea that even "traditional" supercomputer problems that don't run well on clusters can be run efficiently on clusters IF you move beyond 1 process per CPU. Load up 10, 20, 100, 1000 little workers on a processor. Get fast context switching between them (not OS level!). Use message passing rather than shared memory (locking, ick!) to communicate. One worker blocked waiting for network data? Process the next one! If you've tuned things right you'll find you always have work to do.

      Sounds crazy? Supercomputing '02 didn't think so: http://charm.cs.uiuc.edu/research/moldyn/
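
    A hedged sketch of the nearest-neighbour pattern described above, in C with MPI (a hypothetical illustration, not CSAR's actual code): a 1-D domain is split across ranks, and each timestep every rank exchanges only its two boundary ("halo") cells with its immediate neighbours, so there is no global communication.

      /* Hypothetical illustration: 1-D diffusion with halo exchange. */
      #include <mpi.h>
      #include <string.h>

      #define LOCAL_N 1024            /* interior cells owned by this rank */

      int main(int argc, char **argv)
      {
          int rank, size;
          double u[LOCAL_N + 2];      /* +2 ghost cells, one at either end */

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
          int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

          memset(u, 0, sizeof u);
          if (rank == size / 2)
              u[LOCAL_N / 2] = 1.0;   /* an initial spike somewhere in the middle */

          for (int step = 0; step < 100; step++) {
              /* Halo exchange: send my edge cells, receive the neighbours' edges. */
              MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                           &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                           MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              MPI_Sendrecv(&u[LOCAL_N], 1, MPI_DOUBLE, right, 1,
                           &u[0], 1, MPI_DOUBLE, left, 1,
                           MPI_COMM_WORLD, MPI_STATUS_IGNORE);

              /* Local diffusion update using only owned cells plus the halo. */
              double next[LOCAL_N + 2];
              for (int i = 1; i <= LOCAL_N; i++)
                  next[i] = u[i] + 0.25 * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
              memcpy(&u[1], &next[1], LOCAL_N * sizeof(double));
          }

          MPI_Finalize();
          return 0;
      }

    The same structure extends to 2-D and 3-D meshes: communication grows with the surface of each rank's sub-domain while computation grows with its volume, which is the property that lets these codes run well on commodity interconnects.
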
  • That is what I want to know!

    In all seriousness though, what is the ratio of clusters to big iron in supercomputing nowadays? I know clusters can scale out to a lot of FLOPS, but what is the highest-FLOPS processor available?
  • "Imagine a Beowulf cluster..." jokes are passe. The really hip respond to mentions of Donald Becker's name with oblique references to Steely Dan [wikipedia.org] records.
  • Imagine.... (Score:5, Funny)

    by drkich ( 305460 ) <dkichline AT gmail DOT com> on Monday September 13, 2004 @03:18PM (#10239313) Homepage
    Imagine there's no cluster,
    It's easy if you try,
    No adapter below us,
    Above us only loopback,
    Imagine all the computers
    computing for themselves...

    Imagine there's no internet,
    It isn't hard to do,
    Nothing to download or upload for,
    No porn too,
    Imagine all the computers
    computing pi in peace...

    Imagine no tokens,
    I wonder if you can,
    No need for ethernet or tcpip,
    A brotherhood of computer,
    Imagine all the computers
    Sharing nothing at all...

    You may say I'm a dreamer,
    but I'm the only one,
    I hope some day you'll leave us,
    And the computers will compute alone.
  • Cluster Schedulers (Score:2, Informative)

    by Anonymous Coward
    And GridEngine is free and opensource:

    http://gridengine.sunsource.net/

  • by jaylee7877 ( 665673 ) on Monday September 13, 2004 @03:29PM (#10239421) Homepage
    Donald Becker also has done a large amount of work on Linux network drivers. Grep through linux/drivers/net and you'll find he's done work on Intel NICs, Realtek 8139s, even the ne2000 (I think he said he puked a few times while working on that one). Thanks for all your hard work, Donald!
  • A question for beowulf-savvy folks:

    At the end of the article, the comment is made that one reason for setting up a cluster is ease of management (for updates, applications, etc.). Can anyone with experience comment on whether this is true or not, with the way clustering exists today? I have no experience at all with clusters, and I'm wondering if this is something I should look into to ease administrative burdens.

  • Imagine all the pathetic individual computers you'd have if you took apart a Beowulf cluster.
    I think they did this in Soviet Russia.

  • openMosix (Score:3, Informative)

    by 241comp ( 535228 ) on Monday September 13, 2004 @03:55PM (#10239677) Homepage
    Beowulf isn't the only game in town, folks. A cluster that is much easier to maintain and balance can be built using openMosix [openmosix.org]. openMosix is a single-system-image clustering extension for Linux.
    • openMosix is quite cool; I tried it once and it really impressed me. Although there are no patches for 2.6 yet, does anyone know where the development of this great project is heading?
      • If you're interested in the future development of oM then check out http://openmosix.sourceforge.net/plan.html [sourceforge.net]. This shows that 2.6 patches are planned within the next 6-12 months. oMFS is on its way out, with oGFS, Lustre, and PVFS replacing it. SHM support is to be stabilized. Usability will really make leaps and bounds in the next year if this plan works out.
    • Don't forget OpenSSI [openssi.org] as well. This is a single-system-image clustering product with a long lifespan and great support within HP.
  • He has been such a huge contributor to the Linux platform.

    Thanks,
    Eric
  • Why Beowulf? (Score:3, Insightful)

    by Trogre ( 513942 ) on Monday September 13, 2004 @04:24PM (#10240022) Homepage
    If you maintain a group of networked but otherwise independent computers, for example a student lab or office farm, consider deploying something like PVM or MPI. It's a great way to get some use out of those idle cycles.

    PVM at least scales incredibly well: 25 machines rendering a povray scene take just a fraction over 1/25 of the time taken to render it on one machine. I haven't tested MPI yet (a comparable MPI sketch follows this thread).

    • Raytracing is one of those applications that are incredibly trivial to parallelize. I wouldn't dare hope that your average application will have performance close to that.
    • ...consider deploying something like PVM or MPI.

      Back in the mid-90s we were using PVM (on Solaris boxes) in sequence (DNA/protein) similarity search applications, among other things. It scaled very nicely, provided the target sequence database was distributed across the network also. Very easy to implement and not too difficult to administer, either.

    • Well, duh...

      Most clusters are used to run MPI applications.

      There's no hard line between "cluster" and "networked independent computers". If you want to make some distinction, it could be that cluster nodes are pretty homogeneous, and the cluster has a dedicated network instead of just using the office LAN.
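
    To tie the PVM rendering example above to MPI, here is a hedged sketch of the same work-farming pattern in C (render_tile() is a hypothetical stand-in for the real per-frame or per-tile job): the work units are independent, so the near-linear speedup reported for PVM is exactly what you would expect here too.

      /* Hypothetical illustration: statically farm independent work units over ranks. */
      #include <mpi.h>
      #include <stdio.h>

      #define NUM_TILES 200

      static void render_tile(int tile)    /* placeholder for the real work */
      {
          volatile double x = 0.0;
          for (int i = 0; i < 1000000; i++)
              x += (double)tile * i;
      }

      int main(int argc, char **argv)
      {
          int rank, size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          /* Static round-robin split: tile t goes to rank (t % size). */
          for (int t = rank; t < NUM_TILES; t += size)
              render_tile(t);

          MPI_Barrier(MPI_COMM_WORLD);     /* wait until every rank has finished */
          if (rank == 0)
              printf("all %d tiles rendered by %d ranks\n", NUM_TILES, size);

          MPI_Finalize();
          return 0;
      }
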
  • clusterknoppix (Score:2, Informative)

    ClusterKnoppix [bofh.be] is pretty cool. It's got all the auto-detect hardware features of regular Knoppix, and it also automatically adds itself to the cluster.
  • Why use beowulf? (Score:2, Interesting)

    by axehind ( 518047 )
    Why use Beowulf when you have openMosix? openMosix is all transparent to your application. You don't have to worry about remote execution; openMosix migrates your process automatically to the best node. Like I said, it's all transparent and requires no additional programming in your application.
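
    A minimal sketch of what "transparent" means here, assuming a standard openMosix setup: the program below is an ordinary multi-process C program with no cluster code in it at all. On a single box the forked workers share one machine; under openMosix the kernel can migrate each process to whichever node is least loaded, with no change to the source. The workload itself is a hypothetical CPU burner, purely for illustration.

      /* Hypothetical illustration: plain fork()-based workers, nothing cluster-specific. */
      #include <stdio.h>
      #include <sys/wait.h>
      #include <unistd.h>

      #define WORKERS 8

      static double burn_cpu(int seed)     /* stand-in for a real computation */
      {
          double x = seed;
          for (long i = 0; i < 200000000L; i++)
              x = x * 1.0000001 + 0.5;
          return x;
      }

      int main(void)
      {
          for (int w = 0; w < WORKERS; w++) {
              pid_t pid = fork();
              if (pid == 0) {              /* child: do its slice of the work */
                  printf("worker %d -> %f\n", w, burn_cpu(w + 1));
                  _exit(0);
              }
          }
          for (int w = 0; w < WORKERS; w++)    /* parent: wait for all children */
              wait(NULL);
          return 0;
      }
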
  • Rocks (Score:4, Interesting)

    by kst ( 168867 ) on Monday September 13, 2004 @07:31PM (#10241941)
    Rocks provides an easy way to build a Beowulf cluster. See http://www.rocksclusters.org/ [rocksclusters.org].

    You can build a working cluster, starting with the hardware and installation CD-ROMs, in minutes; see http://servers.linux.com/servers/04/08/27/1943227.shtml?tid=29&tid=94 [linux.com] for one account.

    Disclaimer: I work with the folks who created Rocks.
  • Because it looks like NONE of us remember Amoeba. Amoeba was a free* download, and could be used as an add-on to a *nix filesystem - I think, at the time, I was using Unixware7.0 or somesuch.

    As for me, I'd say that Donald's larger and more reaching contribution to *nix would be the network drivers that he wrote (3cX0Y cards, and the Tulip drivers come to mind).

    * "Free", here, as in beer definitely, and speech likely. I think that the source was provided, but cannot remember.

  • by scoobrs ( 779206 )
    I noticed that this article by the Penguin Computing CTO appears to answer, and contradict, the article by the Cray CTO. All I want to know is this: how much did Penguin Computing and Cray spend on advertising banners on SearchEnterpriseLinux to have these articles made? Let's let the vapor settle before it gets to our heads.
