
Linux Clusters Finally Break the TeraFLOP barrier

cworley submitted - several times - this well-linked submission about a slightly boring topic - fast computers. "Top500.org has just released its latest list of the world's fastest supercomputers (updated twice yearly). For the first time, Linux Beowulf clusters have joined the teraFLOP club, with six new clusters breaking the teraFLOP barrier. Two Linux clusters now rank in the Top 10: Lawrence Livermore's "MCR" (built by Linux NetworX) ranks #5, achieving 5.694 teraFLOP/s, and Forecast Systems Laboratory's "Jet" (built by HPTi) ranks #8, reaching 3.337 teraFLOP/s. Other Linux clusters surpassing the teraFLOP/s barrier include LSU's "SuperMike" at #17 (from Atipa), the University at Buffalo at #22 and Sandia National Lab at #32 (both from Dell), an Itanium cluster for British Petroleum Houston at #42 (from HP), and Argonne National Labs at #46 (from Linux NetworX), which reached just over the one teraFLOP/s mark with 361 processors. In the previous Top500 list, compiled last June, the fastest Intel-based 1024-processor Netfinity clusters from IBM were sub-teraFLOP/s, and the University of Heidelberg's AMD-based "HELICS" cluster (built by Megware) held the top tux rank at #35 with 825 GFLOP/s."
  • by hopbine ( 618442 ) on Sunday November 17, 2002 @04:48PM (#4692156)
    I have often wondered how long it takes to boot one of these things. In the HP-UX world I know how long a K-class takes (sometimes more than 20 minutes). Superdomes are sometimes faster, but not by much.
  • Wow! (Score:4, Interesting)

    by miffo.swe ( 547642 ) <daniel.hedblom@nOSpaM.gmail.com> on Sunday November 17, 2002 @04:50PM (#4692165) Homepage Journal
    1 NEC Earth-Simulator: 35860.00 GFLOP/s
    2 Hewlett-Packard (Los Alamos): 7727.00 GFLOP/s

    The gap between first and second place is pretty impressive. What on earth did NEC do over there?

  • How many FLOPS (Score:2, Interesting)

    by binary tr011 ( 621012 ) on Sunday November 17, 2002 @04:50PM (#4692170)
    Is there a way to tell how many FLOPS my Linux machine gets? I've always wondered.
  • by caluml ( 551744 ) <slashdot@@@spamgoeshere...calum...org> on Sunday November 17, 2002 @04:53PM (#4692186) Homepage
    I built a small Beowulf cluster. It was actually very easy, apart from writing the MPI-enabled code.

    Step 1: Install the LAM/MPI packages on all the nodes.
    Step 2: Create an account on all nodes, and use a passphrase-less ssh key so nothing prompts for a password.
    Step 3: Start the LAM run-time across the nodes with lamboot.
    Step 4: Compile your code with mpicc (rather than gcc).
    Step 5: Copy the binary to all nodes.
    Step 6: mpirun C ./your-prog (LAM's "C" means one copy per available CPU). A minimal example of such a program is sketched below.

    Admittedly it was only a 4-node cluster, but hey ;)

    Please, someone break it to me gently if this wasn't actually a Beowulf cluster ;))
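
    For reference, here is a minimal sketch of the kind of MPI-enabled C program that step 4 compiles. The file name is made up for illustration; the calls themselves (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Get_processor_name, MPI_Finalize) are the standard MPI API that mpicc links against:

        /* hello_mpi.c -- build with: mpicc hello_mpi.c -o hello_mpi */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, size, len;
            char host[MPI_MAX_PROCESSOR_NAME];

            MPI_Init(&argc, &argv);               /* join the MPI run-time */
            MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
            MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count */
            MPI_Get_processor_name(host, &len);   /* which node we landed on */

            printf("Hello from rank %d of %d on %s\n", rank, size, host);

            MPI_Finalize();                       /* leave cleanly */
            return 0;
        }

    After lamboot, mpirun C ./hello_mpi should print one line per CPU in the cluster.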
  • Re:FLOPs (Score:5, Interesting)

    by jelle ( 14827 ) on Sunday November 17, 2002 @05:17PM (#4692331) Homepage
    Since nobody is answering your question: The Top500 supercomputers are ranked [netlib.org] by the results [top500.org] of the LinPack [netlib.org] benchmark.
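
    If you just want a crude number for one box, you can time a known amount of floating-point work yourself. This is a rough sketch, not LinPack, and compiler optimization or cache effects can easily skew it by a factor of a few:

        /* flops_guess.c -- crude FLOP/s estimate, NOT LinPack.
           Build with: gcc -O2 flops_guess.c -o flops_guess */
        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            const long iters = 100000000L;            /* 1e8 iterations, 2 flops each */
            volatile double a = 1.000000001, s = 0.0; /* volatile keeps the loop honest */
            double secs;
            long i;
            clock_t t0 = clock();

            for (i = 0; i < iters; i++)
                s = s * a + a;                        /* one multiply + one add */

            secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
            printf("~%.0f MFLOP/s (s=%g)\n", 2.0 * iters / secs / 1e6, s);
            return 0;
        }

    The Top500 ranks use LinPack's Rmax, which solves a dense linear system and stresses memory as well as the FPU, so expect this toy number to be on the optimistic side.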
  • LinuxBIOS (Score:5, Interesting)

    by bstadil ( 7110 ) on Sunday November 17, 2002 @05:20PM (#4692346) Homepage
    This is not such a dumb question. The LinuxBIOS project [lanl.gov] was started by and for the Los Alamos National Lab [lanl.gov]. One of the nifty things it lets them do is change kernels without taking the machines down, switching to a kernel compiled for a different purpose.
  • by That_Dan_Guy ( 589967 ) on Sunday November 17, 2002 @05:30PM (#4692400)
    I have some students from Boeing who are just in love with Linux. The engineering department there just set up a Linux cluster with 120 nodes for around $100,000 US. They ran tests on it and found it was much faster than the Cray they had previously been using for the same work.

    The comment that struck me most was how easy it was to set up. The engineering IT department is mostly Unix people (they're all in retraining because, believe it or not, they are dumping Sun workstations for Intel-based systems running XP: Intel chips are much faster, machines running XP are much cheaper than Sun SPARCs, and the software they want runs on XP), so it was of course easy for them to set up.

    Next they'll be setting up another Linux cluster with maxed-out dual or quad processor machines and more RAM. They're really excited.
  • Re:EARTH-SIMULATOR (Score:3, Interesting)

    by asavage ( 548758 ) on Sunday November 17, 2002 @05:54PM (#4692528)
    What is interesting about the Earth Simulator is that, for the first time, the world's fastest supercomputer is located outside the USA. Short CNN article here [cnn.com]
  • by dsfd ( 622555 ) on Sunday November 17, 2002 @05:55PM (#4692535)
    We built and maintain a Beowulf with about 70 nodes. We use Debian GNU/Linux.

    I agree with you: in principle it *is* easy to do, but the problems grow with the number of nodes. IMHO, the main problems are:

    - Administration effort per node has to be almost zero. Beyond a certain number of nodes you definitely need things like fully automatic installation, automatic power control, automatic diagnostic tools, a batch system, etc. All these tools already exist, but you need some know-how to put them all together.

    - You need a large enough room with a cooling system that can remove at least 100 W per node (7 kW in our case, for about 70 nodes). Room temperature has to stay around 20°C.

    - Low-cost PC hardware is not always reliable enough for this application. If you have codes that run 24x7 for months on a large number of processors, the probability of hitting a hardware problem is very high.

    We have found that our hardware suppliers do not carry out extensive tests on the systems they sell. This is because "normal" users run low-quality OSs, so suppliers assume it is normal for computers to just hang from time to time. As a result, they do not always detect failures in critical components such as RAM.

    - Of course, your application has to be suitable for parallel computing, especially if your cluster uses a low-cost 100 Mb/s network. In that case, compared to a "conventional" parallel computer (e.g. a Cray T3E), the processors are roughly equivalent but the network is about 10 times slower and is easily the bottleneck of the system (a back-of-envelope illustration follows below).
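
    To put illustrative numbers on that bottleneck (the constants below are assumptions for the sake of the example, not measurements: ~12.5 MB/s of bandwidth and ~100 us of latency for 100 Mb/s Ethernet, ~1 GFLOP/s per node), a tiny calculation shows what a single small message costs in lost computation:

        /* net_vs_cpu.c -- back-of-envelope: message cost vs. computation.
           All constants are illustrative assumptions. */
        #include <stdio.h>

        int main(void)
        {
            double flops_per_sec = 1e9;  /* assume ~1 GFLOP/s per node */
            double bandwidth = 12.5e6;   /* 100 Mb/s Ethernet ~ 12.5 MB/s */
            double latency = 100e-6;     /* assume ~100 us per message */
            double msg_bytes = 8192.0;   /* an 8 KB message */

            double msg_time = latency + msg_bytes / bandwidth;
            printf("one 8 KB message costs ~%.0f us, i.e. ~%.0f kflops of lost work\n",
                   msg_time * 1e6, msg_time * flops_per_sec / 1e3);
            return 0;
        }

    A network ten times faster cuts that cost roughly tenfold, which is exactly the factor separating cheap Ethernet from a T3E-class interconnect.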

    Having said that, despite all the problems, I love Beowulfs. They have totally changed high performance computing, and they are definitely here to stay.

    All this has been possible thanks to free software, so thanks Mr. Stallman/Torvalds and many others...
  • CNN and MS Bias (Score:2, Interesting)

    by Anonymous Coward on Sunday November 17, 2002 @06:01PM (#4692555)
    I noticed that the CNN article, which I read before my daily scan of /., did not bother to mention that the clusters were running Linux. If there is something non-MS software can do that MS garbage can't do well, you can bet the mainstream news will not report it, even when it is relevant.

    As a side note, I find it rather funny that, technical issues aside, one cannot legally cluster Macrohard systems because the EULAs get in the way!
