
Linux Clusters for sale

Fred M writes "For use in high-performance computing, Siemens has built its hpcLine systems and showed them at a customer meeting in February. The systems are based on CPU modules, each consisting of 2 dual-CPU boards. Each board carries 2 Pentium CPUs - currently PIIs at 450 MHz - and up to 2048 MB of memory. 8 modules (= 32 CPUs) can be mounted in one rack. The modules are connected via the Scalable Coherent Interface (SCI), which has a bandwidth of 500 MByte/s and uses a ring topology. A story (in German) can be found here."
  • Any VME-based Linux clusters?


    I don't remember much about the VME bus (is it still used?), but would that be any better than using a 100 Mbit Ethernet backbone? It's an industrial 32/64-bit-wide bus as far as I recall, right?


    Of course, the price tag attached to such a system wouldn't compare very well to a normal cluster's, but what about the speed increase?


    Does anybody know more about VME, and whether Linux supports the VME architecture?

    ---

  • Siemens developed the hpcLine specifically for use in High Performance Computing. The machines were presented to customers late in February. The system is based on a modular architecture: each module consists of two dual-CPU boards, and each board can carry two Pentium CPUs - currently PIIs at 450 MHz - and a maximum of 2048 MB RAM. Eight of these modules can be put into one rack, yielding a system with 32 CPUs.

    For communication between the nodes, Siemens uses the Scalable Coherent Interface (SCI), which delivers a bandwidth of 500 MB/s in a ring topology. The SCI cards are made by Dolphin. The communication software, an implementation of the Message Passing Interface (MPI), has been developed by Scali (a minimal sketch of ring-style MPI code follows this comment).

    For Fortran, High Performance Fortran, C, and C++, the Portland Group's compilers are available through Pallas.

    Systems of this type have already been deployed at the Paderborn Center for Parallel Computing and at RWTH Aachen.

    Siemens offers Windows NT, Solaris x86, and Linux as operating systems for these machines. The entry-level package - an 8-node system with 16 PII 450 MHz CPUs, 512 MB RAM, and 4.3 GB of disk space per node - comes in at 130,000 Deutschmarks (roughly $75-80K).

    This continues the trend towards the professional use of Linux in Clusters.
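
    For readers curious what running code on such a cluster looks like: below is a minimal sketch of passing a token around a ring of MPI processes, loosely echoing SCI's ring topology. It uses only standard MPI-1 calls; Scali's actual MPI implementation is not shown, and the build/launch commands (mpicc, mpirun) vary between MPI distributions.

    ```c
    /* ring.c - minimal sketch: pass a token around a ring of MPI ranks.
     * Standard MPI-1 calls only. Typically built and run with something
     * like "mpicc ring.c -o ring" and "mpirun -np 32 ./ring". */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, token, next, prev;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        next = (rank + 1) % size;         /* neighbour we send to   */
        prev = (rank + size - 1) % size;  /* neighbour we recv from */

        if (rank == 0) {
            token = 42;  /* arbitrary payload */
            MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, &status);
            printf("token passed through all %d ranks\n", size);
        } else {
            MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, &status);
            MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }
    ```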
  • At 22 watts each, the 32 PIIs would consume 704 W. Hard disks and memory would add another ~1200 W. That's about 2000 W, or 20 light bulbs (recomputed in the sketch below).
    Now we're nothing.
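
    A quick recomputation of those numbers (every wattage figure here is the parent comment's rough assumption, not a Siemens spec):

    ```c
    /* power.c - back-of-the-envelope power estimate for a 32-CPU rack.
     * All figures are the parent comment's assumptions, not specs. */
    #include <stdio.h>

    int main(void)
    {
        const double cpus          = 32.0;   /* 8 modules x 4 CPUs      */
        const double watts_per_cpu = 22.0;   /* assumed PII-450 draw    */
        const double other_watts   = 1200.0; /* assumed disks + memory  */
        const double bulb_watts    = 100.0;  /* one ordinary light bulb */

        double cpu_watts   = cpus * watts_per_cpu;    /* 704 W  */
        double total_watts = cpu_watts + other_watts; /* ~1900 W */

        printf("CPUs: %.0f W, total: ~%.0f W (~%.0f light bulbs)\n",
               cpu_watts, total_watts, total_watts / bulb_watts);
        return 0;
    }
    ```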
  • We've seen that g77 from egcs 1.0.3 generates code that's up to 50% slower than that generated by the Portland Group's pgf77 (v3.0) on Linux/x86. My guess is that the difference between g77 on Linux/Alpha and the DEC Fortran compiler is at least as great, and probably larger. The problem is, you can't legally use the DEC Fortran compiler under Linux/Alpha, and I don't think the Portland Group or Absoft have plans to support Linux/Alpha either. That leaves us with only g77, which generates fairly lousy code. (A sketch of how such a timing comparison is set up follows this comment.)

    (Responses to the effect of "Use C!" will be gleefully ignored. Believe it or not, most scientific programming is still done in Fortran.)

    --Troy
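
    For illustration, this is roughly how such compiler comparisons are made: compile the same kernel with each compiler at the same optimisation level and time it. A hedged sketch only - the kernel, problem size, and the use of C rather than the Fortran the comment is about are all arbitrary choices here.

    ```c
    /* bench.c - sketch: time one numeric kernel to compare compilers.
     * Build this file with each compiler (same flags, e.g. -O2) and
     * compare the reported times. Kernel and size are arbitrary. */
    #include <stdio.h>
    #include <time.h>

    #define N 1000000

    static double x[N], y[N];

    int main(void)
    {
        int i, rep;
        clock_t t0, t1;

        for (i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

        t0 = clock();
        for (rep = 0; rep < 100; rep++)    /* repeat for a measurable */
            for (i = 0; i < N; i++)        /* interval                */
                y[i] = y[i] + 3.0 * x[i];  /* DAXPY-style kernel      */
        t1 = clock();

        printf("y[0] = %f, time = %.2f s\n",
               y[0], (double)(t1 - t0) / CLOCKS_PER_SEC);
        return 0;
    }
    ```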
  • With IBM embracing Linux, I wonder if they will consider setting up an RS/6000 SP cluster running Linux.

    That wouldn't be too bad. The Power2 and Power3 processors are pretty fast (although their L2 cache sizes are kinda small compared with the Alpha 21264 and MIPS R10k/R12k), and the SP switch's performance is respectable. The memory bandwidth on the SMP SP nodes is kinda sucky, though; the memory's on a shared bus, like on an SMP Intel box. It'd be nice if they'd put in a memory crossbar switch like Sun uses. That'd drive the price way up, though. Compilers would be a big problem, too (just as they are now on Alpha); gcc and g77 don't cut it for high-performance code.

    We have an older SP-2 where I work, and we're trying to get rid of it in favor of a comparably sized Beowulf. Almost everybody hates AIX and LoadLeveler (IBM's eeeevil batch system) with a passion, and the maintenance costs are murder.

    --Troy
  • With IBM embracing Linux, I wonder if they will consider setting up an RS/6000 SP cluster running Linux.

    "In true sound..." -Agents of Good Root
  • In that case one would have to look into the Portland Group's high-performance development tools.

    "In true sound..." -Agents of Good Root
  • Babelfish: http://babelfish.altavista.digital.com/cgi-bin/translate?lp=de_en&urltext=http%3a%2f%2fwww%2eheise%2ede%2fnewsticker%2fdata%2frh-09.03.99-000

    Munges it a bit, but you can get the idea :-)
  • Everybody, please check www.microway.com [microway.com] before speaking of Yummy, etc. There is no such thing as a Yummy x86-based system.

    On the subject, Microway has been offering Beowulf Alpha clusters for quite a while. They were actually even supposed to present one at Linux Expo, but it looks like nobody noticed it. This is understandable because, as you can see from the pictures, their stuff does not look that shiny. But it works...

    And I bet that it can blast any x86 based box out of the sky...
  • Yep, cost is the issue -- WAS: Yummy!

    If they start charging >$5K for these things, the price just moves closer to Sun/SGI solutions.

    I hope the Corel rumor about a 2+8 cluster that fits in a 9U (or similar) rack form factor is true.

    Why hasn't anyone come up with a low-profile form-factor cluster of Socket 370 or Slot 1 CPU mainboards yet? The market is there!

  • by Ellis-D ( 19919 )
    I should have paid attention in German class... But it sounds interesting... What happened to the first posts?!?!
  • I have ported the Scali SCI driver to Linux as part of my work at the University of Paderborn. Our research group would like to see an open-source driver, but it contains too much of Scali's code, and they do not allow us to release the source.

    Currently the PC^2 (Paderborn Center for Parallel Computing, part of the university) operates two large SCI clusters: a smaller one with 64 PII 300s (32 dual-processor nodes) and a larger one with 192 Xeon 450s (96 dual-processor nodes).

    For more info have a look at:

    http://www.uni-paderborn.de/cs/heiss/arminius/
