
Submission + - Best Use For A New SuperComputer (HPC) 3

Supp0rtLinux writes: In about two weeks' time I will be receiving everything necessary to build out the largest x86_64-based supercomputer on the east coast of the US (at least until someone takes the title away from us). It's spec'd to start with 1200 servers in dual-socket, six-core configs. We primarily do life-science/health/bio-related tasks on our existing (and fairly small) HPC. We intend to continue this usage, but also to open it up for new uses (energy comes to mind). Additionally, we'd like to lease out access to recoup some of our costs. So what's the best Linux distro for something of this size and scale? Any that include a chargeback option/module built in? Additionally, due to cost, we have to choose either IB or 10GbE for the backend; we cannot have both. Either way, all nodes will have 4 x 1Gbps ports available. Would Slashdot readers go with IB or 10GbE if they had to choose? And last, all nodes include only a basic onboard GPU. We intend to put powerful GPUs into the PCIe slots and open up the new HPC for GPU-related crunching. Any suggestions on the most powerful, Linux-driver-friendly, PCIe-based GPU available?
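Whatever scheduler or distro ends up handling the accounting, the chargeback rate itself is simple arithmetic on billable core-hours. A minimal sketch — the node and core counts come from the post, but the recovery target and utilization figure below are hypothetical placeholders:

```python
# Back-of-envelope chargeback rate for the cluster described above.
# Node/core counts are from the post; dollar target and utilization
# are hypothetical assumptions for illustration only.

NODES = 1200
CORES_PER_NODE = 2 * 6            # dual socket, six cores per socket
HOURS_PER_YEAR = 24 * 365

total_cores = NODES * CORES_PER_NODE
core_hours_per_year = total_cores * HOURS_PER_YEAR

# Hypothetical: recoup $1M/year of costs at 70% average utilization.
target_recovery = 1_000_000
utilization = 0.70
billable_core_hours = core_hours_per_year * utilization

rate_per_core_hour = target_recovery / billable_core_hours
print(f"{total_cores} cores, {core_hours_per_year:,} core-hours/yr")
print(f"break-even rate: ${rate_per_core_hour:.4f} per core-hour")
```

At 14,400 cores the break-even rate lands around a penny per core-hour under these assumptions; real chargeback would also meter GPU-hours and storage, which schedulers like Slurm can track via their accounting databases.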
This discussion was created for logged-in users only, but now has been archived. No new comments can be posted.

  • So, you're receiving between $10M and $50M in hardware, plus ongoing funding for leased floorspace, power, cooling, and operations support. YOU DON'T KNOW WHAT YOU'RE USING IT FOR?

    You're either a troll, or you're fired.

  • You're about to receive a large amount of hardware from the vendor, and you haven't decided which GPUs to use, which interconnect for communications, what OS would be appropriate, or the types of workloads your users will be running (beyond your base set)? Really? If that's the case, no amount of information from Slashdot will solve your problems.

    If you have no interconnect chosen, how will you rack the systems if cable lengths turn out to be an issue, as they are for IB? Do you even have nodes that n
