SGI & NASA Plan 10240-Processor Altix Cluster

green pizza writes "NASA has announced plans to cluster twenty 512-processor Silicon Graphics Inc Altix supercomputers connected to a 500-terabyte SGI InfiniteStorage SAN. The Altix uses Itanium2 CPUs running Linux atop an Origin 3000-derived architecture. NASA and SGI scaled Linux to 512 CPUs late last year. There are also strong hints that SGI plans to bring its clustered ATI graphics to Altix in the near future. Lots of neat big iron projects on the horizon!"


  • by notbob ( 73229 ) on Wednesday July 28, 2004 @10:21AM (#9822113)
    What would you do with 10k processors hooked up to 500 terabytes? Sounds like you could replace every machine NASA has with an account on this thing.

    Sounds quite insane, I'd love to see the practical reasons for this.
    • by turgid ( 580780 ) on Wednesday July 28, 2004 @10:23AM (#9822157) Journal
      Sounds quite insane, I'd love to see the practical reasons for this.

      With the heat given off by all those Itanics, I'm sure they could do some pretty good real-world research into heat shield materials and rocket engine nozzles.

    • by MoonFog ( 586818 ) on Wednesday July 28, 2004 @10:25AM (#9822174)
      NASA has picked computer maker Silicon Graphics Inc. and chipmaker Intel to develop a major supercomputer based on Linux to simulate space exploration and conduct other research, SGI announced Tuesday.

      Read it here [com.com]
      • by Timesprout ( 579035 ) on Wednesday July 28, 2004 @10:41AM (#9822351)
        For the conspiracy theorists who believe the moon landings were faked, a collaboration between NASA and SGI on simulated exploration will just provide a basis to think that the Mars missions will not only be faked, they won't even use real actors next time, and the whole mission will be CGI.

        So when we do 'land' on Mars, if the astronauts burst into a song and dance extravaganza during the flag-planting ceremony, then the job was probably outsourced to India. If the ceremony involves a 10-hour trek up a mountain and is interrupted by hordes of attacking Martians that must be defeated by the 6 astronauts, then they probably got Peter Jackson to do it. If the whole mission is really lame and not quite what you were hoping for, look no further than George Lucas.
    • Calculating things very quickly maybe? Just a thought. 500TB? I've never seen the need for more than 640k.

    • For an aerospace org, a cluster of this type would be used primarily for aerodynamic analysis work.
    • by Anonymous Coward
      Put Windows ME on it and see how many times you can open and close Photoshop before you run out of memory.
    • by RPI Geek ( 640282 ) on Wednesday July 28, 2004 @10:38AM (#9822318) Journal
      RTFA:
      By boosting its computing capacity ten-fold through Project Columbia, the NASA Advanced Supercomputing Facility (NAS) will be able to more effectively handle such critical projects as simulating future space missions, projecting the impact of human activity on weather patterns, and designing safe and efficient space exploration vehicles and aircraft. The present collaboration builds upon the highly successful 8-year partnership that last year developed the world's first 512-processor Linux server - based on standard, "off-the-shelf" microprocessor and open source technology - the SGI Altix at NASA Ames Research Center named 'Kalpana' after Columbia astronaut and Ames alumna Kalpana Chawla.

      Modeling and building on a business relationship.
    • What would you do with 10k processors hooked up to 500 terabytes?

      It can do realtime hologram movies, and pop the popcorn while you're watching it.

    • Real-time Computational Fluid Dynamics for an entire spacecraft including modeling the interactions of the layers and behavior at boundaries. Or maybe they can actually balance the NASA checkbook now! :)
    • ...well.... (Score:5, Interesting)

      by mhore ( 582354 ) on Wednesday July 28, 2004 @01:24PM (#9823350)
      I don't know what NASA would do with this, but I know what our group would do with it.

      We always need machines. You could give me 1024 machines and I'd still need more.

      For example, I study fluids currently. I may simulate 4,000,000 particles and it may take 3 weeks for my simulation to finish. If I had 10240 nodes, it may only take a day. Or perhaps I could simulate MORE particles for longer. There are all sorts of advantages to having this many machines hooked up.

      One thing I can tell you for sure is that there most likely will not be *1* job that uses all of these at once. There are probably several researchers using it simultaneously, each with a slice of the machines. Press releases like this are often misleading because usually the CPUs are split between several jobs and researchers and research groups and whatnot.

      Not to steal NASA's thunder -- a cluster this big is impressive.

      Mike.
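
      As a back-of-the-envelope check on numbers like "3 weeks down to a day", Amdahl's law is handy. Here is a minimal C sketch; the 3-week baseline matches the post above, but the 95% parallel fraction is an illustrative assumption, not a measured figure:

        #include <stdio.h>

        /* Amdahl's law: speedup on n CPUs when a fraction p of the work
         * parallelizes perfectly and the remaining (1 - p) stays serial. */
        static double amdahl_speedup(double p, int n)
        {
            return 1.0 / ((1.0 - p) + p / n);
        }

        int main(void)
        {
            const double baseline_hours = 21.0 * 24.0; /* 3-week serial run */
            const double p = 0.95;          /* assumed parallel fraction */
            const int procs[] = { 64, 512, 10240 };

            for (int i = 0; i < 3; i++) {
                double s = amdahl_speedup(p, procs[i]);
                printf("%5d CPUs: %7.1fx speedup, %8.1f hours\n",
                       procs[i], s, baseline_hours / s);
            }
            return 0;
        }

      With those assumptions, 10240 CPUs top out near 20x rather than 10240x, which is exactly why "simulate MORE particles for longer" (growing the problem with the machine) is usually the realistic win.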
    • It is to create realistic movies of men walking on Mars. There were many complaints about the Moon movies, so they have to do something about that. on the whole, the super computer is much cheaper than actually going to Mars, so it is money well spent and accords with the latest budget reductions...
  • by turgid ( 580780 ) on Wednesday July 28, 2004 @10:21AM (#9822116) Journal
    This is great news for Intel. They will double the number of Itanics shipped in a single deal!
    • by Agent Green ( 231202 ) * on Wednesday July 28, 2004 @10:24AM (#9822162)
      Good news for Intel indeed, but wouldn't the same deployment with AMD Opterons have been cheaper AND faster??
      • by musikit ( 716987 ) on Wednesday July 28, 2004 @10:25AM (#9822171)
        Aye, but they most likely would have spent the saved money on air conditioning.
      • by Wesley Felter ( 138342 ) <wesley@felter.org> on Wednesday July 28, 2004 @10:28AM (#9822211) Homepage
        Itanium has better floating-point performance than Opteron, although the price/performance is worse. There are no 512-way Opteron systems; maybe NASA likes to write shared-memory parallel applications.
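
        To unpack "shared-memory parallel" a little: on a single-system-image machine like a 512-CPU Altix, one program sees all processors and all memory at once, so code can look like ordinary threaded/OpenMP code instead of message passing. A minimal, hypothetical OpenMP sketch of the style (not anything from NASA's actual codes):

          #include <stdio.h>
          #include <omp.h>

          #define N 10000000

          int main(void)
          {
              static double a[N], b[N];
              double dot = 0.0;

              /* Every thread touches the same arrays directly; on a NUMA
               * machine like Altix remote memory is simply addressable,
               * just slower than local memory. */
              #pragma omp parallel for reduction(+:dot)
              for (long i = 0; i < N; i++) {
                  a[i] = i * 0.5;
                  b[i] = i * 0.25;
                  dot += a[i] * b[i];
              }

              printf("dot = %g using up to %d threads\n",
                     dot, omp_get_max_threads());
              return 0;
          }

        The contrast is with message-passing clusters, where the same dot product needs explicit sends and receives between nodes that share nothing.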
        • by turgid ( 580780 ) on Wednesday July 28, 2004 @11:38AM (#9822720) Journal
          There are no 512-way Opteron systems; maybe NASA likes to write shared-memory parallel applications.

          Not yet, but Cray is working on it in something called Red Storm.

          Itanium's "better" floating point performance than Opteron is confined to some pretty specialised benchmarks. Overall, Opteron is a more efficient design, runs cooler than Itanium, has better compilers, better software support, is cheaper and has more room to scale to much higher clock speeds.

          • Isn't Cray just a division of SGI now? It seems that they are moving more and more in the direction of the old Cray.
            • by turgid ( 580780 )
              Isn't Cray just a division of SGI now?

              No, SGI bought part of Cray a few years back and Sun bought another part (that's where the Sun E10k came from). SGI sold its part of Cray to a company called Tera, which then changed its name to Cray.

          • ...to hook those babies together in multiprocessor clumps that can exchange data amongst themselves really, really quickly.

            If Opterons don't win on raw FP performance (which in itself is debatable), they'd absolutely hammer (ha!) the Intel chips once IO and the cost of support chips were factored in.

            I'm betting Intel chips were chosen for (supplier-)political reasons.

      • by cnkeller ( 181482 ) <cnkeller@@@gmail...com> on Wednesday July 28, 2004 @10:31AM (#9822248) Homepage
        Good news for Intel indeed, but wouldn't the same deployment with AMD Opterons have been cheaper AND faster??

        Well, until the final numbers come out, we aren't speculating on performance. Needless to say, we hope to claim the top slot in computing power. Also, keep in mind that parts availability is a major concern. We are assembling the system to be fully up and running by SuperComputing '05 in November. Intel has fully committed to delivering all 10K CPUs with no problems. Also, perhaps the biggest reason for Intel is that SGI was chosen as the vendor, and they use Intel.

      • by Shinobi ( 19308 ) on Wednesday July 28, 2004 @11:31AM (#9822701)
        Agent Green:

        Cheaper? Not likely: you'd have to buy the high-speed interconnect to make it worthwhile. And the Opterons perform fairly poorly in larger clusters, since they have the NUMA latency penalties locally on each node. Checking the Top500 list, a cluster of 256 Opteron 246s using Infiniband will perform worse than a cluster of 256 Xeon 2.8GHz using Infiniband. The scariest example is that a cluster of 256 P4s @ 3GHz using Gigabit Ethernet outperforms the Opteron cluster...

        It's important to note that the Linpack test doesn't stress the interconnect that much. The more a task stresses the interconnect, the more the Opteron cluster will be penalized. There's one exception, though, and that's the Cray OctigaBay systems... and if you go that route, it costs _at_ _least_ as much as an Altix system.
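
        A toy model makes the interconnect point concrete: if each timestep is some compute followed by a neighbor exchange, parallel efficiency is compute time over compute-plus-communication time. All the numbers below are made up for illustration (they are not measured Infiniband or GigE figures):

          #include <stdio.h>

          /* Per-step efficiency when t_comp seconds of compute are followed
           * by exchanging msg_bytes over a link with the given latency
           * (seconds) and bandwidth (bytes/second). */
          static double efficiency(double t_comp, double latency,
                                   double bandwidth, double msg_bytes)
          {
              double t_comm = latency + msg_bytes / bandwidth;
              return t_comp / (t_comp + t_comm);
          }

          int main(void)
          {
              const double t_comp = 1e-3; /* 1 ms compute per step (assumed) */
              const double msg = 1e5;     /* 100 KB per step (assumed) */

              /* Hypothetical "fast" link: 5 us latency, 1 GB/s. */
              printf("fast interconnect: %.0f%% efficient\n",
                     100.0 * efficiency(t_comp, 5e-6, 1e9, msg));
              /* Hypothetical "slow" link: 50 us latency, 100 MB/s. */
              printf("slow interconnect: %.0f%% efficient\n",
                     100.0 * efficiency(t_comp, 50e-6, 1e8, msg));
              return 0;
          }

        The more often a code has to stop computing and talk (smaller t_comp, bigger messages), the harder the slow link gets punished, which is exactly the effect Linpack mostly hides.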
      • SGI and Cray had been working on I2 systems for at least a year or two before Opteron was released. People dismiss Itanium2, but it does perform pretty well, much better than the original Itanium.

        I think Itanium has some features not available in Opteron. One I know of for sure is lock-stepping for extra fault tolerance; according to an AMD engineer I asked, AMD has no plans for it.
    • by Anonymous Coward
      This is great news for Intel. They will double the number of Itanics shipped in a single deal!

      yes, for sure. they bought a congressman to make this happen. (no joke, trust me.)

      and as usual, real science at nasa is going to suffer as money is wasted on unneeded computing capacity just so the US can prove we have a bigger dick than the japanese.

      -pissed off nasa worker
    • With this cluster, Intel will have doubled the number of Itanium 2 sales for the YEAR!
  • Doom (Score:2, Funny)

    by Klar ( 522420 )
    Wonder if that thing could play Doom3?
  • by Anonymous Coward

    I really do wonder: why did SGI and IBM invest so much time and money in Linux, instead of e.g. NetBSD [netbsd.org]? I understand IBM is currently using Linux to push their middleware and J2EE stuff, but they could just as well use a BSD and not even need to give stuff back to the community.

    Mike Bouma

    • by WindBourne ( 631190 ) on Wednesday July 28, 2004 @10:32AM (#9822260) Journal
      The reason? The license. While the BSD license really is the most free, it would allow IBM to put a lot of effort into it, and then have MS swoop in, modify it, and sell it with all sorts of closed APIs, etc.

      In essence, the BSD license would allow the creation of another Unix model where the core is identical or just similar, but the APIs would be used to lock users in. How would that solve IBM's problem? Or, for that matter, any hardware vendor's problem? It would not.
      • The reason? The license. While the BSD license really is the most free, it would allow IBM to put a lot of effort into it, and then have MS swoop in, modify it, and sell it with all sorts of closed APIs, etc.

        In essence, the BSD license would allow the creation of another Unix model where the core is identical or just similar, but the APIs would be used to lock users in. How would that solve IBM's problem? Or, for that matter, any hardware vendor's problem? It would not.


        Finally, an answer that doesn't involve ranting.
      • Not to mention the hassle of supporting multiple platforms. NASA (as well as IBM) currently has a lot of expertise with Linux, not to mention the installed base. If there is no good reason (price? No. Performance? No. Security? I doubt this thing would be on a publicly accessible network, so no. Etc.), why change?

      • While the BSD license really is the most free, it would allow IBM to put a lot of effort into it, and then have MS swoop in, modify it, and sell it with all sorts of closed APIs, etc.

        No, they wouldn't, because under the BSD license, IBM wouldn't have to publicly distribute their modifications to the NetBSD code.

        The real reason IBM went with Linux? They have more expertise with it than with any of the BSDs.

        • Actually, when they made the decision, they had more coders inside who had far more experience with BSD. The reason was the license, plain and simple.

          I could also tell you that I used to work at IBM and still have friends there who told me all that. But instead, I will point out that OSs are loss-leaders. When I worked at IBM, Uncle Lou took over. Just prior to that, OS/2 was near to being OSS, but Lou stopped it. I think his rationale was to have a weapon against MS. But that failed big, and the OS was l...

    • NetBSD is hardly the OS you'd want running on one of these machines. If you had to pick a BSD, I suppose it would be FreeBSD. However, I think Linux scales further, which would help explain why you see it on the big clusters instead of BSD.

      NetBSD for portability.
      OpenBSD for security.
      FreeBSD for well I'm not actually sure, I use Linux instead.
  • Can it run a JVM running on a Windows box and still be able to refresh the graphics?
  • by levram2 ( 701042 ) on Wednesday July 28, 2004 @10:26AM (#9822184)
    I'm guessing that NASA found out Doom 3 has a software renderer and are buying the minimum specs.
  • I miss IRIX (Score:1, Offtopic)

    by chegosaurus ( 98703 )
    Please mod me down.
  • by Ars-Fartsica ( 166957 ) on Wednesday July 28, 2004 @10:34AM (#9822287)
    I commend SGI for finding a way to survive in a brutal post-workstation, post-proprietary-Unix world - for a bit there it looked like they were going to be a candidate for an office furniture auction... but the stock is about to enter the penny range. It will be hard for SGI to attract serious capital if they go sideways in a range under $1, and they will once again court delisting.

    Good luck SGI, the Valley is rooting for its former star, and so are a lot of stock speculators.

    • I'm not a market whiz by any means, but how does a low stock price (assuming other, positive indicators) influence whether a company can survive or not? Once the stock is sold by the company, they don't make any further money on its continued sale.

      A stock whose price continues to climb can allow the company to essentially print money by issuing new stock (if the price climbs fast enough, existing shareholders don't generally notice or care that you're diluting the pool), but beyond that, how does share price...
  • by pixas ( 711468 ) on Wednesday July 28, 2004 @10:38AM (#9822326)
    So NASA is planning to upgrade to Longhorn then?
  • by SunPin ( 596554 ) <slashspam@@@cyberista...com> on Wednesday July 28, 2004 @10:40AM (#9822343) Homepage
    to fake a human settlement on Mars.
  • Homo Zapiens (Score:2, Interesting)

    by xenostar ( 746407 )
    In a wonderful book "Homo Zapiens" by Victor Pelevin, the leaders of the world are rendered on clusters of SGI machines by a secret organization. Makes you wonder when you hear about these clusters :)
  • There are also strong hints that SGI plans to bring its clustered ATI graphics to Altix in the near future.

    I thought that SGI sold a lot of their graphics IP (including many of their top graphics engineers) to NVIDIA a while back, and still has agreements with them. Their IRIX systems sell with VPro graphics cards, which I believe are repackaged NVIDIA chips with a few extras...

    Or did I miss something?

    d.
    • The MIPS/IRIX systems have VPro graphics, yes. But those are not from NVIDIA. VPro for MIPS/IRIX is the last chipset to be developed in-house.

      The confusion comes from the fact that SGI marketing thought it would be a good idea to give both the PC and IRIX graphics cards the same brand: VPro.

      They currently don't have anything newer for their workstations, but their newest Onyx (visualization system) computers use a couple of ATI cards for their graphics. It's called the UltimateVision [sgi.com].

    • by lweinmunson ( 91267 ) on Wednesday July 28, 2004 @01:08PM (#9823174)
      I thought that SGI sold a lot of their graphics IP (including many of their top graphics engineers) to NVIDIA a while back, and still has agreements with them. Their IRIX systems sell with VPro graphics cards, which I believe are repackaged NVIDIA chips with a few extras...

      Or did I miss something?


      Yes. The VPro series only resembles NVIDIA chips because after it was completed, most of the team went to work for NVIDIA and created the GeForce with lots of the same ideas behind it. So the original GeForce chips were more like cut-down VPros than the VPros were souped-up GeForces, if that makes any sense.
  • I'm here in San Jose at the NYLF conference. The head of the NASA Ames center talked about this yesterday. It was pretty impressive. All the stuff they're doing is pretty impressive. Within the next week I am going there, so I may be able to see it. Maybe even set up an SSH server, hehehe.
  • by DeathPenguin ( 449875 ) * on Wednesday July 28, 2004 @10:48AM (#9822439)
    Just curious. My guess is that Intel keeps pumping money into SGI to get Altix systems out and those who have them (LLNL and ...?) got them at practically no charge to run Linpack and look good on the Top500 [top500.org] list.
    • I doubt Intel is pumping a lot of cash into SGI, but they may have cut them a real deal on the chips. When I first read about this computer I thought "what a coup for SGI." Then I read the dollar mark and thought "what a coup for NASA." $45 million including storage and Fibre Channel? That's less than $2 million per on those 512-proc Altix boxes. They're not making much margin on those.

      To counter all of their detractors, Itanium 2s are pretty hot processors, and SGI has done an amazing job getting Linux to run...
      • Ouch. I guess it is getting continually tougher to justify the single-system supercomputer approach. Someone mentioned that Linpack doesn't emphasize interconnect performance enough. I think it is worth noting that the I2 processors are a lot cheaper now than they were a year ago; 1.4 GHz I2s with 3MB cache are selling for $1400 on Pricewatch, which is a pretty good price for big-iron type CPUs. The PR doesn't say what speed and L3 size the CPUs use, though.
  • ...is that by current progress, in 10 years you'll be able to get a consumer desktop with this much power.

    Still.. just imagine how much SETI@Home you could do on a beo.. err, on one of those!
    • This system should be 1,000 to 10,000 times faster than your fastest desktop. The real trend the last 40 years or so has been about a 10x increase every 10 years on the desktop. Sure, my desktop is faster than a Cray from the 70s, but not faster than any top contender from the 90s.

      We have had a huge bump-up in GFLOPS for supercomputers this last decade. In 1993 the top system was about 60 Gigaflops vs. about 40 Teraflops today (see top500.org), while a top-of-the-line Pentium 4 today is at about 5 Gigaflops...
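
      Turning those endpoint figures into a growth rate is a two-line calculation. The 60 GFLOPS (1993) and 40 TFLOPS (2004) numbers are the poster's; everything else is just arithmetic:

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            /* Poster's figures: ~60 GFLOPS top system in 1993,
             * ~40 TFLOPS in 2004 (11 years later). */
            double ratio  = 40e12 / 60e9;            /* ~667x overall */
            double annual = pow(ratio, 1.0 / 11.0);  /* implied yearly growth */

            printf("total: %.0fx  annual: %.2fx/yr  per decade: %.0fx\n",
                   ratio, annual, pow(annual, 10.0));
            return 0;
        }

      That works out to roughly 370x per decade at the top of the Top500, versus the ~10x per decade the poster cites for desktops, which puts the "huge bump-up" in a single number.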

      • I agree. Massively multi-core may not happen for a while yet. There would have to be some breakthroughs in photolithography and electrical engineering that allow the production of much smaller gates, so each CPU takes up much less space (say 10x less), allowing massive numbers of CPUs on a reasonable-size die. IIRC, Sun has now placed 4 SPARC CPUs on a single chip. And you can get two Xeons on 1 chip now.
    • ...three-phase plug you'll need to install before powering the damn thing up, and that - what with the cooling fans and all - not even a Canadian owning one of these would ever see snow in his yard.
  • It whirs and clicks and sputters...

    Finally, the following cryptic message mysteriously appears on its console:

    42

  • by geomon ( 78680 ) on Wednesday July 28, 2004 @10:56AM (#9822518) Homepage Journal
    I'm glad to see that SGI has regained its legs and is back in the high-end computing market again. The gamble they made in embracing Linux has paid off. Other folks had counted them dead because they came to the WinNT game late and were, therefore, fated to be high-priced integrators. Their days were numbered by the low-end market forces like Dell and HP.

    Now we see that there is a market for high-priced integrators as long as the underlying technology fits the market segment you target.

    • by arth1 ( 260657 ) on Wednesday July 28, 2004 @01:20PM (#9823304) Homepage Journal
      I'm glad to see that SGI has regained its legs and is back in the high-end computing market again. The gamble they made in embracing Linux has paid off.

      What few people seem to know (and appreciate) is that SGI has been one of the major contributors to Linux over the last few years. Not only XFS, but lots of commands, utilities and system functions have been enhanced, based on IRIX code. This has been a significant boost to Linux, and it's only fair that SGI reaps some benefits.
      I wish SGI and its employees the best of luck!

      Regards,
      --
      *Art
      • What few people seem to know (and appreciate) is that SGI has been one of the major contributors to Linux over the last few years...

        It wasn't overlooked by SCO. They were looking seriously at SGI as lawsuit meat. If there is anything left of SCO after Baystar Capital has finished suing them, maybe they will add SGI to their "Most Wanted List".

        Which should (hopefully) have the same impact as the Daimler-Chrysler suit.

  • by Saeed al-Sahaf ( 665390 ) on Wednesday July 28, 2004 @10:57AM (#9822529) Homepage
    Doesn't NASA know that Linux is a national security threat [slashdot.org]??? And a 10240-CPU cluster, no less? Don't they know that such concentrated evil will create a singularity? This could be the end of our civilization.
  • Finally someone can beta-test the new Longhorn minimal configuration. See benchmark test results up at www.nasa.gov/longhorn.html
  • by cdc179 ( 561916 ) on Wednesday July 28, 2004 @11:30AM (#9822698)
    I have always liked SGI hardware. And congratulations are in order for getting a single Linux kernel running across 512 CPUs.

    In SGI's press release, they state that they hope to get the top spot on the Top500 list. As all know, IBM is expecting Blue Gene http://www.research.ibm.com/bluegene/ to take the top spot in 2005.

    It looks like SGI's architecture for the Altix is better than Blue Gene's, but 10,240 Intel CPUs are just going to be outpowered by the 65,536 PowerPC CPUs in Blue Gene.

    Now the ultimate machine would have SGI's architecture (memory and number of CPUs per node) using the PowerPC CPU. We know that IBM and SGI would never collaborate on something like this, but can't a geek dream?

    More Blue Gene specs: http://sc-2002.org/paperpdfs/pap.pap207.pdf
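
    A rough sanity check of the "outpowered" claim: peak FLOPS is approximately CPUs x clock x FLOPs per cycle. A small sketch, where the clocks and FLOPs-per-cycle values are ballpark assumptions for these chips rather than official specs:

      #include <stdio.h>

      /* Peak TFLOPS estimate: cpus x GHz x FLOPs/cycle / 1000.
       * The per-chip figures below are ballpark assumptions. */
      static double peak_tflops(long cpus, double ghz, int flops_per_cycle)
      {
          return cpus * ghz * flops_per_cycle / 1000.0;
      }

      int main(void)
      {
          /* Itanium 2: ~1.5 GHz, 4 FLOPs/cycle (two fused multiply-adds). */
          printf("Altix cluster: ~%.0f TFLOPS peak\n",
                 peak_tflops(10240, 1.5, 4));

          /* Blue Gene PPC440: ~0.7 GHz, 4 FLOPs/cycle via its double FPU. */
          printf("Blue Gene:     ~%.0f TFLOPS peak\n",
                 peak_tflops(65536, 0.7, 4));
          return 0;
      }

    Under those assumptions the Itanium cluster lands around 60 TFLOPS peak and Blue Gene around 180, so raw CPU count does the talking even at a much lower clock.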
    • Now the ultimate machine would have SGI's architecture (memory and number of CPUs per node) using the PowerPC CPU

      If we're going for ultimate, I'd rather use the Power line of chips than the PowerPC. The PowerPC is a good chip, but the Power line is an entirely new level. : )

      steve
  • ... was reading too much Slashdot.

    I knew nothing good could come of all those Beowulf cluster ideas!
    • You do realize that NASA came up with the Beowulf cluster, right? Primarily the work of Donald Becker, the same guy who wrote a lot of the Linux network card drivers.
  • I mean, it's an operating system like any other, right? And no, it isn't for the canned reasons like instability, insecurity, or other "Windows is teh sux0r" gibberings.

    Fact is, these research and hardware people don't have to negotiate a license with anybody, and don't have to wait for the proper "10240 Processor Edition Platinum Plus Edition XP" of the OS to be finished by the developer, because Linux, in its free nature, allows them to add all the necessary capabilities (and remove unnecessary ones) themselves.
  • I've seen tons of stories about using Linux machines to create insanely large clusters and creating supercomputers and all that. I don't pretend to understand how any of that works but it fascinates the hell out of me.

    What is it about Linux that makes it more possible than MS Windows?

    It's a general tech question I'm hoping someone can answer without getting all religious about it.
    • Linux has, for quite some time, had a lot of effort put into support for machines other than x86 - which implies 64-bitness, more processors, NUMA, and all of those goodies.

      Windows, on the other hand, has (with a few exceptions, now gone) been tied to the x86 architecture - meaning there's never even been a need to support more than a relatively small number of processors, and not even the possibility (until recently) of 64-bitness. (Yes, there was NT for the Alpha. It's gone.)

      With Linux, there are people...
      • Windows NT is internally 64-bit. The Windows APIs on x86 are downported to 32-bit.

        (The internal interfaces are all 64-bit, but that is not the same as the Windows APIs that are 32-bit.)
  • Artificial Development [ad.com] is developing a simulation of the human cortex. They could use this thing. They need all the processing power they can get their hands on.
  • by lewp ( 95638 )
    *starts checking Fatwallet for hot deals on new supercomputer*
  • for the next version of Longhorn and MS Office 2008, that's all...
  • I think we just slashdotted Silicon Graphics.
  • Remember when the SGI 512-processor story came out and all those "imagine a Beowulf cluster of these" jokes were modded down?

    Now we have to go back and mod them insightful.
