Debian Cluster Replaces Supercomputer For Weather Forecasting

wazza brings us a story about the Philippine government's weather service (PAGASA), which has recently used an eight-PC Debian cluster to replace an SGI supercomputer. The system processes data from local sources and the Global Telecommunication System, and it has reduced monthly operational costs by a factor of 20. Quoting: "'We tried several Linux flavours, including Red Hat, Mandrake, Fedora etc,' said Alan Pineda, head of ICT and flood forecasting at PAGASA. 'It doesn't make a dent in our budget; it's very negligible.' Pineda said PAGASA also wanted to implement a system which is very scalable. All of the equipment used for PICWIN's data gathering comes off-the-shelf, including laptops and mobile phones to transmit weather data such as temperature, humidity, rainfall, cloud formation and atmospheric pressure from field stations via SMS into PAGASA's central database."
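The article doesn't describe PICWIN's message format, but the ingest step the summary sketches, field stations texting readings into a central database, is easy to picture. Below is a minimal sketch in Python; the message format, station name, and table layout are all invented for illustration and are not PAGASA's.

    # Hypothetical SMS-ingest sketch. The 'STATION;KEY=VALUE;...' format
    # and the schema are made up; the article gives no such details.
    import sqlite3

    def parse_report(sms):
        """Parse e.g. 'SUBIC;TEMP=28.5;HUM=83;RAIN=12.0;PRES=1008.2'."""
        station, *fields = sms.strip().split(";")
        reading = {"station": station}
        for field in fields:
            key, value = field.split("=")
            reading[key.lower()] = float(value)
        return reading

    db = sqlite3.connect(":memory:")  # stand-in for the central database
    db.execute("CREATE TABLE obs (station TEXT, temp REAL, hum REAL,"
               " rain REAL, pres REAL)")

    r = parse_report("SUBIC;TEMP=28.5;HUM=83;RAIN=12.0;PRES=1008.2")
    db.execute("INSERT INTO obs VALUES (?, ?, ?, ?, ?)",
               (r["station"], r["temp"], r["hum"], r["rain"], r["pres"]))
    print(db.execute("SELECT * FROM obs").fetchall())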
Comments:
  • by pembo13 ( 770295 ) on Friday March 14, 2008 @02:32AM (#22748554) Homepage
    How different can Debian really be from Red Hat in terms of stability? They both use the Linux kernel and GNU tools, and follow the LSB, no?
  • Re:Debian? (Score:4, Interesting)

    by rucs_hack ( 784150 ) on Friday March 14, 2008 @02:53AM (#22748612)
    Actually I don't like Debian much as a desktop OS, but I love it as a number-crunching OS. I've had a 10-machine openMosix cluster going for several years now, problem free.

    Stability is a major selling point of Debian, and in my experience that reputation is well deserved.
  • by dhavleak ( 912889 ) on Friday March 14, 2008 @02:55AM (#22748628)

    What was the age and the specs of the SGI being replaced?

    Going by Moore's law, a factor of 20 performance improvement takes about 6 to 8 years (a quick back-of-the-envelope check follows below). If the SGI was at least that old, this isn't news -- it's just the state of the art these days. In other words, small clusters capable of weather forecasting are relatively run-of-the-mill.

    Of course, props to Linux for being the enabler in this case.
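For what it's worth, the parent's arithmetic holds up, assuming the commonly quoted 18-24 month doubling period for Moore's law:

    import math

    # A factor-of-20 speedup needs log2(20), about 4.32, doublings.
    doublings = math.log2(20)
    for months in (18, 24):  # assumed Moore's law doubling periods
        print(f"{months}-month doubling: {doublings * months / 12:.1f} years")
    # Prints roughly 6.5 and 8.6 years -- the 6 to 8 years claimed above.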

  • by toby34a ( 944439 ) on Friday March 14, 2008 @03:03AM (#22748646)
    Most weather prediction centers have adapted their forecast models to run on Linux clusters. By running an operational forecast model on a cluster, it's easy for forecasters to scale the models so that they can be run (albeit slowly) on desktop machines, and the models can be worked on by real meteorologists (versus IT professionals). At my university, we use a large cluster running Red Hat Enterprise Linux, scaling the models across multiple processors with the MPICH compiler wrappers and batch jobs (a minimal sketch of this kind of decomposition follows below). Really, using a Debian cluster is no different than using a Red Hat cluster. My colleague has access to the NOAA machine, which has more processors than you can shake a stick at... he talks about code that takes 3 days to run on his personal workstation and 2 minutes on 40 processors. With the relatively low cost of a Linux cluster, forecasting models can be run quickly and efficiently on numerous processors at a local level. And because a Linux machine is easier to work with than some of the old supercomputers, the power to change the model and improve forecasts ends up in the meteorologists' hands.
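For readers unfamiliar with how forecast models are spread over a cluster, here is a minimal domain-decomposition sketch using the mpi4py bindings: each MPI rank owns a band of grid rows and steps it forward. The grid size and "physics" are placeholders; this is not any real model's code.

    # Run with e.g. "mpirun -n 4 python sketch.py".
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    NLAT, NLON = 240, 360                      # hypothetical global grid
    rows = np.array_split(np.arange(NLAT), size)[rank]
    local = np.ones((len(rows), NLON))         # this rank's latitude band

    for _ in range(10):
        local *= 0.99                          # placeholder "physics"
        # A real model would exchange halo rows with neighbouring ranks
        # here (e.g. with comm.Sendrecv); omitted for brevity.

    bands = comm.gather(local, root=0)         # reassemble on rank 0
    if rank == 0:
        print("assembled grid:", np.vstack(bands).shape)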
  • by terremoto ( 679350 ) on Friday March 14, 2008 @05:01AM (#22749038)
    From TFA:
    > What was even surprising to us is that Intel FORTRAN is also free of charge ...
    I bet Intel are surprised too. Their compilers are free of charge only for non-commercial use, and the people at the Philippine government's official weather service are hardly "not getting compensated in any form": http://www.intel.com/cd/software/products/asmo-na/eng/219771.htm [intel.com]
  • Re:Debian? (Score:5, Interesting)

    by Simon Brooke ( 45012 ) <stillyet@googlemail.com> on Friday March 14, 2008 @05:16AM (#22749096) Homepage Journal

    > Why Debian? A desktop distro? That's got to be one of the least scalable and cluster-friendly distros. If they would invest a little to set things up properly they could get a lot more performance out of their machines.

    Debian isn't - and never has been - a desktop distro. If you want a desktop distro built on the Debian architecture, you use Ubuntu, or Knoppix, or one of a dozen others. Debian's unique selling proposition is its combination of stability, which is very important for production servers, and a rigorous commitment to free software. Packages don't make it into Debian stable until they have been thoroughly tested. Debian also has the best package management system in the industry (a quick illustration follows after this comment).

    Frankly, I wouldn't run a server with anything else.
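As a small illustration of the parent's package-management point: Debian's APT database can be queried programmatically through the python-apt bindings. A read-only sketch, assuming python-apt is installed:

    import apt  # Debian's python-apt bindings

    cache = apt.Cache()
    pkg = cache["gcc"]                  # any package name will do
    print(pkg.name, "installed:", pkg.is_installed)
    if pkg.candidate is not None:
        print("candidate version:", pkg.candidate.version)
        print("archives:", [o.archive for o in pkg.candidate.origins])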

  • by Anonymous Coward on Friday March 14, 2008 @07:32AM (#22749486)
    The data is submitted by the owners, but it's not like anyone has any reason to lie about what they're running on their cluster.

    SUSE and Red Hat EL are the two Linux distributions you'd expect to see on a Top500 cluster. Their administrators can be sure of good support from Novell or Red Hat, and there's little advantage in using anything else. Needless to say, 8 computers is not a Top500 cluster. I have test clusters that are powered off right now that have more than 8 nodes.
  • by Respect_my_Authority ( 967217 ) on Friday March 14, 2008 @10:02AM (#22750532)

    > Debian "testing" is upgraded several times a month.

    Wrong. Debian testing is updated automatically from packages in Debian unstable. The difference is simply that a package has to sit in Debian unstable for a few days, and no significant bugs can be introduced by the new package, before it is updated. Since the process is automatic, Debian testing is updated just slightly less continuously as unstable (it depends on the robot to check the package dependencies and bug reports rather than the maintainer to upload a new version).

    There are robots among debian developers, eh? ;-)

    I don't think the process of migrating packages from unstable to testing is quite as automatic as you describe. At least, the most important packages (like linux, gcc, glibc, dpkg, python, xorg, gnome, kde, ...) don't migrate automatically. Those transitions happen only when the maintainers think the packages are ready for the next stable Debian release and are sure they don't break anything. (A toy sketch of the ordinary migration rules follows below.)

    > Debian "stable" is upgraded every one or two years.

    It usually takes slightly longer than two years.

    Yes, but haven't you noticed a definite change in the release timetables for etch and lenny? Etch was originally planned for release 18 months after sarge, although a four-month delay stretched that to 22 months. And lenny is now planned for release in September 2008, which would be 17 months after the etch release. To me it seems debian has lately adopted the goal of an 18-month release cycle, although slight alterations are always possible because debian stable is only released "when it's ready."
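To make the migration rules discussed above concrete, here is a toy model: a package migrates from unstable to testing once it has aged the required number of days for its upload urgency and has introduced no new release-critical bugs. The thresholds below are the commonly cited ones; Debian's real "britney" scripts also check dependency consistency and much more, so treat this as a cartoon, not a specification.

    # Toy model of unstable-to-testing migration (see discussion above).
    AGE_REQUIRED = {"low": 10, "medium": 5, "high": 2}  # days in unstable

    def may_migrate(days_in_unstable, new_rc_bugs, urgency="low"):
        """True if a package would ordinarily migrate to testing."""
        return (days_in_unstable >= AGE_REQUIRED[urgency]
                and new_rc_bugs == 0)

    print(may_migrate(12, 0))            # True: aged, no RC bugs
    print(may_migrate(12, 2))            # False: new RC bugs block it
    print(may_migrate(3, 0, "high"))     # True: urgent uploads age faster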
