
Debian Cluster Replaces Supercomputer For Weather Forecasting

Posted by Soulskill
from the when-it-rains-it-pours dept.
wazza brings us a story about the Philippine government's weather service (PAGASA), which has recently used an eight-PC Debian cluster to replace an SGI supercomputer. The system processes data from local sources and the Global Telecommunication System, and it has reduced monthly operational costs by a factor of 20. Quoting: "'We tried several Linux flavours, including Red Hat, Mandrake, Fedora etc,' said Alan Pineda, head of ICT and flood forecasting at PAGASA. 'It doesn't make a dent in our budget; it's very negligible.' Pineda said PAGASA also wanted to implement a system which is very scalable. All of the equipment used for PICWIN's data gathering comes off-the-shelf, including laptops and mobile phones to transmit weather data such as temperature, humidity, rainfall, cloud formation and atmospheric pressure from field stations via SMS into PAGASA's central database."
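
As a rough illustration of the data-gathering pipeline described above (field stations texting readings into a central database), here is a minimal sketch in Python. PICWIN's actual SMS format and database schema are not published; the message layout, field names, and table used here are invented for illustration only.

    # Illustrative only: the real PICWIN SMS format and schema are not published.
    # Assumed message layout: "STN=QUEZON;TEMP=31.2;HUM=78;RAIN=4.5;PRES=1008.2"
    import sqlite3
    from datetime import datetime, timezone

    FIELDS = {"TEMP": "temperature_c", "HUM": "humidity_pct",
              "RAIN": "rainfall_mm", "PRES": "pressure_hpa"}

    def parse_report(sms_text):
        """Turn one station's SMS into a dict of readings."""
        parts = dict(item.split("=", 1) for item in sms_text.split(";"))
        record = {"station": parts.pop("STN"),
                  "received_utc": datetime.now(timezone.utc).isoformat()}
        for key, column in FIELDS.items():
            if key in parts:
                record[column] = float(parts[key])
        return record

    def store_report(conn, record):
        """Insert a parsed reading into a central observations table."""
        conn.execute(
            "INSERT INTO observations (station, received_utc, temperature_c,"
            " humidity_pct, rainfall_mm, pressure_hpa) VALUES (?, ?, ?, ?, ?, ?)",
            (record["station"], record["received_utc"],
             record.get("temperature_c"), record.get("humidity_pct"),
             record.get("rainfall_mm"), record.get("pressure_hpa")))
        conn.commit()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE observations (station TEXT, received_utc TEXT,"
                     " temperature_c REAL, humidity_pct REAL,"
                     " rainfall_mm REAL, pressure_hpa REAL)")
        store_report(conn, parse_report("STN=QUEZON;TEMP=31.2;HUM=78;RAIN=4.5;PRES=1008.2"))
        print(conn.execute("SELECT * FROM observations").fetchall())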
  • Re:Debian? (Score:5, Informative)

    by TheWanderingHermit (513872) on Friday March 14, 2008 @02:20AM (#22748510)
    Actually, Debian is intended for servers and runs on more architectures than any other distro. The whole reason for the long testing cycle on Debian is to make sure it's as stable as possible so it can sit on a server and need little or no attention for days, weeks, or even months at a time.

    I hated Debian at first because it wasn't friendly, but I looked into it more and realized it was the best choice I could make for my production servers. I can set them up and check once a week or so and they're still chugging along without need of intervention.

    I wouldn't use Debian on my desktop (I use Kubuntu), but it can't be beat for servers.

    It's NOT a desktop distro. Especially compared to Mandr* or Ubuntu or many others out there.
  • by TheWanderingHermit (513872) on Friday March 14, 2008 @02:25AM (#22748534)
    Sure. Add in paying for tech support or the cost in man-hours it takes to keep it running. Both can make a serious dent where nobody expected to see one.
  • by Anonymous Coward on Friday March 14, 2008 @02:35AM (#22748562)
    I'm sure that if they can manage an ancient SGI supercomputer, they can easily manage Debian. I've been using it since woody, and I must say, compared to many other distros, Debian is easy to manage. Not only that, its reliability is second to none on the Linux platform. I have a machine that's been running the same Debian install since the days of woody, all up-to-date with Etch. Not a single problem with it, runs a lot better than an XP desktop I have, which has needed 2 reinstalls in the past year, or Gentoo, which frequently breaks when packages fail to compile.
  • by elysium-os (998821) <marcel.koopmans@elysium-os.nl> on Friday March 14, 2008 @02:43AM (#22748584) Homepage
    Many distros add kernel patches and different drivers to the initrd.
    Also, the core OS (the most minimal installation) has many different tools and libs.

    Also, at time of release they can pick from many different versions of a single package.
    That, in combination with the GCC version and compile flags used, can and does make a huge difference.

    And at least with Debian you really do know how the system was built; with RedHat I still wonder...

    Marcel
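
The point above about knowing how the system was built can be checked directly on a Debian box: dpkg-buildflags prints the distribution's default compiler flags, and apt-get source fetches a package's exact patched source. A small sketch, assuming dpkg-dev is installed and a deb-src line is configured; the package name is arbitrary:

    # Sketch: inspect Debian's default build flags and fetch a package's exact
    # patched source. Assumes dpkg-dev and a deb-src entry in sources.list.
    import subprocess

    def default_build_flags():
        """Return the distro-default compiler flags as a dict (e.g. CFLAGS, LDFLAGS)."""
        out = subprocess.run(["dpkg-buildflags"], capture_output=True, text=True, check=True)
        return dict(line.split("=", 1) for line in out.stdout.splitlines() if "=" in line)

    def fetch_patched_source(package):
        """Download the source tree exactly as Debian builds it (upstream plus patches)."""
        subprocess.run(["apt-get", "source", package], check=True)

    if __name__ == "__main__":
        for flag, value in default_build_flags().items():
            print(flag, "=", value)
        # fetch_patched_source("coreutils")  # uncomment on a machine with deb-src enabled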
  • by Zantetsuken (935350) on Friday March 14, 2008 @02:46AM (#22748592) Homepage
    Because each major distro, while using the same base kernel, GNU command-line tools, and the same GNOME/KDE environments, can ship radically different kernel extensions and drivers, implemented by one distro's developers but not another's. If you're using whatever GUI tools a distro provides, each can configure the same backend very differently, and depending on how the tool writes the config file that can also affect stability, security, and other functions. Also, Fedora/RHEL tend to use tools created or modified by Red Hat specifically, while those aren't easily available for Debian or SuSE, which have their own tools in the same way.
  • Re:Debian? (Score:5, Informative)

    by Thijssss (655388) on Friday March 14, 2008 @03:05AM (#22748654)
    Debian works out just fine for these kinds of tasks. Here in the Netherlands the national compute cluster Lisa runs on Debian (http://www.sara.nl/userinfo/lisa/description/index.html) with roughly 800 to 1000 nodes (I think the page needs updating by now).
  • Re:Right... (Score:2, Informative)

    by aquarajustin (1070708) on Friday March 14, 2008 @03:12AM (#22748676)
    All I can say is that I enjoy running Debian servers and RHEL clients at my work... and you're a douche...
  • by Xero_One (803051) on Friday March 14, 2008 @03:30AM (#22748724)
    Debian will run multiple services reliably under heavy load. From my limited experience, it's one of those distros where you "Set It And Forget It" and that's that.

    Once you've got it configured correctly, the way you want it, there's little intervention involved in maintaining it. It'll just keep chugging along. The key word there is "correctly": follow the readmes, howtos, and best practices, and you're golden.

    It's also one of the oldest distributions, and it has always kept to the spirit of GNU/Linux in general: community development and enrichment. Debian developers pride themselves on that spirit - making the best software for humans. (At least that's what I gather from hanging out with Debian folk.) These people are not only passionate about the software they write, they do it humbly, without wanting anything in return. To them, the reward is in other people using their software and loving it! In my opinion they're not recognized enough.

    But what do I know? I just use the software.
  • Re:Debian? (Score:5, Informative)

    by Bronster (13157) <slashdot@brong.net> on Friday March 14, 2008 @03:41AM (#22748778) Homepage
    > The binary package management really says it all... you shouldn't be running anything but compiled source on a performance cluster.

    Wow - how many performance clusters do you run again?

    Not that I run a "performance cluster" as such - but I do run a bunch of machines that are very busy, all on Debian.

    You know what? We compile the couple of programs where CPU is the bottleneck from source. We also compile Cyrus IMAP from source because we apply a pile of patches, but if someone else were packaging up all those patches upstream, I'd be happy for it to be compiled there. Disk IO is the issue with Cyrus, and a custom compile won't help with that.

    Yeah, we build our own kernels as well - that's another point where it's worth the effort to customise.

    /bin/ls though? I don't think it matters to anyone on a high performance cluster. Just so long as the cluster apps are optimised, the rest is just noise - better to have a system that's less work for your administrators so they can concentrate on what's important.
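
The "optimise only the bottleneck" point lends itself to a quick check before anyone bothers with custom builds: measure how much of a job's wall-clock time is actually spent on the CPU. A rough sketch, where the workload function is just a stand-in for a real cluster job:

    # Sketch: decide whether a job is CPU-bound (a tuned rebuild might help)
    # or IO-bound (it won't). The workload below is a placeholder.
    import time

    def run_workload():
        # Stand-in for the real cluster job.
        return sum(i * i for i in range(2_000_000))

    def cpu_fraction(job):
        """Fraction of wall-clock time the job spent on the CPU in this process."""
        wall_start, cpu_start = time.perf_counter(), time.process_time()
        job()
        wall = time.perf_counter() - wall_start
        cpu = time.process_time() - cpu_start
        return cpu / wall if wall > 0 else 0.0

    if __name__ == "__main__":
        frac = cpu_fraction(run_workload)
        if frac > 0.8:
            print(f"~{frac:.0%} on CPU: rebuilding the hot code with tuned flags may pay off")
        else:
            print(f"~{frac:.0%} on CPU: likely IO-bound, a custom compile won't help much")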
  • by prefect42 (141309) on Friday March 14, 2008 @04:39AM (#22748962)
    You don't have to wonder with RedHat. Just look at the SRPMs and see what patches they've applied.
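
Looking at the SRPMs is itself scriptable. A small sketch that lists the patch files shipped inside a source RPM; the filename used here is hypothetical:

    # Sketch: list the patch files inside a source RPM, to see what the vendor
    # changed relative to upstream. The SRPM filename below is hypothetical.
    import subprocess

    def patches_in_srpm(srpm_path):
        """Return the .patch/.diff files listed inside a source RPM."""
        out = subprocess.run(["rpm", "-qpl", srpm_path],
                             capture_output=True, text=True, check=True)
        return [name for name in out.stdout.splitlines()
                if name.endswith((".patch", ".diff"))]

    if __name__ == "__main__":
        for patch in patches_in_srpm("bash-3.0-19.src.rpm"):  # hypothetical filename
            print(patch)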
  • Re:Scalable ? (Score:3, Informative)

    by onebuttonmouse (733011) <obm@stocksy.co.uk> on Friday March 14, 2008 @04:59AM (#22749034) Homepage
    Perhaps they mean it is scalable in the sense that one could simply add more machines to the cluster, rather than adding more cores to the machines already in place.
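
That reading of "scalable" is the usual one for clusters: the work gets divided across however many machines are present, so adding a node shrinks each node's share. A toy sketch of that kind of horizontal partitioning; the grid size and node counts are invented:

    # Toy sketch of horizontal scaling: a forecast grid is divided among however
    # many nodes the cluster currently has, so adding nodes reduces per-node work.
    def partition_rows(total_rows, num_nodes):
        """Assign contiguous row ranges of the grid to each node, as evenly as possible."""
        base, extra = divmod(total_rows, num_nodes)
        ranges, start = [], 0
        for node in range(num_nodes):
            count = base + (1 if node < extra else 0)
            ranges.append((node, start, start + count))
            start += count
        return ranges

    if __name__ == "__main__":
        GRID_ROWS = 1200  # invented grid size
        for nodes in (8, 16):  # e.g. an eight-PC cluster, then a doubled one
            shares = partition_rows(GRID_ROWS, nodes)
            biggest = max(end - start for _, start, end in shares)
            print(f"{nodes} nodes -> at most {biggest} rows per node")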
  • Re:Debian? (Score:3, Informative)

    by EvilIdler (21087) on Friday March 14, 2008 @06:11AM (#22749282)
    Nope, it's one big, happy family :)
  • Re:Debian? (Score:3, Informative)

    by chthon (580889) on Friday March 14, 2008 @06:12AM (#22749284) Homepage Journal

    Every time I want to install Ubuntu on some random machine, it fails. I always have to go back to Debian.

    I currently have Debian installed for my father and my sister. It spares me the headaches of Windows problems. The only support I need to give them is information on how to perform tasks.

  • by IkeTo (27776) on Friday March 14, 2008 @07:27AM (#22749474)
    This is inaccurate; as a long-time Debian user I really cannot resist correcting them.

    > Debian "unstable" Sid is upgraded every day, or at least several times per week.

    True.

    > Debian "testing" is upgraded several times a month.

    Wrong. Debian testing is updated automatically from packages in Debian unstable. The difference is simply that a package has to sit in Debian unstable for a few days, with no significant bugs introduced by the new version, before it moves over. Since the process is automatic, Debian testing is updated only slightly less continuously than unstable (it depends on a robot checking package dependencies and bug reports rather than on the maintainer uploading a new version).

    The only time the update rate drops much below that is when testing is in deep freeze, i.e., when a new stable release is about to be created.

    > Debian "stable" is upgraded every one or two years.

    It usually takes slightly longer than two years.

    > The only one I have avoided is "Debian experimental"... :)

    You cannot have a pure "Debian experimental" system. Debian experimental contains subsystems that could have a profound effect on the rest of the system, and so are provided for trial in isolation. E.g., back in the GNOME 1 to GNOME 2 transition days, or the XFree86 3 to XFree86 4 days, those subsystems were tested in experimental before moving to unstable. These packages are meant to be used on top of, or to replace, some unstable packages. Since each affects one particular subsystem, experienced testers can try a particular one based on their needs.
  • Re:Right... (Score:2, Informative)

    by cloakable (885764) on Friday March 14, 2008 @09:33AM (#22750250)
    Um, your unfamiliarity with Debian is painfully showing. apt-get update doesn't destroy servers. apt-get upgrade might, if you're running testing or unstable. I'd recommend you use neither for production servers, and stick with stable.

    And consistency? Like how the entire Debian repository is cross checked every day to ensure consistency?

    I'm also intrigued by your reference to updates destroying servers. Do you get this behaviour with Red Hat? Makes me glad I'm not using it, then.

    And managing an arbitrary number of nodes under Debian is easy with clusterssh.
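
clusterssh is interactive (it mirrors keystrokes to one terminal per host); the same routine chore can also be scripted. A rough sketch that dry-runs an upgrade on each node over ssh - the hostnames are invented, and it assumes key-based ssh access with sufficient privileges:

    # Sketch: check every node of a small cluster for pending updates over ssh.
    # Hostnames are invented; unreachable hosts simply report zero packages here.
    import subprocess

    NODES = [f"node{i:02d}.example.org" for i in range(1, 9)]  # hypothetical 8-PC cluster

    def pending_upgrades(host):
        """Refresh package lists and dry-run an upgrade (-s simulates, changes nothing)."""
        cmd = ["ssh", host, "apt-get update -qq && apt-get -s upgrade"]
        result = subprocess.run(cmd, capture_output=True, text=True)
        return [line for line in result.stdout.splitlines() if line.startswith("Inst ")]

    if __name__ == "__main__":
        for host in NODES:
            upgrades = pending_upgrades(host)
            print(f"{host}: {len(upgrades)} package(s) would be upgraded")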
  • by dwater (72834) on Friday March 14, 2008 @09:37AM (#22750300)

    Also, their supercomputer may just be outdated, not necessarily because of bloated software. I don't know how well SGI's products and support survived their recent bankruptcy, but I'd imagine not too well (though they seem to have built the Xeon-based #3 from the Top 500 recently).
    AFAIK, SGI still supports IRIX computers, but it doesn't make new ones - hasn't for several years. They do Linux of course - looks [sgi.com] like you can have SuSE or Red Hat - and they also do some Microsoft thing too. I'm sure SGI could have serviced this customer, but the customer clearly didn't want to pay for support. You get what you pay for, and SGI still have very smart/clever people....
  • Re:Debian? (Score:3, Informative)

    by street struttin' (1249972) on Friday March 14, 2008 @09:51AM (#22750450)

    I agree. I used to run some clusters for the UCLA Chemistry department, and the only real customization we did was to install a custom kernel in Red Hat 9 to handle the huge amount of memory we had installed. And even that wasn't in all the clusters. But yeah, the code the clusters were actually running was pretty much always compiled by hand.

  • by fm6 (162816) on Friday March 14, 2008 @12:24PM (#22752050) Homepage Journal
    It's not news that an old system was replaced by a new system. It is interesting that an old supercomputer wasn't replaced by a new supercomputer; a cluster of cheap commodity systems does the job just as well when you don't need real-time performance. This sort of creative use of PCs is what drove SGI into bankruptcy and irrelevance [sgi.com].

    This Philippine newspaper story [manilatimes.net] fills in some important details missing from the Australian PC News article: the age of the SGI system (10 years) and the reason it was costing so much to run (expensive to get application support for IRIX [wikipedia.org], an OS that hasn't had a major update in the same 10-year period).

    This last issue is what really killed the SGI system: not its age (these big installations are often around for decades), but the fact that only a few people are working on SGI platforms any more, and those that do can command premium prices. If the system had been from Sun, HP, or IBM, or any company with an OS still under active development, it might have been cost effective to keep it in place. This is particularly relevant on Slashdot, where we're always hearing from folks who just don't understand why there isn't better application support for their favorite platform.

    I'm still curious as to what specific SGI system got junked. Best guess: a low-end Origin [tamu.edu].
  • by emag (4640) <slashdot AT gurski DOT org> on Friday March 14, 2008 @02:52PM (#22753618) Homepage
    Well, one of the things I'm running into, having walked into a RHEL-heavy shop, is that every single RH box has what I've come to derisively refer to as /usr/local/hell. Every. Single. One. Basically, because of the extreme pain of upgrading (or others' laziness, though my limited experience in the distant past says it's mostly the former), we have RHEL3, RHEL4, and RHEL5 servers, all at whatever the then-current patchlevel was (AS, ES, WS, BS, XYZS, ASS...Taroon, Macaroon, Buffoon...you get the idea), that have almost everything important duplicated in /usr/local, built from tarballs. Can't remove the system-supplied stuff, since what's left that expects it will balk, but can't use it either, since users or security concerns dictate significantly newer releases of everything from Apache to perl to php to mysql to...

    This is, of course, a nightmare. Worse, the kernels on all of these have the notorious RH patchsets, so as far as anyone knows, each and every one has a mish-mash of backported "features" from newer kernels, but few of the bug fixes that those newer ones have. In fact, several are still at 2.4.x kernels that, even years later, suffer from a kswapd problem that makes the machines unusable after a while. And we're getting in newer hardware that the 2.6 kernels that ARE supplied don't support. Everyone here has given up trying to build a plain vanilla kernel from the latest sources, because there are so many RH-applied patches to their kernels, which may or may not even be applicable to the vanilla Linus kernels, that no one can say with any degree of certainty that the system will even boot. With Debian, I gave up on vanilla kernels because I was just tired of sitting through recompiles for the "advantage" of removing a few modules I knew I would never use, customized to each of a half-dozen machines.

    With Debian, which I've run for years without a "reinstall", updates are significantly simpler to perform, and if you want to throw backports.org into your sources.list (which may or may not be a fair thing to do), even *stable* has 2.6.22, or 2.6.18 without bpo in the mix. Remote updates? No problem, Debian was *designed* to be updatable like that from the start. The dpkg/apt* tools are still worlds ahead of the RH (and SUSE) equivalents. Dependencies are easier to deal with, as are conflicts, and security.d.o seems to be a lot more on the ball about patches than anyone else.

    In fact, I'm often telling our security guys about updates that fix vulnerabilities that their higher-up counterparts haven't started riding them about yet, so they can get a head start on going through the major hassle of begging/cajoling/threatening the RH admins to grab the latest sources, rebuild, and re-install so we don't get slammed on the next security scan for having vulnerable servers. Not that it's ever "the latest", but always "the least amount of upgrade needed to avoid being flagged", which means that next month we go through this all again. With Debian, "aptitude update ; aptitude upgrade" (or just "aptitude"/whatever and then manually select packages, though in stable, it's only going to be fixes anyway most of the time), and the handful of systems I've gotten in under the radar are up-to-date security-wise with significantly less effort.

    Even the "you get what you pay for in terms of support" canard has been proven to be false. We had a couple brand-new freshly-patched RHEL5 systems that just would not stay up. First thing RH Support has us do is run some custom tool of theirs before they'll even attempt to help us. A tool that, I should add, is rather difficult to run on a machine that goes down within minutes of coming up. Finally we re-installed and didn't apply the RH-sanctioned updates. Machine...stays up. Same thing with some RH-specific clustering software. Another RH-approved release resulted in...no clustering at all. For whatever reason, re-installing the prior RPM wasn't possible, but the
  • by IkeTo (27776) on Saturday March 15, 2008 @11:07AM (#22759554)
    > I don't think the process of migrating packages from unstable to testing is quite as
    > automatic as you describe. At least, the most important packages (like linux, gcc, glibc,
    > dpkg, python, xorg, gnome, kde, ...) don't migrate automatically. These transitions are
    > made only when the maintainers think they're ready to be included in the next stable
    > debian release and when they're sure that they don't break anything.

    The process is automatic. There is even a script to tell you why a particular package in unstable is not yet in testing (see http://bjorn.haxx.se/debian/testing.pl?package=firefox [bjorn.haxx.se]). The following description is from Debian (http://www.debian.org/devel/testing):

    > The "testing" distribution is an automatically generated distribution. It is generated
    > from the "unstable" distribution by a set of scripts which attempt to move over packages
    > which are reasonably likely to lack release-critical bugs. They do so in a way that
    > ensures that dependencies of other packages in testing are always satisfiable.

    Given the rule that a new upload cannot break dependencies before entering testing, it is natural that, unless there is some "manual pushing", major updates to these heavily depended-upon packages almost never happen automatically. The manual manipulation is done to let (force?) them into testing anyway, even though some other package may become uninstallable.
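
The quoted rules can be sketched in a few lines of Python. This is a deliberately simplified model of the kind of check the migration scripts perform - the real scripts also handle upload urgencies, per-architecture builds, release-team hints and more, and the data structures here are invented:

    # Simplified model of unstable -> testing migration, per the quoted rules:
    # a package must have aged in unstable, carry no release-critical bugs, and
    # not leave any testing package with unsatisfiable dependencies.
    MIN_AGE_DAYS = 10  # illustrative; the real delay varies with upload urgency

    def can_migrate(pkg, unstable, testing, rc_bugs):
        """unstable/testing map package name -> {'age': days, 'depends': [names]}."""
        candidate = unstable[pkg]
        if candidate["age"] < MIN_AGE_DAYS:
            return False, "too young"
        if rc_bugs.get(pkg):
            return False, "release-critical bugs open"
        # Would testing stay consistent if this package moved over?
        hypothetical = dict(testing)
        hypothetical[pkg] = candidate
        for name, info in hypothetical.items():
            missing = [dep for dep in info["depends"] if dep not in hypothetical]
            if missing:
                return False, f"would leave {name} with unsatisfied deps: {missing}"
        return True, "ok"

    if __name__ == "__main__":
        unstable = {"foo": {"age": 12, "depends": ["libbar"]}}
        testing = {"libbar": {"age": 90, "depends": []}}
        print(can_migrate("foo", unstable, testing, rc_bugs={}))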
