Debian Cluster Replaces Supercomputer For Weather Forecasting
wazza brings us a story about the Philippine government's weather service (PAGASA), which has recently used an eight-PC Debian cluster to replace an SGI supercomputer. The system processes data from local sources and the Global Telecommunication System, and it has reduced monthly operational costs by a factor of 20. Quoting:
"'We tried several Linux flavours, including Red Hat, Mandrake, Fedora etc,' said Alan Pineda, head of ICT and flood forecasting at PAGASA. 'It doesn't make a dent in our budget; it's very negligible.' Pineda said PAGASA also wanted to implement a system which is very scalable. All of the equipment used for PICWIN's data gathering comes off-the-shelf, including laptops and mobile phones to transmit weather data such as temperature, humidity, rainfall, cloud formation and atmospheric pressure from field stations via SMS into PAGASA's central database."
Re:Debian? (Score:5, Informative)
I hated Debian at first because it wasn't friendly, but I looked into it more and realized it was the best choice I could make for my production servers. I can set them up, check on them once a week or so, and they're still chugging along without needing intervention.
I wouldn't use Debian on my desktop (I use Kubuntu), but it can't be beat for servers.
It's NOT a desktop distro. Especially compared to Mandr* or Ubuntu or many others out there.
Re:I don't understand the difference (Score:5, Informative)
Also, the core OS (the most minimal installation) ships many different tools and libs.
Also, at release time each distro can pick from many different versions of a single package.
That, in combination with the GCC version and compile flags used, can and does make a huge difference.
And at least with Debian you really do know how the system was built; with Red Hat I still wonder...
Marcel
Re:I don't understand the difference (Score:5, Informative)
Once you've got it configured correctly, there's little intervention involved in maintaining it. It'll just keep chugging along. The keyword there is "correctly". Follow the readmes, howtos, and best practices, and you're golden.
It's also one of the oldest distributions, and one that has always kept to the spirit of GNU/Linux in general: community development and enrichment. Debian developers pride themselves on that spirit, on making the best software for humans. (At least that's what I gather from hanging out with Debian folk.) These people are not only passionate about the software they write, they do it humbly, without wanting anything in return. To them, the reward is other people using their software and loving it! In my opinion they're not recognized enough.
But what do I know? I just use the software.
Re:Debian? (Score:5, Informative)
Wow - how many performance clusters do you run again?
Not that I run a "performance cluster" as such - but I do run a bunch of machines that are very busy, all on Debian.
You know what? We compile the couple of programs where CPU is the bottleneck from source. We also compile Cyrus IMAP from source because we apply a pile of patches, but if someone else was packaging up all those patches in upstream, I'd be happy for them to be compiled there. Disk IO is the issue with Cyrus, and a custom compile won't help with that.
Yeah, we build our own kernels as well - that's another point that's worth the effort to customise.
Re:Debian? (Score:3, Informative)
Every time I want to install Ubuntu on some random machine, it fails. I always have to go back to Debian.
I currently have Debian installed for my father and my sister. It spares me the headaches of Windows problems. The only support I need to give them is information on how to perform tasks.
Re:One thing always missing from such stories... (Score:5, Informative)
> Debian "unstable" Sid is upgraded every day, or at least several times per week.
True.
> Debian "testing" is upgraded several times a month.
Wrong. Debian testing is updated automatically from packages in Debian unstable. The difference is simply that a package has to sit in Debian unstable for a few days, with no significant bugs introduced by the new version, before it migrates. Since the process is automatic, Debian testing is updated only slightly less continuously than unstable (it depends on a robot checking the package's dependencies and bug reports rather than on the maintainer uploading a new version).
The only time the update rate really drops below that is when testing is in deep freeze, i.e., when a new stable release is about to be created.
> Debian "stable" is upgraded every one or two years.
It usually takes slightly longer than two years.
> The only one I have avoided is "Debian experimental"...
You cannot have a pure "Debian experimental" system. Debian experimental holds subsystems that could have a profound effect on the rest of the system, so they are provided for trial in isolation. E.g., back in the GNOME 1 to GNOME 2 transition days, or the XFree86 3 to XFree86 4 days, those subsystems were tested in experimental before moving to unstable. These packages are meant to be used on top of, or to replace, some unstable packages. Since each affects one particular subsystem, experienced testers can try just the one they need.
Re:Right... (Score:2, Informative)
And consistency? Like how the entire Debian repository is cross checked every day to ensure consistency?
I'm also intrigued by your reference to updates destroying servers. Do you get this behaviour with Red Hat? Makes me glad I'm not using it, then.
And managing an arbitrary number of nodes under Debian is easy with clusterssh.
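For what it's worth, cssh can take its node list from a clusters file; a minimal sketch (the tag and hostnames here are made up) might look like:

```
# ~/.clusterssh/clusters (or /etc/clusters)
# format: <tag> <host> <host> ...
webfarm node1.example.org node2.example.org node3.example.org

# then open one synchronized terminal per node with:
#   cssh webfarm
```

Anything you type is sent to every node at once, which is exactly what you want for lockstep package upgrades.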
Re:Debian? (Score:3, Informative)
I agree. I used to run some clusters for the UCLA Chemistry department, and the only real customization we did was installing a custom kernel on Red Hat 9 to handle the huge amount of memory we had installed. And even that wasn't in all the clusters. But yeah, the code the clusters were actually running was pretty much always compiled by hand.
Re:One thing always missing from such stories... (Score:3, Informative)
This Philippine newspaper story [manilatimes.net] fills in some important details missing from the Australian PC News article: the age of the SGI system (10 years) and the reason it was costing so much to run (it's expensive to get application support for IRIX [wikipedia.org], an OS that hasn't had a major update in that same 10-year period).
This last issue is what really killed the SGI system: not its age (these big installations are often around for decades), but the fact that only a few people are working on SGI platforms any more, and those that do can command premium prices. If the system had been from Sun, HP, or IBM, or any company with an OS still under active development, it might have been cost effective to keep it in place. This is particularly relevant on Slashdot, where we're always hearing from folks who just don't understand why there isn't better application support for their favorite platform.
I'm still curious as to what specific SGI system got junked. Best guess: a low-end Origin [tamu.edu].
Re:I don't understand the difference (Score:4, Informative)
This is, of course, a nightmare. Worse, the kernels on all of these have the notorious RH patchsets, so as far as anyone knows, each and every one has a mish-mash of backported "features" from newer kernels, but few of the bug fixes those newer kernels have. In fact, several are still on 2.4.x kernels that, even years later, suffer from a kswapd problem that makes the machines unusable after a while. And we're getting in newer hardware that the 2.6 kernels that ARE supplied don't support. Everyone here has given up trying to build a plain vanilla kernel from the latest sources, because there are so many RH-applied patches to their kernels, which may or may not even be applicable or available for the vanilla Linus kernels, that no one can say with any degree of certainty that the system will even boot. With Debian, I gave up on vanilla kernels because I was just tired of sitting through recompiles for the "advantage" of removing a few modules I knew I would never use, customized to each of a half-dozen machines.
With Debian, which I've run for years without a "reinstall", updates are significantly simpler to perform, and if you want to throw backports.org into your sources.list (which may or may not be a fair comparison), even *stable* has 2.6.22, or 2.6.18 without bpo in the mix. Remote updates? No problem; Debian was *designed* to be updatable like that from the start. The dpkg/apt* tools are still worlds ahead of the RH (and SUSE) equivalents. Dependencies are easier to deal with, as are conflicts, and security.d.o seems to be a lot more on the ball about patches than anyone else.
In fact, I'm often telling our security guys about updates that fix vulnerabilities that their higher-up counterparts haven't started riding them about yet, so they can get a head start on going through the major hassle of begging/cajoling/threatening the RH admins to grab the latest sources, rebuild, and re-install so we don't get slammed on the next security scan for having vulnerable servers. Not that it's ever "the latest", but always "the least amount of upgrade needed to avoid being flagged", which means that next month we go through this all again. With Debian, "aptitude update ; aptitude upgrade" (or just "aptitude"/whatever and then manually select packages, though in stable, it's only going to be fixes anyway most of the time), and the handful of systems I've gotten in under the radar are up-to-date security-wise with significantly less effort.
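As a concrete sketch of the backports workflow mentioned above (the mirror path and suite name are as I remember them from the etch era; treat them as illustrative):

```
# /etc/apt/sources.list addition for Debian etch + backports.org
deb http://www.backports.org/debian etch-backports main contrib non-free

# refresh, then pull a backported kernel explicitly by target release:
#   aptitude update
#   aptitude -t etch-backports install linux-image-2.6-686
```

The -t flag matters: backports are pinned low by default, so nothing gets pulled in unless you ask for it.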
Even the "you get what you pay for in terms of support" canard has been proven false. We had a couple of brand-new, freshly-patched RHEL5 systems that just would not stay up. The first thing RH Support had us do was run some custom tool of theirs before they would even attempt to help us. A tool that, I should add, is rather difficult to run on a machine that goes down within minutes of coming up. Finally we re-installed and didn't apply the RH-sanctioned updates. Machine... stays up. Same thing with some RH-specific clustering software: another RH-approved release resulted in... no clustering at all. For whatever reason, re-installing the prior RPM wasn't possible, but the
Re:One thing always missing from such stories... (Score:3, Informative)
> ... automatic as you describe. At least, the most important packages (like linux, gcc, glibc,
> dpkg, python, xorg, gnome, kde, ...) ... made only when the maintainers think they're ready
> to be included in the next stable Debian release and when they're sure that they don't
> break anything.
The process is automatic. There is even a script to tell you why a particular package in unstable is not yet in testing (see http://bjorn.haxx.se/debian/testing.pl?package=firefox [bjorn.haxx.se]). The following description is from Debian (http://www.debian.org/devel/testing):
> The "testing" distribution is an automatically generated distribution. It is generated
> from the "unstable" distribution by a set of scripts which attempt to move over packages
> which are reasonably likely to lack release-critical bugs. They do so in a way that
> ensures that dependencies of other packages in testing are always satisfiable.
Given the rule that a new upload cannot break dependencies before entering testing, it is natural that, without some "manual pushing", major updates to these heavily depended-on packages almost never happen automatically. The manual manipulation is done to let (force?) them into testing anyway, even though some other package may become uninstallable.
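For the curious, the migration rule described above can be sketched as a toy model in Python (purely illustrative; the real britney scripts also weigh upload urgency, per-architecture builds, release-team hints, and so on, and the 10-day threshold here is just the default aging period for a normal-urgency upload):

```python
# Toy model of Debian's unstable -> testing migration (illustrative only).

def can_migrate(pkg, testing, days_required=10):
    """A package migrates only if it has aged long enough in unstable,
    carries no release-critical (RC) bugs, and all of its dependencies
    are already satisfiable in testing."""
    if pkg["days_in_unstable"] < days_required:
        return False          # not aged enough yet
    if pkg["rc_bugs"] > 0:
        return False          # RC bugs block migration
    return all(dep in testing for dep in pkg["depends"])

testing = {"libc6", "gcc"}
firefox = {"days_in_unstable": 12, "rc_bugs": 0,
           "depends": ["libc6", "libgtk2.0-0"]}
print(can_migrate(firefox, testing))  # False: libgtk2.0-0 not yet in testing
```

This also shows why the "manual pushing" is needed: a big transition (say, a new GNOME) only clears the dependency check if all of its pieces are pushed in together.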