A New Approach To Linux Clusters
rkischuk writes: "InformationWeek has an article about a group of ex-Cray engineers working on a new architecture for clustering Linux systems. 'It's not easy trying to build scalable systems from commodity hardware designed for assembling desktop computers and small servers.' Per the article, 'As the number of CPUs in a Beowulf-style cluster (a group of PCs linked via Ethernet) increases and memory is distributed instead of shared, the efficiency of each processor drops as more are added,' but 'Unlimited's solution involves tailoring Linux running on each node in a cluster, rather than treating all the nodes as peers.'" Looks like Cray engineers think about clustering even when they're not at Cray.
Re:actually it shows why Cray always does so well. (Score:3, Interesting)
Just to summarise my basic point: the Top 500 is a benchmark, and not necessarily a good indication of who makes better computers than whom.
this brings up something.. (Score:3, Interesting)
From the last Cray notice (Score:2, Interesting)
Well, it looks like there are people working on the task. But that's not the real point; the right tool for the right job is the point. Workloads made of many processes that don't need another process to finish before the next one starts are where Beowulfs shine; if you want throughput on processes with dependencies, then a mainframe is your best bet.
But it's still nice to have an alternative for those of us who cannot afford a mainframe.
DanH
Request for help (Score:5, Interesting)
The user writes a small chunk of code that calculates the function they're trying to fit the data to. We require the user to code the function him/herself because speed is important, and some of these functions are too difficult for Mathematica or the like to fit. Once the user writes their function, it's linked (dynamically) with the rest of the code. The user then passes in a parameter file, and away it goes.
Many of these fits can take days, and, since they often have to be repeated many times with slight changes to the fitted function or initial parameters, this is a serious concern.
Can this new approach to Linux clusters be used here? We have tons of Linux boxes lying around that are being used for other things, but have lots and lots of spare cycles. We probably couldn't afford a dedicated processing farm, but we could easily live with something like distributed.net where the program transparently takes all the spare cycles.
I know the problem is parallelizable, since each node can calculate the value of the function at a few of the data points, then send back to the "master" the chi-squared contribution of those points. Each iteration of the fitting process, the master sends out the current parameter values, and then the nodes grind away... There's not too much communication required.
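The split described above (the master broadcasts the current parameter values, each node grinds through its share of the data points and sends back its chi-squared contribution) can be sketched in miniature. This is just a single-machine sketch using Python's multiprocessing pool as a stand-in for the cluster nodes; the names `model`, `chunk_chi2`, and `total_chi2` are hypothetical, and a real deployment across separate boxes would use a message-passing layer such as PVM or MPI rather than local processes.

```python
import math
from multiprocessing import Pool

# Hypothetical stand-in for the user-written fit function described
# above (here a simple exponential decay); in the real setup the user
# codes and dynamically links this themselves.
def model(x, params):
    a, tau = params
    return a * math.exp(-x / tau)

def chunk_chi2(args):
    # Worker ("slave") side: the chi-squared contribution of one chunk
    # of data points, given the current parameter values.
    points, params = args
    return sum((y - model(x, params)) ** 2 for x, y in points)

def total_chi2(data, params, nworkers=4):
    # Master side: split the data into chunks, farm them out, and sum
    # the per-chunk chi-squared contributions sent back.
    chunks = [data[i::nworkers] for i in range(nworkers)]
    with Pool(nworkers) as pool:
        parts = pool.map(chunk_chi2, [(c, params) for c in chunks])
    return sum(parts)

if __name__ == "__main__":
    true_params = (2.0, 5.0)
    data = [(x, model(x, true_params)) for x in range(100)]
    # At the true parameters the fit is exact, so chi-squared is zero.
    print(total_chi2(data, true_params))
```

Note that only the small parameter tuple crosses to the workers each iteration while the data stays put, which matches the observation that not much communication is required.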
One of my big concerns is how to get the user-written function from the "master" computer to all the "slaves". It's unrealistic to expect the user to manually install it on all the machines each time something in the function gets tweaked and it's recompiled. Are there pre-existing standards on how to send code to nodes in a cluster, then have it executed?
Any advice or pointers to good starting places on distributed computing would be much appreciated.
BTW, as a hint to all the other comp sci geeks out there: physics is a great place to find new and challenging computing problems (I'm not claiming this is one). In particular, the particle physics people often have to deal with spectacular data rates, and do extremely complicated event reconstruction. Check it out some time.
Plan 9 style architecture? (Score:1, Interesting)
Gnutella parallel... (Score:2, Interesting)
(I know--not the best analogy)
Re:this brings up something.. (Score:2, Interesting)
MOSIX works fucking awesome. I used it to compress MP3s. I ripped waves and stored them on my bad-ass machine. Then I ran an at daemon on my slowest machine and ran compression routines in the at queue in batch mode. By running on the slowest machine in the group, it guaranteed that the jobs would migrate to the faster machines in the group while the slowest machine remained as a "task manager". It would run instances until every machine was busy, then batch would hold jobs until a machine came free, then it would release and on with the show.
It works EXTREMELY well. Try MOSIX for some serious fun.
if I understand correctly... (Score:2, Interesting)
Huge Market for Supercomputers Will Come... (Score:3, Interesting)
The reason that high-speed computing has not taken off is that there are currently no consumer apps that require it. Only a few scientific, research, and governmental organizations have a need for it. However, suppose there is a breakthrough in AI technology: it will require googols of CPUs and memory. And when that happens, the market will explode.
People are going to want their mechanical maids, baby sitters, gardeners, chauffeurs, lawyers, companions, stock market experts, and what not. I predict they are going to crave their mechanical servants to the point of pathological obsession.
Don't be so sure this won't happen in your lifetime. In fact, there is every reason to suppose that it might happen any time. There are an awful lot of minds thinking about intelligence and an awful lot of money being spent on it right now. IMO, the solution to the intelligence problem is probably simple. As Dr. Rodney Brooks of MIT says, "Maybe this is wishful thinking, but maybe there really is something that we're missing." Any day now.
In conclusion, I would recommend that you don't sell your shares in the supercomputing sector just yet.
Re:Request for help (Score:1, Interesting)