

A New Approach To Linux Clusters
rkischuk writes: "InformationWeek has an article about a group of ex-Cray engineers working on a new architecture for clustering Linux systems. 'It's not easy trying to build scalable systems from commodity hardware designed for assembling desktop computers and small servers.' Per the article, 'As the number of CPUs in a Beowulf-style cluster (a group of PCs linked via Ethernet) increases and memory is distributed instead of shared, the efficiency of each processor drops as more are added,' but 'Unlimited's solution involves tailoring Linux running on each node in a cluster, rather than treating all the nodes as peers.'" Looks like Cray engineers think about clustering even when they're not at Cray.
this brings up something.. (Score:3, Interesting)
Re:this brings up something.. (Score:2, Informative)
Re:this brings up something.. (Score:1)
Beowulf clusters are not that hard to build. The only difference between building an 18-node cluster and a 4-node cluster is the number of computers. If you have the hardware, some free time, and the patience, then setting one up is not that hard. The hard part is actually doing something with one. Programming in MPI and PVM is not an easy task even with a degree in CS (unless you have no life). Check out Beowulf.org [beowulf.org] for more info.
Custom software (Score:2)
But as mentioned in a previous post, Mosix can do that for you, if and only if your program can use several instances at the same time. Compressing MP3s is a good example.
Re:this brings up something.. (Score:2, Informative)
The best solution for any distributed computing problem depends on that problem. How CPU-intensive is the job? How much data will need to be distributed to the nodes? Do intermediate processing steps require intermediate answers from other nodes? How fast is the CPU? How fast is the network?
Basically, if you have a lot of processing that could be manually distributed to a bunch of hosts via rsh or rlogin, then MOSIX can be used to easily manage/monitor that work with no coding. For harder problems that can't be manually distributed, you might need MPI or PVM with special code in order to do the equivalent of "threaded" distributed computing.
Manual distribution is often easiest when you have network-shared filesystems (NFS, etc.), and so is MOSIX.
MPI is short for Message Passing Interface. You can use MPI libraries to do interprocess/interhost messaging and I/O on non-homogeneous networks. MPI does not require shared filesystems, though your own project might use them; MPI is certainly easier to manage when you have shared filesystems. One must be careful to consider the I/O time involved in network reads/writes. Also note that multiple network nodes will clobber each other's data if they all try to write to the same file over the network.
MPI or PVM is often used when the problem of breaking the job into pieces, or putting the results back together, is non-trivial. For instance, if you were doing very processor-intensive image processing on a large file, you might need to break the image into pieces, or tiles, and then distribute the processing of the tiles, with some processors sharing intermediate results, then stitch the results back into one file and finally write that file.
The approach that Unlimited Scale is using only makes sense in limited cases, i.e., when computers are "getting bogged down in processing interrupt requests from peripherals." In general, multithreaded processing, or even distributed processing, only makes sense when I/O time is dwarfed by CPU time. SMP machines have an advantage in that they have really fast communication between nodes compared to Ethernet. Beowulf clusters have relatively slow communication between nodes, so they can only really be effective on the more CPU-bound and less I/O-intensive jobs. But if you have a job that can be run faster on a Linux cluster, you can save big bucks on your initial hardware purchase. The more CPU-intensive the processing is, the slower the network you can put up with. SETI is a good example of this: if it takes 12 hours to process a packet of data, then I/O over a 56K modem is OK.
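That CPU-vs-I/O tradeoff can be sketched with a back-of-the-envelope model. All the numbers below are invented for illustration; the point is only that a job benefits from more nodes when per-node compute time dominates the time spent shipping its data over a shared link.

```python
# Rough speedup model for a cluster job: each of N nodes gets an equal
# slice of the compute work, but the input data still has to cross a
# shared network link. All figures below are illustrative.

def cluster_speedup(total_cpu_seconds, total_data_mb, net_mb_per_s, nodes):
    """Estimated speedup vs. one node, ignoring all other overheads."""
    serial_time = total_cpu_seconds + total_data_mb / net_mb_per_s
    # Compute divides across nodes; the shared link does not.
    parallel_time = total_cpu_seconds / nodes + total_data_mb / net_mb_per_s
    return serial_time / parallel_time

# CPU-bound job (12 hours of crunching, 1 MB of data): even a 56K modem
# (~0.007 MB/s) barely matters.
cpu_bound = cluster_speedup(12 * 3600, 1, 0.007, 16)

# I/O-bound job (10 s of crunching, 1000 MB of data on 100 Mbit Ethernet,
# ~12.5 MB/s): the network swamps any gain from extra nodes.
io_bound = cluster_speedup(10, 1000, 12.5, 16)

print(f"CPU-bound: {cpu_bound:.1f}x speedup, I/O-bound: {io_bound:.2f}x")
```

With these made-up figures the CPU-bound job gets close to the ideal 16x, while the I/O-bound job barely improves at all.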
Re:this brings up something.. (Score:5, Informative)
Re:this brings up something.. (Score:2, Interesting)
MOSIX works fucking awesome. I used it to compress MP3s. I ripped WAVs and stored them on my bad-ass machine. Then I ran an at daemon on my slowest machine and ran compression jobs in the at queue in batch mode. By running on the slowest machine in the group, it guaranteed that the jobs would migrate to the faster machines in the group while the slowest machine remained as a "task manager". It would run instances until every machine was busy, then batch would hold jobs until a machine came free, then it would release one and on with the show.
It works EXTREMELY well. Try MOSIX for some serious fun.
LNXI has cool solution to cooling Beowulf clusters (Score:2)
From the last Cray notice (Score:2, Interesting)
Well, it looks like there are people working on the task. But that's not the real point; the right tool for the right job is the point. A whole lot of processes that do not require another process to finish before the next one starts is where Beowulfs shine; if you want throughput on processes with dependencies, then a mainframe is your best bet.
But it's still nice to have an alternative for those of us who cannot afford a mainframe.
DanH
Mainframe? (Score:1)
Isn't there a different term for a very fast computer? Maybe something like "supercomputer"?
Joe
Re:Mainframe? (Score:1)
Dan
In the words of Seymour Cray: (Score:5, Funny)
Two strong oxen or 1024 chickens?
Re:In the words of Seymour Cray: (Score:1, Funny)
That depends.... (Score:1)
because he upgraded to two oxen, and I didn't like to eat chicken, perhaps getting them to plow a field would be useful?
Re:In the words of Seymour Cray: (Score:2)
I don't know. What's the tradeoff? Lots of chicken droppings on your field? The soil can use some nutrients, I'm sure. Can the chickens do it? Have you tried, or is it a hunch of yours? Who eats more, the two oxen or the chickens? What about the sleeping place? Sure, those are a lot of chickens, but they don't have a problem if you sit them side by side, and they don't have a problem with sleeping in three or four rows. Manageability, yes, that's a problem... now we see a real tradeoff: being able to replace a chicken if it dies (and chickens are cheap), or one (possibly both) of your oxen, which are not cheap, versus the manageability of the two oxen. Sure, feeding the chickens is also a problem, as is collecting the eggs. Wow, byproducts! Suddenly I can do something with the chickens that I couldn't do with the oxen.
I like this analogy.
Re:In the words of Seymour Cray: (Score:1)
If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?
I've also heard this attributed to one of Pyramid's VPs, while drawing comparisons to Sequent systems. Anyone remember Pyramid?
Temkin
Re:In the words of Seymour Cray: (Score:1)
If your task is just plowing a field, and you just had your standard plows, perhaps oxen would be the best solution. Have you ever tried hitching a bunch of chickens to a bunch of mini-plows? Might be worth a shot. Depending on how big a field, you could easily hire someone else to plow it for you and pay them in a couple of dozen chickens. Or rent a tractor for the price of some chickens.
Overall, if my life didn't depend on getting that field plowed, I might choose the chickens just to see if I could come up with an innovative solution to the problem. It would be more fun than walking behind some oxen, breaking my back. Reminds me of the canonical story of the student given the task of measuring the height of a building with no tools other than a barometer. Seems like the more intellectually interesting solutions would come out of using the chickens.
To sum it up, tilling a field with oxen sounds boring. Trying to achieve the same result with 1000 chickens sounds like fun (for a day or two, anyway). Then, on to the next project. You want me to do what with a million monkeys?
Re:In the words of Seymour Cray: (Score:1, Funny)
Have them take another crack at writing the DMCA. That first one didn't come out so well.
Re:In the words of Seymour Cray: (Score:1)
Re:In the words of Seymour Cray: (Score:1)
Re:In the words of Seymour Cray: (Score:3, Funny)
Either one makes for a fine bar-b-que, once the plowin's done.
Re:In the words of Seymour Cray: (Score:1)
Two strong oxen or 1024 chickens?
Re:In the words of Seymour Cray: (Score:1)
1024 Oxen.
Re:In the words of Seymour Cray: (Score:2, Funny)
What next?
Kiwaiti
Re:In the words of Seymour Cray: (Score:1)
Please reboot.
Re:In the words of Seymour Cray: (Score:2, Insightful)
Re:In the words of Seymour Cray: (Score:1)
Well, if the chickens were $2 apiece, and the oxen were $25,000 per, and I only had $5,000 to my name, I'd have to say the chickens.
TERAS has part of its clusters running linux.... (Score:1)
The fun part: parts of this huge machine are running Linux
For more closeup pictures see: http://unfix.org/news/sara/ [unfix.org]
Ain't it sweeeeeeeeeeet?
Parallel Sysplex anyone? (Score:2)
Voila, you've reinvented Parallel Sysplex with VM for Linux running on 'cheap' hardware.
Except how cheap do they expect it to be?
Request for help (Score:5, Interesting)
The user writes a small chunk of code that calculates the function they're trying to fit the data to. We require the user to code the function him/herself because speed is important, and some of these functions are too difficult for Mathematica or the like to fit. Once the user writes their function, it's linked (dynamically) with the rest of the code. The user then passes in a parameter file, and away it goes.
Many of these fits can take days, and, since they often have to be repeated many times with slight changes to the fitted function or initial parameters, this is a serious concern.
Can this new approach to Linux clusters be used here? We have tons of Linux boxes lying around that are being used for other things, but have lots and lots of spare cycles. We probably couldn't afford a dedicated processing farm, but we could easily live with something like distributed.net where the program transparently takes all the spare cycles.
I know the problem is parallelizable, since each node can calculate the value of the function at a few of the data points, then send back to the "master" the chi-squared contribution of those points. Each iteration of the fitting process, the master sends out the current parameter values, and then the nodes grind away... There's not too much communication required.
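The split described above can be sketched with Python's multiprocessing standing in for a real cluster. The model function and data here are made up for illustration; on an actual cluster the workers would be remote nodes talking MPI or PVM rather than local processes, but the master/worker shape is the same.

```python
# Master/worker chi-squared evaluation, sketched with local processes.
# Each worker gets a slice of the data points plus the current parameter
# vector, computes its chi-squared contribution, and the master sums them.
from multiprocessing import Pool

# Hypothetical model y = a*x + b; a real fit would call the user's function.
# Each point deviates from the a=3, b=1 line by exactly one sigma (0.1).
DATA = [(x, 3.0 * x + 1.0 + 0.1 * (-1) ** x, 0.1) for x in range(1000)]

def chi2_chunk(args):
    """Chi-squared contribution of one chunk of data points."""
    (a, b), chunk = args
    return sum(((y - (a * x + b)) / sigma) ** 2 for x, y, sigma in chunk)

def chi2(params, data, workers=4):
    """One iteration: farm out chunks, sum the contributions."""
    step = (len(data) + workers - 1) // workers
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with Pool(workers) as pool:
        return sum(pool.map(chi2_chunk, [(params, c) for c in chunks]))

if __name__ == "__main__":
    print(f"chi2 at (3.0, 1.0): {chi2((3.0, 1.0), DATA):.1f}")
```

Only the parameter vector goes out and one number per chunk comes back each iteration, which matches the "not too much communication" observation above.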
One of my big concerns is how to get the user-written function from the "master" computer to all the "slaves". It's unrealistic to expect the user to manually install it on all the machines each time something in the function gets tweaked and it's recompiled. Are there pre-existing standards on how to send code to nodes in a cluster, then have it executed?
Any advice or pointers to good starting places on distributed computing would be much appreciated.
BTW, as a hint to all the other comp sci geeks out there--physics is a great place to find new and challenging computing problems (I'm not claiming this is one). In particular, the particle physics people often have to deal with spectacular data rates, and do extremely complicated event reconstruction. Check it out some time.
Re:Request for help (Score:2, Informative)
I would imagine you would use this as follows: first, you'd get from a server some data points with which to calculate. Then, you'd also get the name of a shared library which is on a server (NFS-mounted, probably). This library has a function named 'calc' or some such that does the calculation. You can then call that function and post the results somewhere.
I would avoid using MPI or PVM, since those are not designed for farming out data the way you are. You should probably use your own job-control protocol. Also, you might want to allow for multiple architectures, naming the library foo-0.0.1.i386.so and foo-0.0.1.alpha.so and so forth.
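A minimal sketch of that naming-and-loading scheme in Python. The library name, version, and 'calc' entry point are this poster's hypothetical convention, not a real package: pick the architecture-specific .so by machine type, load it over the NFS share, and call the user's compiled function.

```python
# Dispatch to an architecture-specific user library, per the naming
# convention suggested above (foo-0.0.1.i386.so, foo-0.0.1.alpha.so, ...).
import ctypes
import platform

def library_name(base, version):
    """Build the architecture-specific library filename, e.g.
    foo-0.0.1.x86_64.so on a modern x86 box."""
    arch = platform.machine() or "unknown"
    return f"{base}-{version}.{arch}.so"

def run_user_function(libpath, x):
    """Load the shared library and call its 'calc' entry point.
    Assumes the user compiled: double calc(double)."""
    lib = ctypes.CDLL(libpath)
    lib.calc.argtypes = [ctypes.c_double]
    lib.calc.restype = ctypes.c_double
    return lib.calc(x)

print(library_name("foo", "0.0.1"))
```

Each node would resolve `library_name(...)` against the NFS mount, so a recompile by the user is picked up everywhere without manual installs.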
Re:Request for help (Score:5, Informative)
The normal way to operate a cluster is to have a shared (NFS) filesystem across all the systems, thereby solving the data-distribution problem. (Please note, though, that this prevents you from doing too much file-based I/O, because it's too slow; you might want to make a local /scratch directory on each node.)
Besides the NFS share, you'll need some kind of parallel programming library like MPI or PVM, and a job scheduler of some sort. The libraries you can find on the web (maybe in precompiled RPMs; look for the mpich MPI implementation for a start), and they will provide you with a programming framework for doing all the networking and setting up the topology. The scheduler can be as simple as the one provided with MPI/PVM (i.e., you name a few hosts and your job gets run on those), or, if there are a number of people accessing the cluster at the same time, you might want to try a real queuer (like gridware [sun.com]).
The parallelization is something you'll have to do yourself, and it's the hardest part of clustering.
Hope this helps :-)
Re:Request for help (Score:1, Informative)
Re:Request for help (Score:2, Informative)
Use PVM or MPI. Both exist prepackaged for most major Linux distributions.
I'm trying to design a specialized data-fitting program to be used for accelerator-based condensed matter physics.
I think this may be a case where a bit more thinking and literature research about the problem would help a great deal. People solve extremely complex data fitting problems on modern PCs without the need for parallel processing, and there are very sophisticated algorithms for doing this. You should probably talk to local experts in statistics, computer science, and pattern recognition.
Re:Request for help (Score:2)
We're looking at using MINUIT [wwwinfo.cern.ch], a package written by the computing division at CERN [welcome.cern.ch], as our fitting engine. MINUIT's algorithms are quite advanced, and it's commonly recognized within the physics community as the best general-purpose fitting package out there.
I think you may not realize how complicated the functions we're trying to fit are. Here's the quick and simple version: we study magnetic fields within superconductors and semiconductors on a microscopic level. We do this by using spin-polarized muons or radioactive light ions as a probe, and measuring anisotropy of the emitted decay products. That data then has to be compared against complex models of superconductivity. The computationally expensive part here is calculating the values predicted by the model for a given set of parameters. This has to be done once for each data point to calculate chi-squared, and repeated many times (once in each iteration of the fitting process), each time with different parameters. The models typically contain difficult integrals which must be evaluated numerically thousands of times with very high precision.
Since the function we're trying to fit changes fairly often depending on the sample and measurement techniques used, it's not practical for us to spend huge amounts of time optimizing each individual function to be fitted. The fitting package is already optimized, so the only thing left is to parallelize it.
Re:Request for help (Score:1, Informative)
Set up NFS and then write a bash script that will execute whatever gets sent there.
When you wanted to run something, you would just copy it to the server's NFS directory, and all the nodes would process their respective data points automatically.
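The post above suggests a bash script; here is a sketch of the same drop-directory idea in Python (the directory layout and poll interval are arbitrary choices, not part of any standard): each node polls the share, runs any new job file, and moves it aside so it isn't rerun.

```python
# Poll a shared (NFS) drop directory and run each new job file once.
# In the bash version this would be a loop around ls + sh; the idea is
# identical: nodes watch the share, the master just copies files in.
import os
import subprocess
import time

def run_new_jobs(drop_dir, done_dir, poll_seconds=5, once=False):
    """Execute every job script in drop_dir, moving each to done_dir."""
    os.makedirs(done_dir, exist_ok=True)
    while True:
        for name in sorted(os.listdir(drop_dir)):
            path = os.path.join(drop_dir, name)
            if not os.path.isfile(path):
                continue
            subprocess.run(["sh", path], check=False)      # execute the job
            os.rename(path, os.path.join(done_dir, name))  # don't rerun it
        if once:
            return
        time.sleep(poll_seconds)
```

One caveat this sketch ignores: with several nodes watching the same share, you'd need some locking (e.g. rename-before-run) so two nodes don't grab the same job.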
Re:Request for help (Score:1, Interesting)
Some options (Score:2)
Use a shell script. (Score:1)
From the Beowulf FAQ [dnaco.net]:
3. Can I take my software and run it on a Beowulf and have it go faster?
[1999-05-13]
Maybe, if you put some work into it. You need to split it into parallel tasks that communicate using MPI or PVM or network sockets or SysV IPC. Then you need to recompile it.
Or, as Greg Lindahl points out, if you just want to run the same program a few thousand times with different input files, a shell script will suffice.
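In the spirit of that FAQ answer, a small Python sketch of the shell-script approach (the program name and file pattern are placeholders): run the same program once per input file, a bounded number at a time.

```python
# "Embarrassingly parallel" in the simplest sense: the same program,
# once per input file, a few at a time. No MPI or recompiling required.
import glob
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_all(program, pattern, max_parallel=4):
    """Run `program <input>` for every file matching `pattern`;
    return the exit codes in filename order."""
    files = sorted(glob.glob(pattern))
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        codes = pool.map(lambda f: subprocess.run([program, f]).returncode,
                         files)
    return list(codes)
```

This is the single-machine version; spreading it across a cluster is a matter of pointing each node at a different slice of the file list.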
Diminishing Returns..... (Score:1)
Weird....
Re:Diminishing Returns..... (Score:1)
Kind of. If you just start blindly adding nodes to a cluster, efficiency drops as the processes spend more and more time blocked, waiting for the relevant network traffic. You may never reach negative efficiency, but you will get no gains and will therefore just be burning electricity.
Speaking of burning electricity, I wonder if I should start back up implementation of my crazy idea of a PC/Mac/Sun/NeXT/SGI/Alpha/VAX/RS6000 cluster...
Re:Diminishing Returns..... (Score:1)
Sure. Just take one example. Imagine a problem that requires the cluster to munch on a large set of data. The initial setup of the problem requires that all the processors get their piece(s) of data. Maybe this is done via an RPC-style mechanism, or by reading from a common file share, or any number of other techniques. It doesn't matter; that initial setup takes up network resources.
At some point, the network (or the shared disk, or something) will become a bottleneck. When it takes more time to get the data through that bottleneck than it takes for the processing to actually complete, then you've reached a point of negative returns: adding nodes decreases performance.
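That bottleneck argument can be made concrete with a toy model (every constant here is invented): total job time is setup time, which grows with the node count because the shared link is serialized, plus compute time, which shrinks with it. Past some N, adding nodes makes the job slower.

```python
# Toy model of a shared-bottleneck cluster: distributing the data costs
# `setup_per_node` seconds per node over the serialized shared link,
# while the compute work divides evenly. All numbers are illustrative.

def job_time(nodes, work_seconds=3600.0, setup_per_node=10.0):
    """Total wall-clock time for the job on `nodes` nodes."""
    return setup_per_node * nodes + work_seconds / nodes

# Find the node count that minimizes total time for this toy job.
best = min(range(1, 101), key=job_time)
print(f"best node count: {best}, time there: {job_time(best):.0f} s")
print(f"at 100 nodes: {job_time(100):.0f} s (worse: negative returns)")
```

The curve bottoms out where setup and compute costs balance; beyond that point each extra node costs more in distribution time than it saves in computation.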
Re:Diminishing Returns..... (Score:1)
They discovered this when designing computer algorithms for parallel systems. So even when there's no operating system running on the CPUs, only the program (i.e., "Occam"-like), the overall overhead needed for the CPUs to cooperate, and for safe concurrency, simply becomes too big.
Unimaginable? Here's the deal:
* the number of CPU cycles spent on overhead (Po) per second
* the number of unproductive CPU cycles spent on synchronizing waits (Pw) per second (waits for other CPUs, waits for the communication bus, waits for memory access, etc.; concurrency cycles)
* the number of real CPU cycles (Rc) per second
* the number of application-productive CPU cycles (Pa) per second
Now put those into a formula, and suppose that c1 and c2 are constants (or slowly changing ones):
Total overhead cycles: Toc = (c1 * N * Po) + (c2 * N * Pw) (where N is the number of CPUs)
Total application-productive cycles per CPU, per second: Pa = Rc - Toc
The formulas probably aren't exactly like this, but I'm trying to give an indication. From Toc we now know that Pa decreases with every added CPU, linearly in this model (in real life probably not exactly linearly).
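Plugging made-up constants into that formula shows the effect; every number below is invented purely to illustrate the shape of the curve, not measured from any real machine.

```python
# Per-CPU productive cycles under the model above: Pa = Rc - Toc,
# with Toc = c1*N*Po + c2*N*Pw. All constants are invented.

RC = 1_000_000_000   # real cycles per CPU per second
PO = 2_000_000       # overhead cycles per second
PW = 3_000_000       # synchronization-wait cycles per second
C1, C2 = 1.0, 1.0

def productive_cycles(n_cpus):
    """Application-productive cycles per CPU per second (Pa)."""
    toc = C1 * n_cpus * PO + C2 * n_cpus * PW
    return RC - toc

for n in (1, 16, 64, 128):
    pa = productive_cycles(n)
    print(f"{n:4d} CPUs: {100 * pa / RC:5.1f}% of each CPU is productive")
```

With these constants each added CPU shaves a fixed slice off every CPU's useful work, and at 200 CPUs the model hits zero productive cycles: pure coordination, no computation.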
Re:My head hurts! (Score:1)
However, after reading the article, I suggest you go back and update all the previous "Imagine a Beowulf Cluster of ..." posts, as they're now rendered out of date.
Thanks.
Plan 9 style architecture? (Score:1, Interesting)
Re:Plan 9 style architecture? (Score:1)
The licensing looks liberal enough (it's labelled the "PLAN 9 OPEN SOURCE LICENSE AGREEMENT"), but how does it relate to the GPL?
-bch
Those Fools! (Score:4, Funny)
Steven
Really so new? (Score:1)
Root DOWN
grep what -i sed?
I wonder... (Score:1, Funny)
...
(clears throat)
...
"Imagine a beowulf cluster of these things!"
-J5K
Cray machines are all about parallel processing (Score:2)
Well, duh, Cray machines are massively parallel processing machines. They're not clusters in the sense that they don't use network cards and separate computers as basic computing units; <OVERSIMPLIFICATION>the processors talk to each other on the same bus and share the same memory</OVERSIMPLIFICATION>. But basically, in either case, it's about parallel processing. I *hope* Cray engineers think about clusters. I'd hate to see them think about single-Athlon supercomputers...
Re:Cray machines are all about parallel processing (Score:2)
The CRAY 1, CRAY 2, X-MP, etc. are all VECTOR machines. Some of them happen to be parallel vector machines (multiple VECTOR processors).
The T3 series are the MPP boxes. Cray's bread and butter was VECTOR machines, though. MPP came about because some problems aren't easily vectorizable (but can run on MPPs, oddly enough).
Gnutella parallel... (Score:2, Interesting)
(I know--not the best analogy)
The Real Problem (Score:2)
Mixed architectures. (Score:1)
A cheaper alternative (Score:4, Funny)
Fixed link (Score:2, Funny)
Furbeowulf [trygve.com]
Funny. :)
From this week's Byte (Score:2)
old ideas come back around, if they are good (Score:2, Informative)
Guess who the CDC6400 designer was? Seymour Cray.
Re:old ideas come back around, if they are good (Score:2)
When I heard a CS professor talk about putting multiple CPUs on a single chip, I suggested that one of the CPUs should be dedicated to the OS, which would mean that it wouldn't even need a FP unit. So for (say) 4 or 8 computers on a chip, one would be a "OS server" and the others would be "application servers". Ditching the FP on the "OS server" might allow an extra-high-performance design for it. And the others would only need context switches when the OS demanded it, rather than one for every stinking interrupt that came along.
Re:old ideas come back around, if they are good (Score:2)
There was only one PPU on the CDC 6600, equipped with a hardware multiprogramming unit to make it look like ten independent CPUs. The PPU had ten sets of machine state and I/O channels, but only one arithmetic-logic unit (ALU). This was back when the ALU was the expensive part of the CPU.
one of the CPUs should be dedicated to the OS
MacOS 8 on multiprocessors was the last major commercial incarnation of that idea. But that was an ugly hack to put multiprocessing on a uniprocessor OS.
There's a lot to be said for channelized I/O, like mainframes have. All the peripherals look roughly similar to the OS, security is better because peripherals can't write all over memory, and there's less diversity in drivers. Intel tried this, but ran into troubles because they made the channel controllers fully programmable and put a little OS under Windows to run them. Microsoft hates it when you put stuff under their OS.
Wouldn't HURD be more suitable for such task? (Score:1)
actually it shows why Cray always does so well.. (Score:3, Insightful)
And yet. (Score:1)
Re:actually it shows why Cray always does so well. (Score:1)
Unfortunately, it contains absolutely no info on what they are up to.
groups.google has a tiny bit more [google.com].
And a bit on their funding [localbusiness.com].
Anybody got any more info?
Re:actually it shows why Cray always does so well. (Score:1)
Re:actually it shows why Cray always does so well. (Score:3, Informative)
Not a single one of these is made in the U.S.
Cray today is a name. It's a brand. It's not a manufacturer of high performance computers.
Just for the record: IBM produced the two fastest computers currently, Intel the third, IBM the fourth, Hitachi the fifth, then SGI, IBM, NEC, IBM, IBM, and finally, number *11* is a Cray based on the Alpha processor (the T3E).
So, tell me again, who was playing catch-up with whom?
Re:actually it shows why Cray always does so well. (Score:3, Interesting)
Just to summarise my basic point: the Top 500 is a benchmark, and is not necessarily a good indication of who makes better computers than whom.
Re:actually it shows why Cray always does so well. (Score:2)
That's not a completely unfair benchmark, and of course you're right that it's a benchmark and therefore doesn't cover every possible problem out there. However, it is based on the common linear-algebra routines that are at the core of a very large part of the scientific computing problems being run out there.
Re:actually it shows why Cray always does so well. (Score:2)
Re:actually it shows why Cray always does so well. (Score:1)
Huge Market for Supercomputers Will Come... (Score:3, Interesting)
The reason that high-speed computing has not taken off is that there are currently no consumer apps that require it. Only a few scientific, research, and governmental organizations have a need for it. However, let's say there is a breakthrough in AI technology; it will require googols of CPUs and memory. And when that happens, the market will explode.
People are going to want their mechanical maids, babysitters, gardeners, chauffeurs, lawyers, companions, stock-market experts, and what not. I predict they are going to crave their mechanical servants to the point of pathological obsession.
Don't be so sure this won't happen in your lifetime. In fact, there is every reason to suppose that it might happen any time. There are an awful lot of minds thinking about intelligence and an awful lot of money being spent on it right now. IMO, the solution to the intelligence problem is probably simple. As Dr. Rodney Brooks of MIT says, "Maybe this is wishful thinking, but maybe there really is something that we're missing." Any day now.
In conclusion, I would recommend that you don't sell your shares in the supercomputing sector just yet.
Re:Huge Market for Supercomputers Will Come... (Score:1)
You've obviously never tried to open a spreadsheet in StarOffice.
About as innovative as a MS product... (Score:1)
Creating job classes in a homogeneous cluster is just as useful. I seem to remember that someone working on The Collective project [uidaho.edu] at the University of Idaho was doing this with a genetic application. That cluster is pretty close to being homogeneous.
If you visit the site, the Borg penguins are my handiwork.
what kind of clustering interconnects (Score:2)
if I understand correctly... (Score:2, Interesting)
Really, really cool clustering (Score:2)
They also have a good comparison of clustering technology features on this slide. [bjbrew.org] For now, you need a shared SCSI disk that can run GFS or something similar, but it may be possible to hook in PVFS eventually for low-end stuff.
Basically, SSIC is like MOSIX, but with killer high availability features. If a node goes down (from hardware, OS, or application failure), its workload is seamlessly migrated to another, functioning node. On MOSIX, unfortunately, each process has a "home node." If the home node goes down, the process is dead. SSIC also does load balancing by process migration, and all of that good, high scalability stuff.
Anyway, just give it a look, and check out their slideshow...
--JRZ
Imagine a ... (Score:1)
SGI's linux ccNUMA ... (Score:2)
Linux (Score:1)
Re:Linux (Score:1)
heh. wish I would have thought of that...
~Shane www.shanekinney.net [shanekinney.net]
Long Live