


A New Approach To Linux Clusters

rkischuk writes: "InformationWeek has an article about a group of ex-Cray engineers working on a new architecture for clustering Linux systems. 'It's not easy trying to build scalable systems from commodity hardware designed for assembling desktop computers and small servers.' Per the article, 'As the number of CPUs in a Beowulf-style cluster (a group of PCs linked via Ethernet) increases and memory is distributed instead of shared, the efficiency of each processor drops as more are added,' but 'Unlimited's solution involves tailoring Linux running on each node in a cluster, rather than treating all the nodes as peers.'" Looks like Cray engineers think about clustering even when they're not at Cray.
This discussion has been archived. No new comments can be posted.

  • by xtermz ( 234073 ) on Monday August 13, 2001 @12:28PM (#2112329) Homepage Journal
    I have been thinking about this for quite a while now. Beowulf clusters are all nice and well, but what about Joe Sixpack with some IT background, like me, who wants to get some sort of cluster going? What methods are available (be it a simplified Beowulf cluster or whatever...) for the guy with 3 or 4 old machines who wants to waste some electricity and try his hand at clustering some machines? Is it possible to do it without being a CS major, or is it just a matter of having enough time/resources...
    • Yes. Building a cluster is fairly easy if you know a little Linux and some programming. Check out http://www.beowulf.org and some of the other sites for information. I am getting ready to build a 20-node cluster with Pentium III 1 GHz processors; I am playing the waiting game with the university and the shipping company right now. If you don't mind writing your own distributed applications (using CORBA or some other libraries), then you can set up the cluster as four individual machines. Well, actually, if you have 4, then one will be the server node and 3 will be the slave nodes. The server node will need 2 Ethernet cards; it will be the only node that connects to the outside world. The other nodes can then be connected via NFS. One piece of advice: before you build a cluster, first decide what you are building it for. Not all software can scale to parallel computing. You must first define your problem, then build your cluster; many clusters are built and run tailored to the problem they are solving. For instance, the cluster I am building will be more like a network of workstations than a Beowulf, but for my particular problem it will work the same way. There are a lot of sites out there about Beowulf clusters. You need some Linux experience and some hacker ethic, but it is doable by anybody for sure. Have fun!
    • Beowulf clusters are not that hard to build. The only difference between building an 18-node cluster and a 4-node cluster is the number of computers. If you have the hardware, some free time, and the patience, then setting one up is not that hard. The hard part is actually doing something with one. Programming in MPI and PVM is not an easy task even with a degree in CS (unless you have no life). Check out Beowulf.org [beowulf.org] for more info.

    • What is important to realize is that in order to use these boxes as a cluster, you will have to write your own custom software. Yep, it means C and C++, and hours of hacking.

      But as mentioned in a previous post, Mosix can do that for you, if and only if your program can use several instances at the same time. Compressing MP3s is a good example.
    • This link [mosix.org] describes how MOSIX can be best applied.

      The best solution for any distributed computing problem depends on that problem. How CPU intensive is the job? How much data will need to be distributed to nodes. Do intermediate processing steps require intermediate answers from other nodes? How fast is the CPU? How fast is the network?

      Basically, if you have a lot of processing that could be manually distributing to a bunch of hosts, via rsh, or rlogin, then MOSIX can be used to easily manage/monitor that work with no coding. For harder problems that couldn't be manually distributed you might need MPI or PVM with special code in order to do the equivalent of "threaded" distributed computing.

      Manual distribution is often easiest when you have network-shared filesystems (NFS, etc.), and so is MOSIX.

      MPI is short for Message Passing Interface. You can use MPI libraries to do interprocess/interhost messaging and I/O on non-homogeneous networks. MPI does not require shared filesystems, though your own project might use them; MPI is certainly easier to manage when you have shared filesystems. One must be careful to consider the I/O time involved in network reads/writes. Also note that multiple network nodes will clobber each other's data if they all try to write to the same file over the network.

      MPI or PVM is often used when the problem of breaking the job into pieces, or putting the results back together, is non-trivial. For instance, if you were doing very processor-intensive image processing on a large file, you might need to break the image into pieces, or tiles, then distribute the processing of the tiles, with some processors sharing intermediate results, then stitch the results back into one file and finally write that file.
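      The tile-splitting pattern described above can be sketched on a single machine, with a process pool standing in for MPI ranks (the "image", tile size, and function names here are all invented for illustration):

```python
# A minimal sketch of tile-based parallel processing: split the
# "image" into tiles, process each tile in a worker process, then
# stitch the results back together in the original order.

from concurrent.futures import ProcessPoolExecutor

def process_tile(tile):
    # Stand-in for the expensive per-tile image processing.
    return [px * 2 for px in tile]

def process_image(pixels, tile_size=4, workers=4):
    tiles = [pixels[i:i + tile_size]
             for i in range(0, len(pixels), tile_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        processed = list(pool.map(process_tile, tiles))  # preserves tile order
    # "Stitch" the processed tiles back into one image.
    return [px for tile in processed for px in tile]

if __name__ == "__main__":
    print(process_image(list(range(16))))
```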

      The approach that Unlimited Scale is using only makes sense in limited cases, i.e., when computers are "getting bogged down in processing interrupt requests from peripherals." In general, multithreaded processing, or even distributed processing, only makes sense when I/O time is dwarfed by CPU time. SMP machines have an advantage in that they have really fast communication between nodes, compared to Ethernet; Beowulf clusters have relatively slow communication between nodes, so they can only really be effective on the more CPU-bound, less I/O-intensive jobs. But if you have a job that can be run faster on a Linux cluster, you can save big bucks on your initial hardware purchase. The more CPU-intensive the processing is, the slower the network you can put up with. SETI is a good example of this: if it takes 12 hours to process a packet of data, then I/O over a 56K modem is OK.
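      A toy model of that compute-vs-I/O tradeoff (the numbers are invented, and the model assumes data transfers to the nodes are serialized through one link):

```python
# Distributing work only pays off while per-node compute time
# dominates the time spent shipping data over the network.

def parallel_time(work_s, n_nodes, xfer_s_per_node):
    """Wall-clock time to run `work_s` seconds of CPU work on
    `n_nodes` machines, paying `xfer_s_per_node` seconds of
    (serialized) network transfer to feed each node."""
    return n_nodes * xfer_s_per_node + work_s / n_nodes

def speedup(work_s, n_nodes, xfer_s_per_node):
    return work_s / parallel_time(work_s, n_nodes, xfer_s_per_node)

# SETI-style job: 12 hours of CPU work per packet, ~5 minutes to
# send a packet over a slow link -> healthy speedup on 8 nodes.
print(speedup(12 * 3600, 8, 300))

# I/O-heavy job: 60 s of work but 30 s of transfer per node ->
# 8 nodes are slower than 1 (speedup below 1).
print(speedup(60, 8, 30))
```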

    • by baptiste ( 256004 ) <.mike. .at. .baptiste.us.> on Monday August 13, 2001 @12:33PM (#2144648) Homepage Journal
      Try a MOSIX Cluster [mosix.org] This type of cluster spreads processes out to the machine with the least load. A Beowulf can be done, but to take advantage of it, you have to run custom software that is capable of parallel processing.
      • Normally I just troll, but this is important.

        MOSIX works fucking awesome. I used it to compress MP3s. I ripped WAVs and stored them on my bad-ass machine. Then I ran an at daemon on my slowest machine and ran compression jobs through the at queue in batch mode. Running from the slowest machine in the group guaranteed that the jobs would migrate to the faster machines in the group, and the slowest machine remained as a "task manager". It would spawn instances until every machine was busy, then batch would hold jobs until a machine came free, then it would release one, and on with the show.

        It works EXTREMELY well. Try MOSIX for some serious fun.

  • These folks [linuxnetworx.com] are using standard rack mounts, fitting 5 standard ATX motherboards in 8U of rack space (no special motherboard needed). They mount them vertically, which makes cooling much more efficient when packing large numbers of CPUs into a small space.

  • There were a few "Because Linux does not scale well with multiple servers" posts about why someone would use a mainframe as opposed to a Beowulf.

    Well, it looks like there are people working on the task. But that's not the real point; the right tool for the right job is the point. A whole lot of processes that do not require another process to finish before the next one starts is where Beowulfs shine; if you want throughput on processes with dependencies, then a mainframe is your best bet.

    But it's still nice to have an alternative for those of us who cannot afford a mainframe.

    • Is mainframe really the right term? I thought that mainframe usually referred to an IT machine like an IBM S/390. One of my clients has a "mainframe" that is only 11 400-MHz processors; the Dell 8-way MS SQL server has more processing power (and we have verified this with benchmarks and by working with IBM).

      Isn't there a different term for a very fast computer? Maybe something like "supercomputer"?

  • by LocalYokel ( 85558 ) on Monday August 13, 2001 @12:33PM (#2115939) Homepage Journal
    If you were plowing a field, which would you rather use?
    Two strong oxen or 1024 chickens?
    • by Anonymous Coward
      I'd like to see the chickens have a go at it.
    • If my neighbor just gave me 1024 free chickens because he upgraded to two oxen, and I didn't like to eat chicken, perhaps getting them to plow a field would be useful?
    • Two strong oxen or 1024 chickens?

      I don't know. What's the tradeoff? Lots of chicken droppings on your field? The soil can use some nutrients, I'm sure. Can the chickens do it? Have you tried, or is it a hunch of yours? Who eats more, the two oxen or the chickens? What about the sleeping place? Sure, those are a lot of chickens, but they don't have a problem if you sit them side by side, and they don't have a problem with sleeping in three or four rows. Manageability, yes, that's a problem... now we see a real tradeoff: being able to replace a chicken if it dies (and chickens are cheap), versus one (possibly both) of your oxen, which are not cheap, weighed against the manageability of the two oxen. Sure, feeding the chickens is also a problem, as is collecting the eggs... wow, byproducts! Suddenly I can do something with the chickens that I couldn't do with the oxen.

      I like this analogy.

    • If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?

      I've also heard this attributed to one of Pyramid's VPs, while drawing comparisons to Sequent systems. Anyone remember Pyramid?


    • I once saw some celebrity charity thing where they had a bunch of kids (seemed like they were about 7 years old) compete in a tug-of-war with Lou Ferrigno (The Hulk on the old TV shows.) They had Lou just stand still, holding the rope, while they kept adding kids. Guess how many kids it took? Only about 10. I was quite surprised. The obvious strength advantage of an ox over chickens might not be as big as you think.

      If your task is just plowing a field, and you just had your standard plows, perhaps oxen would be the best solution. Have you ever tried hitching a bunch of chickens to a bunch of mini-plows? Might be worth a shot. Depending on how big a field, you could easily hire someone else to plow it for you and pay them in a couple of dozen chickens. Or rent a tractor for the price of some chickens.

      Overall, if my life didn't depend on getting that field plowed, I might choose the chickens just to see if I could come up with an innovative solution to the problem. It would be more fun than walking behind some oxen, breaking my back. Reminds me of the canonical story of the student given the task of measuring the height of a building with no tools other than a barometer. Seems like the more intellectually interesting solutions would come out of using the chickens.

      To sum it up, tilling a field with oxen sounds boring. Trying to achieve the same result with 1000 chickens sounds like fun (for a day or two, anyway). Then, on to the next project. You want me to do what with a million monkeys?
      • by Anonymous Coward
        "You want me to do what with a million monkeys?"

        Have them take another crack at writing the DMCA. That first one didn't come out so well.

    • A permaculturalist [permaculture.org] would choose the chickens.
    • Hmm, good job field ploughing is not high on my list of computing tasks...
    • > If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?

      Either one makes for a fine bar-b-que, once the plowin's done.
    • If you were planting seeds in a field, which would you rather use?
      Two strong oxen or 1024 chickens?
    • Neither.
      1024 Oxen.

    • Somehow I can't see 1024 chickens all agreeing to go in the same direction at the same time.

    • If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?

      Well, if the chickens were $2 a piece, and the oxen were $25,000 per, and I only had $5,000 to my name, I'd have to say the chickens.
  • Check out SARA [www.sara.nl]: 'TERAS' is a 1024-CPU system consisting of two 512-CPU SGI Origin 3800 systems. This machine has a peak performance of 1 TFLOPS (10^12 floating point operations per second). The machine will be fitted with 500 MHz R14000 CPUs organized in 256 4-CPU nodes and will possess 1 TByte of memory in total. 10 TByte of on-line storage and 100 TByte near-line StorageTek storage will be available. 'TERAS' will consist of 44 racks: 32 racks containing CPUs and routers, 8 I/O racks, and 4 racks containing disks.

    The fun part: parts of this huge machine are running Linux :)

    For more closeup pictures see: http://unfix.org/news/sara/ [unfix.org]

    Ain't it sweeeeeeeeeeet?
  • Carve up a big mainframe with 24 or so processors and bind the OS to a few of them. Get another mainframe and use it as a mezzanine backplane to gang together other mainframes each with their OS images bound to specific processors. Run all networking and IO through their own processors to offload the CECs. Add some more ASICs to handle crypto. Toss all console activity off to another 'special' processor. Run a hypervisor over the whole shebang to control all of the guest images.

    Voila you've reinvented parallel sysplex with VM for Linux running on 'cheap' hardware.

    Except how cheap do they expect it to be?
  • Request for help (Score:5, Interesting)

    by tbo ( 35008 ) on Monday August 13, 2001 @12:37PM (#2117017) Journal
    I'm trying to design a specialized data-fitting program to be used for accelerator-based condensed matter physics (and maybe ultimately other branches of science as well). I need information on adding clustering support to this program. Here's a brief description of what the program does:

    The user writes a small chunk of code that calculates the function they're trying to fit the data to. We require the user to code the function him/herself because speed is important, and some of these functions are too difficult for Mathematica or the like to fit. Once the user writes their function, it's linked (dynamically) with the rest of the code. The user then passes in a parameter file, and away it goes.

    Many of these fits can take days, and, since they often have to be repeated many times with slight changes to the fitted function or initial parameters, this is a serious concern.

    Can this new approach to Linux clusters be used here? We have tons of Linux boxes lying around that are being used for other things, but have lots and lots of spare cycles. We probably couldn't afford a dedicated processing farm, but we could easily live with something like distributed.net where the program transparently takes all the spare cycles.

    I know the problem is parallelizable, since each node can calculate the value of the function at a few of the data points, then send back to the "master" the chi-squared contribution of those points. Each iteration of the fitting process, the master sends out the current parameter values, and then the nodes grind away... There's not too much communication required.
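    A single-machine sketch of that master/worker split (using Python's multiprocessing in place of real cluster nodes; the linear model and the data here are made up for illustration): the master broadcasts the current parameters, each worker computes the chi-squared contribution of its slice of data points, and the master sums the partial results.

```python
# Master/worker chi-squared evaluation, with a process pool
# standing in for cluster nodes.

from multiprocessing import Pool

def model(x, params):
    a, b = params              # toy model: y = a*x + b
    return a * x + b

def chi2_chunk(args):
    """Chi-squared contribution of one slice of data points."""
    points, params = args
    return sum(((y - model(x, params)) / sigma) ** 2
               for x, y, sigma in points)

def chi2(data, params, n_workers=4):
    # "Master" splits the data once; each iteration only ships the
    # current parameter values to the workers.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(chi2_chunk, [(c, params) for c in chunks])
    return sum(partials)

if __name__ == "__main__":
    # Fake data generated from y = 2x + 1 with sigma = 1.
    data = [(x, 2 * x + 1, 1.0) for x in range(100)]
    print(chi2(data, (2.0, 1.0)))   # perfect fit -> 0.0
    print(chi2(data, (2.1, 1.0)))   # worse fit -> larger chi-squared
```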

    One of my big concerns is how to get the user-written function from the "master" computer to all the "slaves". It's unrealistic to expect the user to manually install it on all the machines each time something in the function gets tweaked and it's recompiled. Are there pre-existing standards on how to send code to nodes in a cluster, then have it executed?

    Any advice or pointers to good starting places on distributed computing would be much appreciated.

    BTW, as a hint to all the other comp sci geeks out there--physics is a great place to find new and challenging computing problems (I'm not claiming this is one). In particular, the particle physics people often have to deal with spectacular data rates, and do extremely complicated event reconstruction. Check it out some time.
    • Re:Request for help (Score:2, Informative)

      by laertes ( 4218 )
      The most obvious solution is to use some sort of byte code, but you said that speed is an issue. If you're using Linux, you might want to look into dlopen(), a library call that lets you load dynamically linked shared libraries at run time.

      I would imagine you would use this as follows: first, you'd get some data points to calculate from a server. Then, you'd also get the name of a shared library which is on a server (NFS-mounted, probably). This library has a function named 'calc' or some such that does the calculation. You can then call that function and post the results somewhere.

      I would avoid using MPI or PVM, since those are not designed for farming out data the way you are. You should probably use your own job-control protocol. Also, you might want to allow for multiple architectures, naming the libraries foo-0.0.1.i386.so, foo-0.0.1.alpha.so, and so forth.
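      The same load-the-function-by-path idea can be sketched in Python with importlib (the file path and function name below are invented; a C worker would dlopen() a versioned .so and dlsym() the symbol instead):

```python
# Load the user's fit function from a file path at run time, so
# redeploying it is just copying one file to a shared (e.g.
# NFS-mounted) directory that every node can see.

import importlib.util

def load_user_function(path, func_name="calc"):
    """Load `func_name` from the Python source file at `path`,
    roughly what dlopen()/dlsym() would do for a compiled .so."""
    spec = importlib.util.spec_from_file_location("user_model", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, func_name)

if __name__ == "__main__":
    # Simulate the user-written function pushed out by the master.
    with open("/tmp/user_model.py", "w") as f:
        f.write("def calc(x):\n    return x * x\n")
    calc = load_user_function("/tmp/user_model.py")
    print(calc(7))  # -> 49
```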

    • Re:Request for help (Score:5, Informative)

      by san ( 6716 ) on Monday August 13, 2001 @12:48PM (#2128822)

      The normal way to operate a cluster is to have a shared (NFS) filesystem across all the systems, thereby solving the data distribution problem. (Please note, though, that this prevents you from doing too much file-based I/O, because it's too slow; you might want to make a local /scratch directory on each node.)

      Besides the NFS share, you'll need some kind of parallel programming library like MPI or PVM, and a job scheduler of some sort. The libraries you can find on the web (maybe in precompiled RPMs; look for the mpich MPI implementation for a start), and they will provide you with a programming framework that handles the networking and sets up the topology. The scheduler can be as simple as the one provided with MPI/PVM (i.e., you name a few hosts and your job gets run on those), or, if there are a number of people accessing the cluster at the same time, you might want to try a real queueing system (like gridware [sun.com]).

      The parallelization is something you'll have to do yourself, and it's the hardest part of clustering.

      Hope this helps :-)

    • Re:Request for help (Score:1, Informative)

      by Anonymous Coward
      Check out http://www.cs.wisc.edu/condor/mw/
    • Re:Request for help (Score:2, Informative)

      by mj6798 ( 514047 )
      Can this new approach to Linux clusters be used here?

      Use PVM or MPI. Both exist prepackaged for most major Linux distributions.

      I'm trying to design a specialized data-fitting program to be used for accelerator-based condensed matter physics.

      I think this may be a case where a bit more thinking and literature research about the problem would help a great deal. People solve extremely complex data fitting problems on modern PCs without the need for parallel processing, and there are very sophisticated algorithms for doing this. You should probably talk to local experts in statistics, computer science, and pattern recognition.

      • I think this may be a case where a bit more thinking and literature research about the problem would help a great deal.

        We're looking at using MINUIT [wwwinfo.cern.ch], a package written by the computing division at CERN [welcome.cern.ch], as our fitting engine. MINUIT's algorithms are quite advanced, and it's commonly recognized within the physics community as the best general-purpose fitting package out there.

        I think you may not realize how complicated the functions we're trying to fit are. Here's the quick and simple version: we study magnetic fields within superconductors and semiconductors on a microscopic level. We do this by using spin-polarized muons or radioactive light ions as a probe, and measuring anisotropy of the emitted decay products. That data then has to be compared against complex models of superconductivity. The computationally expensive part here is calculating the values predicted by the model for a given set of parameters. This has to be done once for each data point to calculate chi-squared, and repeated many times (once in each iteration of the fitting process), each time with different parameters. The models typically contain difficult integrals which must be evaluated numerically thousands of times with very high precision.

        Since the function we're trying to fit changes fairly often depending on the sample and measurement techniques used, it's not practical for us to spend huge amounts of time optimizing each individual function to be fitted. The fitting package is already optimized, so the only thing left is to parallelize it.
    • Re:Request for help (Score:1, Informative)

      by Anonymous Coward
      Divide the data points among the nodes on your network: associate each data-point range with the IP of an available machine, and set it up so that when the program runs, it checks the IP of the computer running it and processes the appropriate data points.

      Set up nfs and then write a bash script that will execute whatever gets sent there.

      When you wanted to run something, you would just copy it to the server nfs directory and all the nodes would process their respective datapoints, automatically.
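      That static partitioning might look something like this sketch (the node IP addresses are placeholders):

```python
# Each node derives its rank from a fixed list of node addresses
# and claims every n-th data point.

NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]

def my_points(data_points, my_ip):
    """Return the slice of data points this node is responsible for.
    A real node would discover `my_ip` itself, e.g. with
    socket.gethostbyname(socket.gethostname())."""
    rank = NODES.index(my_ip)
    return data_points[rank::len(NODES)]

if __name__ == "__main__":
    points = list(range(10))
    for ip in NODES:
        print(ip, my_points(points, ip))
```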

    • Re:Request for help (Score:1, Interesting)

      by Anonymous Coward
      This problem is trivially easy if you use any of the well-developed architectures for distributed parallel computing. MPI is the most popular; PVM was, but is going out of style. MPI is a cluster-based interprocess communication system plus remote program invocation. Look at the websites of the major open-source implementations of the MPI standard, MPICH (from Argonne National Laboratory) or LAM/MPI (University of Notre Dame), for more information. There are tons of tutorials and online training courses. -Colin
    • This article [byte.com] covers three distributed-OS options, with some introductory explanation of the difficulties. I would think the easiest (not necessarily the best) solution could be to use Mosix (listed last in the article) and thread your application to a logical extent. Mosix won't interfere with your current Linux boxes; just add it on. The tasks will automatically be load-balanced among the machines.
    • Many of these fits can take days, and, since they often have to be repeated many times with slight changes to the fitted function or initial parameters, this is a serious concern.

      From the Beowulf FAQ [dnaco.net]:

      3. Can I take my software and run it on a Beowulf and have it go faster?

      Maybe, if you put some work into it. You need to split it into
      parallel tasks that communicate using MPI or PVM or network sockets or
      SysV IPC. Then you need to recompile it.

      Or, as Greg Lindahl points out, if you just want to run the same
      program a few thousand times with different input files, a shell script
      will suffice.

  • 'As the number of CPUs in a Beowulf-style cluster (a group of PCs linked via Ethernet) increases and memory is distributed instead of shared, the efficiency of each processor drops as more are added,'
    Does that mean you could reach a point where adding another node actually slows everything down?

    • Does that mean you could reach a point where adding another node actually slows everything down?

      Kind of. If you just start blindly adding nodes to a cluster, efficiency drops as the processes spend more and more time blocked, waiting for the relevant network traffic. You may never reach negative efficiency, but you will get no gains, and so you're just burning electricity.

      Speaking of burning electricity, I wonder if I should start back up implementation of my crazy idea of a PC/Mac/Sun/NeXT/SGI/Alpha/VAX/RS6000 cluster...

    • Does that mean you could reach a point where adding another node actually slows everything down?

      Sure. Just take one example. Imagine a problem that requires the cluster to munch on a large set of data. The initial setup of the problem requires that all the processors get their piece(s) of data. Maybe this is done via an RPC-style mechanism, or by reading from a common file share, or any number of other techniques; it doesn't matter. That initial setup takes up network resources.

      At some point, the network (or the shared disk, or something) will become a bottleneck. When it takes more time to get the data through that bottleneck than it takes for the processing to actually complete, then you've reached a point of negative returns: adding nodes decreases performance.
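      A toy model of that argument (the numbers are invented): total time is a serialized distribution cost that grows with the node count, plus compute time that shrinks with it, so past some node count the job actually gets slower.

```python
# Total wall-clock time = data distribution (grows with N) +
# computation (shrinks with N). Past the sweet spot, the
# bottleneck term wins and adding nodes decreases performance.

def total_time(n_nodes, work_s=3600.0, dist_s_per_node=10.0):
    return n_nodes * dist_s_per_node + work_s / n_nodes

best = min(range(1, 101), key=total_time)
print(best)                                 # sweet spot, about sqrt(3600/10)
print(total_time(best))                     # fastest achievable time
print(total_time(100) > total_time(best))   # more nodes, slower: True
```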

    • I'm not sure, but I've heard that this happens at around 128 CPUs. At that point the overhead is so big that adding another CPU would only slow the whole system down.

      They discovered this when designing computer algorithms for parallel systems. So even when there's no operating system running on the CPUs, only the program (i.e., "Occam"-like), the overall overhead needed for the CPUs to cooperate and keep concurrency safe simply becomes too big.

      Unimaginable? Here's the deal:

      * Po: CPU cycles spent on overhead, per second
      * Pw: unproductive CPU cycles spent in synchronizing waits (waiting for other CPUs, for the communication bus, for memory access, etc.; concurrency cycles), per second
      * Rc: real CPU cycles per second
      * Pa: application-productive CPU cycles per second

      Now put those into a formula, and suppose c1 and c2 are constants (or slowly changing ones):

      Total overhead cycles: Toc = (c1*N*Po) + (c2*N*Pw), where N is the number of CPUs.

      Application-productive cycles per CPU, per second: Pa = Rc - Toc

      The formulas probably aren't exactly like this, but I'm trying to give an indication: from Toc we see that Pa drops (more than linearly here; probably not true in real life) with every CPU that's added.
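      Plugging invented numbers into those formulas shows the effect: with overhead growing linearly in N, the per-CPU productive cycles hit zero at a finite CPU count.

```python
# All values here are invented to illustrate the formulas, not
# measured from any real system.

def productive_cycles(n_cpus, rc=1e9, po=2e6, pw=3e6, c1=1.0, c2=1.0):
    toc = c1 * n_cpus * po + c2 * n_cpus * pw   # total overhead cycles
    return rc - toc                              # Pa = Rc - Toc

for n in (1, 50, 100, 200):
    print(n, productive_cycles(n))
# Pa reaches zero at N = rc / (po + pw) = 200 CPUs in this toy setup.
```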

  • by Anonymous Coward
    How in general is this different than the approach taken by Plan 9? (http://plan9.bell-labs.com/sys/doc/9.html)
    • Offtopic, but still near-and-dear to the hearts of many readers: How does the licensing for Plan 9 work? I saw a copy of the GPL [bell-labs.com] listed as Exhibit A at the very bottom of their own licensing page.

      The licensing looks liberal enough (it's labelled the "PLAN 9 OPEN SOURCE LICENSE AGREEMENT"), but how is it related to the GPL?

  • by Mtgman ( 195502 ) on Monday August 13, 2001 @12:30PM (#2123896)
    Treating each node as a peer! Don't they know that peer-to-peer networks are stealing from our musicians [riaa.com] and corrupting our youth! [slashdot.org] I just hope they can repent before the heavy hand of justice [bsa.org] comes down on them.

  • It seems to me that they are just using asymmetric multiprocessing on a distributed Linux cluster. (Asymmetric meaning that each CPU has a specific function.) This idea is certainly nothing new, though the novelty might be in using tailored Linux instead of the typical UNIX environment. (Different processors, typically.) Of course, this is an area that has been the focus of a great deal of research, so perhaps they are just attempting to move this idea into the private sector by using the less expensive Linux distros as a viable economic alternative for greater computational power?

    Root DOWN
    grep what -i sed?
  • I'm wondering if the rest of the engineers groan when one of them takes a look at a computer and says:

    (clears throat)

    "Imagine a Beowulf cluster of these things!"
  • "Looks like Cray engineers think about clustering even when they're not at Cray."

    Well, duh, Cray machines are massively parallel processing machines, so they're not clusters in the sense that they don't use network cards and separate computers as basic computing units, <OVERSIMPLIFICATION>the processors talk to each other on the same bus and share the same memory</OVERSIMPLIFICATION>, but basically in either case it's about parallel processing. I *hope* Cray engineers think about clusters. I'd hate to see them think about single Athlon supercomputers ...

    • Not so fast.

      The CRAY 1, 2, XMP, etc etc are all VECTOR machines. Some of them happen to be parallel vector machines (multiple VECTOR processors)

      The T3 series are the MPP boxes. Cray's bread and butter was VECTOR machines though. MPP came about because some problems aren't easily vectorizable (but can run on MPPs, oddly enough).

  • Gnutella parallel... (Score:2, Interesting)

    by Saeger ( 456549 )
    Sounds to me like they've rediscovered the concept of a supernode where it's acknowledged that not all peers are created equal.

    (I know--not the best analogy)

  • The real limitation of Beowulf-style computing is RAM. Beowulf is great if you have programs that parallelize with little intercommunication and low RAM usage. The bigger problem is RAM. Big iron like Crays/Suns/SGIs can have about a terabyte of RAM in one place. When you are trying to do large physics calculations you usually have a huge data set you need to store for every time series. Supercomputers aren't cool just because they are fast, but because they can hold HUGE amounts of data in RAM for easy access. Until PCs get a few gigs of RAM per box, cluster computing is still going to be kludgy no matter what kind of message-passing scheme you use.
  • Would it be conceivably possible to use mixed architectures and assign certain tasks or routines to the architecture best suited for them? Rough example: a 200 MHz Pentium and a 200 MHz Cyrix system in the same cluster. Two calculations need to be performed, one integer, one floating point. Send the integer job to the Cyrix and the floating-point job to the Pentium. Rather than fight the fact that some platforms are better than others at certain tasks, work WITH that fact.
  • by HRH King Lerxst ( 79427 ) on Monday August 13, 2001 @12:34PM (#2141104)
    I like the idea of a furry cluster: furbeowulf [trygve.com].
  • SJVN commentary [byte.com] on distributed computing and some interviews [byte.com] with various people in the field.
  • From the article:
    The idea is to free some computers from getting bogged down in processing interrupt requests from peripherals, while letting a second set of machines run the full operating system, furnishing the cluster with networking, job scheduling, input/output, and other capabilities.
    The central design theme of the CDC 6400 was exactly this, and it is a product of the mid sixties. In that incarnation, the two central CPUs ran only user applications, while the operating system, with all its interrupts, OS code, and device drivers, would reside nearby in the ten Peripheral CPUs (called PPUs) provided for this purpose. The central CPUs didn't even have an interrupt capability.

    Guess who the CDC6400 designer was? Seymour Cray.

    • > In that incarnation, the two central CPUs ran only user applications, while the operating system, with all its interrupts, OS code, and device drivers, would reside nearby in the ten Peripheral CPUs (called PPUs) provided for this purpose.

      When I heard a CS professor talk about putting multiple CPUs on a single chip, I suggested that one of the CPUs should be dedicated to the OS, which would mean that it wouldn't even need a FP unit. So for (say) 4 or 8 computers on a chip, one would be a "OS server" and the others would be "application servers". Ditching the FP on the "OS server" might allow an extra-high-performance design for it. And the others would only need context switches when the OS demanded it, rather than one for every stinking interrupt that came along.
      • (CDC 6600) ten Peripheral CPUs (called PPUs)

        There was only one PPU on the CDC 6600, equipped with a hardware multiprogramming unit to make it look like ten independent CPUs. The PPU had ten sets of machine state and I/O channels, but only one arithmetic-logic unit (ALU). This was back when the ALU was the expensive part of the CPU.

        one of the CPUs should be dedicated to the OS

        MacOS 8 on multiprocessors was the last major commercial incarnation of that idea. But that was an ugly hack to put multiprocessing on a uniprocessor OS.

        There's a lot to be said for channelized I/O, like mainframes have. All the peripherals look roughly similar to the OS, security is better because peripherals can't write all over memory, and there's less diversity in drivers. Intel tried this, but ran into troubles because they made the channel controllers fully programmable and put a little OS under Windows to run them. Microsoft hates it when you put stuff under their OS.

  • Wouldn't HURD be more suitable for such a task?
  • by TechnoVooDooDaddy ( 470187 ) on Monday August 13, 2001 @12:26PM (#2142281) Homepage
    Cray's engineers always seem willing to consider every possibility, be it clusters, P2P, parallelism, etc. Showing that they're thinking well outside what they currently offer also shows why they're still in the game, and even ahead in serious computing power, after so many years. IBM, Sun, etc. have had their rises and falls, but Cray is always mentioned with reverence...
    • These were ex-Cray engineers, which perhaps says something in and of itself...
    • Their homepage seems to be http://www.unlimitedscale.com/ [unlimitedscale.com].
      Unfortunately, it contains absolutely no info on what they are up to.
      groups.google has a tiny bit more [google.com].
      And a bit on their funding [localbusiness.com].

      Anybody got any more info?
    • Of the top 500 supercomputers in the world, 47 are vector processor machines - the kind of processors that Cray became famous for.

      Not a single one of these is made in the U.S.

      Cray today is a name. It's a brand. It's not a manufacturer of high performance computers.

      Just for the record, IBM produced the two fastest computers currently, Intel the third, IBM the fourth, Hitachi the fifth, SGI, IBM, NEC, IBM, IBM, and Finally, number *11* is a Cray based on the Alpha processor (the T3E).

      So, tell me again, who was playing catch-up with whom?
      • The Top 500 is not a list of who makes the best machines; it's a list of the real-world installations that run LINPACK the best. LINPACK is a benchmark, not a real piece of code. The T3E holds the record for *sustained* performance on real-world code. That, in my opinion, and probably in the opinions of the people still buying them, is more valuable than any benchmark. The "systems" above the T3E on that list are all strongly connected clusters, not single-system-image machines. Well, the Hitachi and NEC boxes probably are SSIs. That's not to say they are bad, mind you, just that they aren't single machines. There are still some codes that run best on vector-based SMP systems, and for those codes, you buy a Cray. Also, for MPI code that communicates *a lot* between nodes, the T3E will run rings around a cluster.

        Just to summarise my basic point: the Top 500 is a benchmark, and not necessarily a good indication of who makes the better computers.

        • LINPACK is O(n^3) in computation and O(n^2) in communication, for problem size n.

          That's not a completely unfair benchmark, but you're right: it is a benchmark, and therefore it doesn't cover every possible problem out there. Still, it is based on the common linear-algebra routines that form the core of a very large part of the scientific computing problems being run out there.
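          A back-of-the-envelope model shows why this scaling favors clusters. The (2/3)n^3 flop count is the textbook figure for LU factorization; the n^2 communication term is only a rough stand-in for the data actually exchanged between nodes:

```python
# Rough scaling model for LINPACK on a distributed-memory machine.
def linpack_flops(n):
    # LU factorization of an n x n matrix takes ~ (2/3) n^3 flops
    return (2.0 / 3.0) * n ** 3

def linpack_comm(n):
    # data moved between nodes grows roughly as n^2 (a crude model)
    return float(n ** 2)

def compute_to_comm_ratio(n):
    return linpack_flops(n) / linpack_comm(n)

# The ratio grows linearly in n, so a big enough problem hides a slow
# interconnect -- one reason LINPACK can flatter loosely coupled clusters.
for n in (1000, 10000, 100000):
    print(n, compute_to_comm_ratio(n))
```

          Codes with a less favorable compute-to-communication ratio are exactly the ones where a T3E-style interconnect earns its keep.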

      • And it's not even the top of the line, if I'm not mistaken. I thought that was the T90.
  • As long as Beowulf clusters have been around, people have been doing this. In a homebrew system made from varying types and qualities of hardware, are you seriously going to have each node doing exactly the same task? No... you write your program (and Beowulfs are ALL in the programming) so that each node does what it's best suited for. The node with the big hard drive stores the data, the fast machine gets twice as many work units, the slow machine is devoted to taking user input or receiving the end result, etc. To do otherwise would be, well, stupid: the weak link in the system would slow the whole thing down.

    Creating job classes in a homogeneous cluster is just as useful. I seem to remember someone working on The Collective project [uidaho.edu] at the University of Idaho doing this with a genetic application. That cluster is pretty close to homogeneous.

    If you visit the site, the Borg penguins are my handiwork. :)
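    "The fast machine gets twice as many work units" is easy to sketch. The node names and speed scores below are made up for illustration; in practice you'd measure each node's throughput:

```python
# Split N work units among nodes in proportion to a per-node speed score.
def split_work(total_units, speeds):
    total_speed = sum(speeds.values())
    shares = {name: int(total_units * s / total_speed)
              for name, s in speeds.items()}
    # integer truncation can leave a few units unassigned; hand the
    # leftovers to the fastest node
    leftover = total_units - sum(shares.values())
    shares[max(speeds, key=speeds.get)] += leftover
    return shares

print(split_work(100, {"fast": 2.0, "slow": 1.0, "ancient": 1.0}))
# {'fast': 50, 'slow': 25, 'ancient': 25}
```

    The same idea generalizes to job classes: instead of one speed score per node, keep one per (node, task type) pair.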
  • I am curious about where clustering will go for the connection between nodes. Ethernet, Fibre Channel, and various proprietary formats are around, but all have issues. InfiniBand is also on the horizon. While I work in InfiniBand development, I am not involved with any kind of clustering work. I see Sockets over InfiniBand as an interesting method for inter-node communication. What do those of you who do work in the field think?
  • What these guys want to do is build, say, a cluster of 2-CPU systems where one CPU only computes while the other manages I/O and communications. Indeed, I/O is a real problem on Beowulfs, and dedicating a CPU to it and to communication can be cheaper than dedicated network hardware like Myrinet (at $1000/port) or SCI, or high-performance I/O like HiPPI. I wonder, though, whether they can beat the price/performance ratio of the latter the way Beowulfs beat traditional supercomputers on raw FLOPS.
  • If you're interested in general-purpose clustering (i.e., you don't want to rewrite all your apps to use MPI), I really suggest checking out Compaq's Single System Image Clustering (SSIC) project [bjbrew.org]. For Linux, this is basically in a pre-alpha state, but the older, UnixWare-based version was very strong.

    They also have a good comparison of clustering technology features on this slide. [bjbrew.org] For now, you need a shared SCSI disk that can run GFS or something similar, but it may be possible to hook in PVFS eventually for low-end stuff.

    Basically, SSIC is like MOSIX, but with killer high availability features. If a node goes down (from hardware, OS, or application failure), its workload is seamlessly migrated to another, functioning node. On MOSIX, unfortunately, each process has a "home node." If the home node goes down, the process is dead. SSIC also does load balancing by process migration, and all of that good, high scalability stuff.

    Anyway, just give it a look, and check out their slideshow...

  • A Cray cluster of (Thing)? Is that what we need to say now?
  • ... project is already doing it. See their Linux Scalability Project [sgi.com].
  • Linux/Unix OSes rule

The only possible interpretation of any research whatever in the `social sciences' is: some do, some don't. -- Ernest Rutherford