

Linux Clusters Explained
tramm writes: "As someone who works on massively parallel Linux clusters every day,
I get tired of explaining why it is not 'Just another Beowulf'.
LinuxWorld has a good article on the four major types of Linux clusters. Our work is in supporting scientific codes that have a high degree of communication. This requires a very different system from the standard Beowulf-class machines that excel at the 'embarrassingly parallel' codes that do not require as much communication. The cost of the network interconnect for a high-performance cluster is vastly more than that of a generic 100base-T system."
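To make "high degree of communication" concrete, here is roughly the kind of microbenchmark that separates the two classes of machine (a sketch written for illustration, not code from the article): a two-node MPI ping-pong. On a tightly coupled interconnect the round-trip latency is an order of magnitude or more better than over commodity Ethernet, and that is what the extra money buys.

/* Sketch: MPI ping-pong between ranks 0 and 1 (illustrative only). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    char byte = 0;
    double t0, t1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    t0 = MPI_Wtime();
    if (rank == 0) {
        MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
    } else if (rank == 1) {
        MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("round trip: %g microseconds\n", (t1 - t0) * 1e6);

    MPI_Finalize();
    return 0;
}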
Yup. (Score:4)
No Linux clustering project will ever reach the performance of such systems (though some of them might eventually run Linux), but the low-end high performance computing market (yes, I know it sounds oxymoronish) is bound to be taken by Linux.
Clustering (Score:3)
Well, anyway.. "clustering" can mean just about anything to anyone, given the context. A system administrator probably thinks clustering is taking a single network resource and distributing it across several machines (which typically appear on the network as one logical object). A physics major might think clustering is a huge room full of SGIs all doing rendering and particle analysis... and an anonymous coward might think that clustering allows them to download pictures of Natalie Portman pouring hot grits down her pants. The point is that it means different things to different people.
Boiled down to the basics, a cluster is simply a group of machines working towards a common goal (whether it be filesharing, parallel computations, or whatnot). There may only be 4 types now, but next week, there'll be 6.
Strategy (Score:3)
In the past, you had to shell out beaucoup bucks just to get out of the chute. With a Beowulf strategy, you can make hardware improvements in concert with different software approaches to achieve an optimal price/performance ratio.
IBM SP Switch Adapter (Score:1)
They didn't talk about... (Score:5)
Nerd Clusters, which are more widespread than Beowulf, and more scalable.
For example, Nerd Clusters, using ChineseTakeOut messaging, are often used in last-minute, panic-stricken Intranet roll-outs, yet each node of a Nerd Cluster can answer simple management questions such as, "Hey, my PC at home crashes all the time. How can I fix it?"
Nerd clusters are, however, more dangerous to operate. If, for example, you say "Let's migrate our core applications from Solaris to NT", you run the risk of massive memory leakage as individual Nerd-nodes begin to prioritize jobs such as "update_resume" over your request queue.
Nerd clusters need a "master" node as well. These can generally be identified by their bushy beards, or a long string of nodes queueing up to beg for static IPs.
Step 1 of Big Computer Creation.... (Score:3)
If I were on one of these teams,
the beast would be called...
COCK
Centralized
Organizational
Computational
Kluster.
A leading female computer scientist was overheard saying, "My, what a big COCK you have!"
"University of ________ researchers have built the world's largest COCK."
"Some bugs were found in the COCK today."
It could go on for days.
Be thankful you are not my student. You would not get a high grade for such a design.
Re:Yup. (Score:2)
Sure.... (Score:1)
Can you run Beowulf on them? `8r)
--
Gonzo Granzeau
They forgot Condor... (Score:5)
You don't need to have a dedicated cluster - Condor started life as a scavenger of idle workstations. We run Condor on every workstation here at CS, and routinely recover several thousand CPU-hours a day that otherwise would have been wasted. You can configure Condor to run with any policy you want on a per-workstation level - only run jobs at night, only run jobs from this group, only run jobs if the wind is blowing from the west - whatever makes sense to the workstation's owner.
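For example, an owner's policy might look something like this in the local config file (a sketch using the attribute names from the standard example policy, not a real site's settings):

# condor_config.local -- only start jobs when the owner has been away
# a while and the machine is otherwise quiet (illustrative values)
START    = (KeyboardIdle > 15 * 60) && (LoadAvg < 0.3)
SUSPEND  = (KeyboardIdle < 60)
CONTINUE = (KeyboardIdle > 5 * 60)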
Best of all, we're free-as-in-beer.
If you have any questions, send us mail at condor-admin@cs.wisc.edu [mailto]
Re:Clustering (Score:1)
Well, to me, it means any team activity (usually as part of employment) that produces completely botched results.
Example: "My ISP really clustered on that DNS switchover."
Re:Clustering (Score:1)
Pretty soon the fridge will have enough power to browse the web - but where are you going to encode your home video DVDs, etc.? You wouldn't want a big cube under your desk; it would either be in the basement or distributed around the house.
Maybe a thin Beowulf client will get into the kernel (I'm not up on the technique) and Linux will be the answer, or maybe Micro$oft's research on distributed computing will make it.
Whatever the future holds, hold on to the good things in UN*X, like running X windows from the server in the cellar, and don't let fancy desktops bring in anything that binds you to your machine instead of leaving you free to grab any spare resource on the network.
BTW: does any expert out there know how I can start a shell session on my server, log out of my X-terminal without quitting it, and be able to check on its progress the next time I log on?
Re:Yup. (Score:3)
Re:Yup. (Score:2)
Re:IBM SP Switch Adapter (Score:2)
However, IBM has said that the PCI bus is too limiting and the next generation switch will have a special interface (probably right on the CPU bus).
The most... (Score:1)
Re:IBM SP Switch Adapter (Score:1)
Re:BEOWOLF IS A CHARACTER FROM 'THE HOBBIT'!!!!!11 (Score:1)
Nice to see... (Score:3)
SIGHYPOCRISY received: Dumping core
panic: Hypocrisy error in SIMM 0x0B
panic: Hypocrisy error in SIMM 0x0B
panic: Hypocrisy error in SIMM 0x0B
panic: Hypocrisy error in SIMM 0x0B
Syncing filesystems... [11][8][6][3.14159][0][0][0][0][0][0][0][0][-1]
System Halted
Press any key to reboot
Re:Clustering (Score:2)
Re:BEOWOLF IS A CHARACTER FROM 'THE HOBBIT'!!!!!11 (Score:1)
Re:Clustering (Score:1)
Re:Yup. (Score:3)
> No Linux clustering project will ever reach the performance of such systems (though some of them might eventually run Linux),
With standard Wintel hardware, agreed. I predict that the hardware will migrate to commodity components as the off-the-shelf cluster becomes more popular. In many supercomputers, the CPUs are nothing special; it's all in the interconnect hardware. The real bottleneck at the moment is the PCI bus. However, 64-bit 66 MHz buses are becoming a bit more common now. The rollout of 64-bit CPUs will speed that along. AGP shows some promise. Clearly, the industry is realizing that 33 MHz 32-bit PCI isn't fast enough. Bridge chips are getting smarter. The next step is to replace the bus with a crossbar switch driven by the bridge chips and providing backwards compatibility for existing PCI devices. I also wouldn't be surprised to see gigabit serial devices connected to the FSB soon.
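To put numbers on it: 32-bit/33 MHz PCI peaks at 4 bytes x 33 MHz = 133 MB/s, shared by every card on the bus, while 64-bit/66 MHz PCI peaks at 8 bytes x 66 MHz = 533 MB/s. A single gigabit interconnect card already wants ~125 MB/s each way, so one NIC can saturate the old bus all by itself.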
At that point, the gap between custom supercomputing hardware and commodity clusters will be much smaller. Every time the interconnect gets faster and lower latency, the problem set that can be handled by commodity clusters expands.
"Clustering" as more then name (Score:1)
But Beowulf is not really clustering technology. It is not really a technology at all - it is simply a buzzword. Beowulf (for example the 'Extreme Linux' distribution from NASA) is simply plain Linux with some user-space programming libraries like PVM and MPI, which are developed elsewhere. Networking a couple of computers and running PVM on them is nothing new. Networks of Sun SPARCs were common long before anyone had heard of Linux.
True clustering technology should be integrated at the OS level. The OS must be aware that it is running in a cluster environment, and deal with this in appropriate ways, such as balancing processes over the entire cluster.
For example, using "clustering" technology like Beowulf (PVM), processes get allocated statically on each node in a round-robin fashion. If more than one user is running on the cluster, or the cluster is heterogeneous - different speeds and memory sizes - PVM will give far from optimal performance. If a node begins thrashing, i.e. it fills up all of its physical memory, PVM will not migrate processes off that node to make the thrashing stop. (See the sketch below.)
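A minimal sketch of that static behavior (the "worker" binary and the counts here are made up for illustration):

/* PVM master: placement is decided once, at pvm_spawn() time.
   PVM will not move a task afterwards, even if its node thrashes. */
#include <stdio.h>
#include <pvm3.h>

#define NWORKERS 4

int main(void)
{
    int tids[NWORKERS];
    int i, val, sum = 0;

    /* PvmTaskDefault lets PVM pick hosts round-robin across the
       virtual machine; the assignment is then fixed for good. */
    if (pvm_spawn("worker", NULL, PvmTaskDefault, "",
                  NWORKERS, tids) != NWORKERS) {
        fprintf(stderr, "spawn failed\n");
        pvm_exit();
        return 1;
    }
    for (i = 0; i < NWORKERS; i++) {
        pvm_recv(-1, 1);        /* any task, message tag 1 */
        pvm_upkint(&val, 1, 1); /* unpack one int from the buffer */
        sum += val;
    }
    printf("sum = %d\n", sum);
    pvm_exit();
    return 0;
}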
The only technology currently available that does all of this in Linux transparently to the user is MOSIX (http://www.mosix.org). It integrates into the Linux kernel and provides transparent process migration for load and memory balancing.
This means that you can both run PVM and legacy applications on a MOSIX/Linux cluster and achieve optimal performance.
Oh great! (Score:1)
Now we're going to have 12 other types of clusters posted by trolls!
Can you imagine... a Jessica 2 cluster?
sheesh :-)
What about a Distributed computer? (Score:2)
I want a system that allows every CPU on my network to be available to every process on the network as if it were another CPU in the same machine. Sort of Distributed Multiprocessing.
I want a system where I can add a HD to any machine on my LAN and have it added to a single pool of disk space, much like multiple drives are attached to the root filesystem in Unix.
Then in this model every computer consists of a "CPU server", a "disk server", and a terminal with attached peripherals, like keyboard, mouse, scanner, joystick, etc. The whole network is literally a single high-availability computer.
If I set fire to a particular box then the computer/network just doesn't use those resources anymore. When I replace that box, the whole network/computer seamlessly gets faster/has more space.
Every application sees a "simple" multitasking environment. It tries to execute on the local node (i.e. the one the terminal is connected to) and draws resources as needed from any other nodes.
The entire thing should be asymmetric, so if I try to run Quake on a 386, it just runs out to the network right away and uses additional CPUs, possibly even assigning the whole process to a more capable CPU like the Athlon in the next room.
Re:Clustering (Score:2)
I routinely run non-X, non-interactive apps thus:
nohup prog >output 2>&1 &
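(nohup makes the job immune to the hangup signal at logout; >output 2>&1 sends both stdout and stderr to the file; the trailing & puts it in the background.)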
Later when I log in, I
tail output
YMMV on C shell.
Do note that some versions of AIX have a bug in ksh which prevents a proper nohup; you should exec ksh first.
Regards,
Dean
Re:What about a Distributed computer? (Score:1)
Re:Oh great! (Score:1)
I immediately thought of Slashdot!
Data Warehousing in a cluster? (Score:1)
Where I work, there has been a nightmarish endeavor to implement a data warehouse (oodles and oodles of sales history data stored along a variety of dimensions, available for quick access and reporting). I haven't been involved in this project, but I understand that the technical challenges have been daunting, given a reasonable financial limit. One package simply failed to accommodate the amount of data once heavy loads were run, while another failed to process nightly updates within a specified timeframe.
My question is, would one of these clustering technologies be applicable to a data warehouse? I think of the 100 or so PCs that sit around the office doing nothing overnight, and wonder if an investment in some extra disk and some elbow grease might give us at least some functionality while handling that massive amount of data. Just a thought...
The ix86-based Netfinity range has a switch (Score:1)
God I love that network hardware. Another nice technology is SCI from Dolphin (http://www.dolphinics.com/)
Also forgot Eddie (Score:1)
Anyway, they also forgot about Eddie [eddieware.org], which is mainly designed for redundancy and load balancing for web servers.
Re:Yup. (Score:2)
> WHY do you want to keep backwards compatibility, pray tell?
Because it's a gimme. The hardware for PCI already exists, and there are bridge chips right now that can just about fit the bill. The cost of going that way is a deciding factor (it's a relatively cheap solution). It helps to avoid the chicken-and-egg problem:
Cluster users represent a small minority of PC purchases. Servers are a much bigger market, but the price/performance would have to be comparable to current server PCs to sell there. A manufacturer has a much better chance of a decent-sized market by incrementally improving on existing standards while maintaining backward compatibility. That's the whole point of commodity clustering.
Not that easy. (Score:1)
Good in theory, good in practice (Score:4)
We explored Beowulf, but after talking to those in the know, Maya's tile renderer is not well suited to a Beowulf system.
I looked at other solutions as well, but due to shared memory and the network bottleneck, nothing could take what we saw as a distributed system and turn it into a parallel system.
By using a load-balancing cluster, we are given the opportunity to render multiple frames at the same time, giving us a speed advantage. This uses more overall memory than a massively parallel Beowulf cluster, but it keeps the speed gain of a parallel system the same. The overhead exists for scene file loading because that is done on every machine, but it takes minutes when rendering takes hours. A fair trade.
The distributed system needs horsepower and memory more than network speed or file system speed. It is true that an increase in those will speed up the process, but the money is better spent on CPU and memory. Our systems are all dual 600 MHz with a gig of RAM per box. It may seem extreme, but from our SGI render benchmarking, the scenes that we render can take over 500-600 MB of system memory.
Is it worth the cost?
We are taking our current render system of SGI boxes, which are used as desktops during the day and render boxes at night, and adding full-time render boxes as well. The cost advantage of a Linux render box can be seen in the hardware price alone. We are using these Linux boxes to keep pace with boxes that cost at least 3x as much.
The only disadvantage is that the Linux boxes cannot be rolled out as desktop systems when new hires arrive, whereas the SGI boxes can. This is due to Maya's modeler being SGI/NT-only and our supporting Maya on the SGI only.
All in all, in our situation, a Linux cluster is a godsend, allowing me to have more horsepower and the company to save money.
Only 3 kinds of clusters. (Score:1)
Re:Clustering (Score:2)
"A cluster is a group of systems, bound together into a common resource pool. A given task, whether that task is a web server, mathematical calculation, or robotic cooking widget is able to be properly and arbitrarily executed on any of the member nodes within the cluster. (This does not imply that it can be run concurrently or in parallel on multiple nodes)
To the 'outside world', or the entity using the cluster, the cluster appears as a single object. That is to say, the cluster has the image of a single system. (single state image)"
I sometimes think an accurate and useful definition of 'cluster' is one of the holy grails of this industry. It's about as overloaded a term as the word 'stuff'.
Aaron McKee
Clustering Products Manager
TurboLinux, Inc.
OS Independent Clustering: Global Layer Unix (Score:2)
There were two design goals: operating system independence and program transparency. You could install and run the GLU daemons and programs on any system, and programs would be able to run on any node in the cluster without knowing about it (of course, for special GLU features there was a library of routines to use).
The GLU project purposely didn't make any kernel modifications, to aid portability. A few of the research papers gave ways to modify the kernel to improve performance or the feature set, but that wasn't the goal.
GLU also supports another Berkeley project called Split-C, which is a parallel extension of the C language. Use it and you do not need to call the GLU library directly; gcc will generate the appropriate code (I think that is how it worked?).
http://now.cs.berkeley.edu/Glunix/glunix.html
http://now.cs.berkeley.edu
Bandwidth (Score:2)
> ... well the different network components work together," said Brian Valentine, senior vice president of the Windows Division at Microsoft.
It is clear to me that Microsoft and I do not share the same uplink provider...
Re:Step 1 of Big Computer Creation.... (Score:1)
Ya know, I was talking to one of the guys who worked on ASP at Microsoft. Originally it was going to be called "Active Server Scripts", but the acronym "ASS" led to a slight change.
-Jon
Linux and HPC (Score:1)
Yes, but it depends on the software (Score:2)
For instance, Deja uses a large cluster of Linux machines to index their News feed.
Their software is custom written, though (including the databases).
It's the same situation at most search engines - Google uses multiple machines for indexing and searching.
It's going to be a problem finding a commercial database that will allow you to distribute it over multiple Linux machines, though.
Re:They didn't talk about... (Score:1)
BigIdea.com (Score:1)
Re:What about a Distributed computer? (Score:1)
I'm not an expert, but what we did was give multiple NICs to each machine... using multiple switches and multiple NICs, you can get as many machines as you want. We currently have 66, and they act as one machine...
Re:They forgot Condor... (Score:2)
Yes, Condor is great - we used it for balancing batch jobs on an 8-node cluster (16 CPUs), to make use of the extra cycles when it isn't busy running parallel jobs (it's an SCI cluster, 2x4 mesh, with software from Scali [scali.com]).
However, we had to give up Condor, and now we use PBS instead (which sucks in comparison), mainly because of two things. First, the whole libc6 issue: we needed the 2.2 kernel and glibc for the SMP support, but it took a long time before Condor supported glibc.
And then, when the libc6 version finally came out, we found that the client daemons couldn't make contact with the master node - the master bound to the wrong Ethernet interface (130.x.x.x), and I just couldn't make it talk to the other interface instead (a private subnet, at 192.168.1.x). I even tried IPChaining the UDP ports, which didn't work (the packets got in, but they never got out). The 2.0 kernel seems to have been more forgiving - traffic to one interface was let through to a daemon bound to the other. That seems not to be the case anymore.
Now, since I have you on the line, do you know if there is a solution for this? We would like to switch back to Condor, if possible. But if the clients can't talk to the master, it isn't very useful...
Re:Clustering (Score:1)
Check out screen - it lets you run many shells in one console / xterm / telnet session / whatever, and switch between them when logged in.
Before you log out, detach the screen session. All the shells will continue to run, and when you log back in, you can reconnect to screen and continue as if nothing had happened.
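The whole session, in short:

screen          (start it, open as many shells inside as you like)
Ctrl-a d        (detach; everything keeps running)
screen -r       (reattach from your next login)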
screen rocks
Regards,
Tim.
Re:Data Warehousing in a cluster? (Score:2)
For that kind of work, you probably don't want a cluster, but a Sun E6500 [sun.com] or a similar machine instead. They are built specifically for business applications, like huge databases and that sort of thing. Clusters are generally useful for scientific computing, which is another thing entirely.
Couple of points (Score:2)
*sigh* But it was not to be. Dolphin ran up against compatibility and performance problems with PCI chipsets before anyone else did, and the drivers weren't stabilized soon enough, and they never really figured out who their market was, and they made the major mistake of being honest with customers while competitors were bullshitting about stuff as though they actually had it ready to ship when in fact it was barely even on the drawing boards. In the end the liars and cheats got the mind and market share, and Dolphin is barely eking out an existence nowadays.
I also take issue with the following from the article:
>HA clusters may perform load-balancing, but systems typically just keep the secondary servers idle while the primary server runs the jobs.
Bull. Idle standby is just _so_ early-90s. I worked on eight-node mutual-standby (i.e. load sharing with potential for full failover) clusters in '94. We were before most people, but not first. Nowadays almost nobody would buy an HA solution without this capability.
Re:What about a Distributed computer? (Score:1)
Re:What about a Distributed computer? (Score:1)
Check out http://www.mosix.cs.huji.ac.il
O'Reilly is coming out with a new book (Score:1)
I think some of you are going to find it quite interesting.
Note that I am working on my PhD thesis in this field of research (specialized in MPI), and we have software available at http://www.itl.nist.gov/div895/savg/auto/ [nist.gov] designed to help users work with data-types in MPI.
Please drop me a note at martial.michel@nist.gov [mailto] if you desire more information on our project (we hope it will be added on the CD-Rom of the O'Reilly book).
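For those who haven't used them, MPI derived data-types let you describe a strided or structured memory layout once and then send it as a unit. A plain-MPI sketch (standard MPI calls only, nothing from our package) that ships one column of a row-major matrix:

/* Send column 2 of a 4x4 row-major matrix from rank 0 to rank 1
   by describing it as a vector type: 4 blocks of 1 double, stride 4. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    double m[4][4];
    MPI_Datatype column;
    int rank, i, j;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++)
            m[i][j] = (rank == 0) ? 10.0 * i + j : 0.0;

    MPI_Type_vector(4, 1, 4, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    if (rank == 0) {
        MPI_Send(&m[0][2], 1, column, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        MPI_Recv(&m[0][2], 1, column, 0, 0, MPI_COMM_WORLD, &status);
        for (i = 0; i < 4; i++)
            printf("m[%d][2] = %g\n", i, m[i][2]);
    }

    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}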
Re:They forgot Condor... (Score:2)
> As for the multiple interfaces, that is now supported as well. Please look at section 3.11.8 in the V6.1 manual.
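(Presumably that means a config knob along the lines of NETWORK_INTERFACE = 192.168.1.1 to pin the daemons to the private subnet - check the manual section cited above for the exact name.)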
Ah! Thank you. I'll test it at once! :-)
Re:Yes, but it depends on the software (Score:1)
Disclaimer: I'm using just a single server for development, and it's on the NT Workstation partition. Caché is supported on RH 6.1; I have SuSE, which is rumored to work with a few tweaks, but I haven't gotten around to installing Caché on it yet - probably will, though, because the Linux version is also rumored to be very fast :-)