Linux Clustering Cabal project 59
RayChuang turned us on to
this ZDnet story about the Linux Clustering Cabal project, which, Ray says, is "...the one that will allow Linux server clustering of many server machines. Sounds like just the thing to finally get eBay working reliably and also make John C. Dvorak eat his words about the deficiencies of Linux."
Re:But how does it work? (Score:2)
Re:But how does it work? (Score:1)
Yes those are valid, but not clustering (Score:2)
Re:Is there a link for this? (Score:2)
There isn't a whole lot there right now.
Re:What John C. Dvorak wrote (Score:1)
Here are some more projects that might be worthwhile:
- The Bob Metcalfe Word-Eating Project
- The SCO Project
- The Mindcraft Project
Licensing (Score:1)
First off, the product isn't licensed as free software or OSI Certified -- because there's not yet either a product or a license (which is to say, WRT product, there's a program, but it's not yet a product).
From what I've pieced together of comments of Larry's (SVLUG), web blurbs (his, others), and the license sketch currently on the download site, terms will be liberal but not quite free. Larry likes the idea of free software, but isn't convinced he can make a commercial go of it all in and of itself. Specifically, my impression was that the source is available and hackable (a specific requirement of Alan Cox, per Larry).
Given that his business model right now is sort of half-software house, half-consulting services (SW: BitKeeper and others, services, Hunkin' Big Clusters), I'd like to hope he eventually discovers he doesn't have to worry quite so much about this. Along the lines of Cygnus.
For insight on licensing, you might want to read:
Note that the most commonly cited alternatives to Larry's solution all have pretty heavy consequences:
The BitKeeper license is most like the SCSL, though the intent seems to be to build a code escrow term into it which reverts to GPL should BitKeeper fold or fail to maintain the source.
Addressing specific points of your post, certain libs of BitKeeper will be GPLd or LGPLd, allowing them to be redistributed or incorporated into projects under terms of the GNU [L]GPL.
WRT your bugfix and feature comments -- the BitKeeper license is oriented around limiting the potential for fragmentation. It's got some elements of the common view of the xBSD development model (centrally controlled cabal), and I share your view that this is, if not a Bad Thing, at least a Thing of Questionable Worth (TM).
I can't see how your last point (source is still closed) stands with your other arguments. The source is available; it can be reviewed, modified, and mucked with. It's not compliant with the OSD, but it's certainly not proprietary either.
Larry's blazing a new path here; it'll be interesting to see how it plays out.
Re:Are they trying to duplicate SGI? (Score:1)
They thought they did have some competitive advantage on the Intel platform, and technically speaking, they did, with their UMA architecture and strong texturing and video capabilities. However, SGI's woolly thinking since 1996 has overlooked the fact that a differentiated product is not enough: you have to be differentiated in an area that adds significant value to a significantly large market.
(for the cognoscenti, there is nothing technically inferior about the MIPS architecture)
I agree, but this is pretty irrelevant. At the end of the day, it's all economics, and MIPS and other RISCs have steadily lost substantial price/performance ground to Intel, with no business model that ever made sense to regain it (i.e., one amortizing both fab and multi-team design investments over relatively small volumes; their embedded strategy overlooked the fact that embedded volumes don't cut your high-end processor design costs).
The engineers there (grossly generalizing here) got really excited about texture mapping but the bulk (40+%) of the workstation markets didn't need it very much -- CAD engineers wanted more polygons, not texture fillrate. Sun and HP paid more attention to CAD and stopped SGI's growth in its tracks. The texture-intensive "entertainment/digital content creation market" was growing from 10s of millions and never bulked up enough to help save SGI.
If you work for a company, you'd realise that the first law is survival, which is dependent on market relevance.
I agree totally, and this is exactly what has been giving SGI such trouble. They've focused more on where they could do interesting cutting-edge differentiated things than on where they could be most relevant to the largest segment of customers.
In hindsight, their timing was just bad -- they should have either gone NT a couple of years earlier or held off till later (a la Sun). And they haven't to this day figured out how to reduce engineering cycle times for their products down to the PC standard of 6-12 months per product instead of 3-4 years.
I still wish em the best, but find it hard to put much hope in them at this point.
--LP
Re:What I need for clustering (Score:1)
Re:Are they trying to duplicate SGI? (Score:3)
Note that SGI is showing all the signs of entering the death throes stage. Another 30% of the workforce laid off, abandoning major initiatives, CEO bailing (to MS!!), loss of faith by major customers.
Unless you've got inside information (which the SEC would be very interested in hearing about), I think the slashdot audience would appreciate more evidence than mindless parroting of the popular press. For your information, they are spinning off several portions of their divisions into separate business entities. Now, while some people may consider this akin to kicking fledglings out of the nest, the rate of turnover in Silicon Valley is such that the difference between working for one company vs. another is just which branded T-shirt you wear. Think of it as a beehive, with clumps forming and dispersing to form interesting new combinations. Abandoning major initiatives? How many announcements have you heard from major companies that later died the silent death of being irrelevant to real needs?
As for the CEO, well, I'm sure there will be some interesting books a few years down the track, but for many hard-core SGI purchasers, the shift into Intel consumerism, where they did not have any competitive advantage, showed some very woolly thinking (for the cognoscenti, there is nothing technically inferior about the MIPS architecture). The loss of customers is not surprising considering that many applications that used to be top-end in the 70s can now run on a single modern processor and a big cache (the refuge of the lazy microarchitect). Getting a free ride from Moore's Law is not the same as coming up with innovative new software applications that can really take advantage of increased CPU capacity (apart from molecular simulations, which will chew up any CPU cycle you throw at them).
Customers will buy SGI equipment if SGI can show they offer a value proposition that is worth the premium over mainstream machines; whether it is memory latency, quality engineering, coolness factor or whatever, people will buy (oh, and getting their manufacturing/distribution process to be more efficient would help a lot). Computers are becoming so prevalent that the only distinguishing feature nowadays for PCs is image and lifestyle (does the color clash with the decor?).
Reasonable people must expect that SGI goes Chapter 11 RSN (barring a government bailout) and then what happens to people who need supercomputers?
Would you say Apple devotees are unreasonable? Don't you understand that, given a planet of 5 billion odd people, not everyone is interested in the toys you are? Cries of doom and gloom have always been around in any industry in one form or another, as they give paper pushers a reason to justify their existence instead of getting their hands dirty coding or designing. You have to realise that SGI serves a fairly specialised market (data-intensive, high-end graphics, scientific back-end grunt machines) in the 50K-50M range. Much like Porsche and BMW cater to a clientele that wants absolute performance and not cheap consumer junk (admittedly the Japanese have given the US auto industry a shot in the arm since the 80s), there will always be people who appreciate the qualities that SGI offers. Provided SGI can continue to support those companies and not go around trying to push Porsches at people wanting bicycles (amazing how hype can convince people they need a Pentium III to browse the web) at an affordable price, they will survive.
If you work for a company, you'd realise that the first law is survival, which is dependent on market relevance. SGI will continue so long as there is demand for their expertise at prices competitive with other market alternatives.
LL
Other references (Score:3)
Greg Pfister's book is good -- the details are somewhat dated, though the conceptual portion appears to be aging well.
Distributed.net has a page with references for other texts [distributed.net] on clustering. `Course, you can always check out the related book purchases links at Amazon.
right and wrong (Score:4)
But clustering is very different from the examples you give. It's not running different services on different machines. It is taking a bunch of machines and making them act as one.
Beowulf-style clusters are one way of doing this, but there's a limit to how many nodes you can connect that way and still get performance increases. It scales up, but probably not to thousands of nodes. Now, the LCC people obviously haven't built anything to prove that they can do better, but it sounds like they may have a theoretical improvement.
And, it's only hinted at in the article ("satisfies both commercial data processing and HPC requirements"), but it's possible also that this technology is not only fast but, unlike Beowulf, also provides improved robustness.
This is all vapor now of course. But we'll see. The people working on this have some important projects to their credit.
--
Re:But how does it work? (Score:1)
What John C. Dvorak wrote (Score:1)
I will find you the url to John C. Dvorak's article if you want it.
Re:*cough* Clustering 'new'? (Score:1)
The Operating System Slashdot Would Be Running On If Unix Weren't Around[TM]. You can put a VMS cluster behind a single IP address and then just throw machines at the cluster at will. On another cluster, you have a single logical Oracle or Rdb database instance and do the same -- scale by throwing machines at that cluster. IMVHO, it's way superior to what the Unix guys provide at the moment (said the guy who had a VMS cluster running in his attic for years).
I do remember, though, that a number of features from VMS clusters were implemented by special hardware: multi-hosted hard drives (DSSI) that could participate as a voting member of the cluster, boxes with cluster-wide shared memory, etcetera. I'm interested to see how they work around that (I assume they restrict themselves to software).
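The voting mentioned above is just quorum arithmetic: a cluster only keeps running while a strict majority of the expected votes is present, which is why a multi-hosted disk that contributes a vote is useful as a tiebreaker. Here's a minimal sketch of the idea (an illustration only, not DEC's actual algorithm):

```python
def has_quorum(votes_present, expected_votes):
    """A cluster operates only while a strict majority of the total
    expected votes is present; this prevents two partitioned halves
    from both believing they own shared resources ("split brain")."""
    quorum = expected_votes // 2 + 1
    return votes_present >= quorum

# Five voting members (say, four hosts plus one voting disk):
assert has_quorum(3, 5)      # majority partition keeps running
assert not has_quorum(2, 5)  # minority partition must suspend
```

With an odd total vote count, exactly one side of any two-way partition can hold quorum, which is the whole point of letting the shared disk vote.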
Expertise -- McVoy formerly worked at SGI (Score:2)
There's a bit of info at his homepage [bitmover.com] and resume [bitmover.com]. I think you might have found your sophisticated know-how.
Re:Expertise -- McVoy formerly worked at SGI (Score:1)
apparently confused the BitKeeper licensing model with the clustering stuff. The two are separate. BitKeeper has its own license, and as far as I know 100% of the work we are doing for clustering is GPLed -- not even LGPLed, straight GPL.
That said, I'll respond to some of the inaccurate summaries of the BK license:
a) You can't take bits of the product and use it. That's basically true; you have to ask first. But for chunks that don't compete with BK directly, we'll happily free them. The most obvious one is the mmapped / anonymous DBM lib we wrote, which will be released under the GPL. A somewhat different version of the same code is in the process of being released under the GPL by SGI, or so I've been told.
b) You can't redistribute the product. That's just plain false. You absolutely can redistribute it, without fee. However, if you modified it, it has to pass our extensive regression test.
Etc. Since the BK license isn't complete yet, I'd thank people like Anonymous Coward to wait and see. We're trying to be good guys and make a living. Since we did all the work, we get to choose how we make that living. But we are definitely committed to letting people who are working on public projects use this for free, and we will try to be accommodating to people at research institutions (hey, Nat) who want to use it but can't afford it.
Re:What I need for clustering (Score:1)
We could use this... (Score:1)
(actually, although I know something like this could have many far-reaching useful applications, I'd be happy with web sites that aren't susceptible to the slashdot effect.)
*cough* Clustering 'new'? (Score:5)
Clustering isn't ground-breaking technology -- it's been around for a long time. The concept of parallel processing has been around for a long time too... and it doesn't seem like many manufacturers are rushing to get their products working on Beowulf clusters.
This isn't to say it isn't a great idea -- it's just that there isn't any support for it. There are plenty of alternatives too. For example:
Webservers: Set up several servers, with an SQL backend (or an NFS-mounted partition) to hold the content. For added speed, throw squid over that setup. You can even have remote caches access your servers round-robin style by putting in multiple 'A' records.
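The effect of publishing multiple 'A' records is easy to simulate: a round-robin nameserver rotates the address list it returns, so successive clients land on different machines. A minimal sketch (the addresses are hypothetical placeholders, not anyone's real servers):

```python
import itertools

# Addresses you might publish as multiple 'A' records for one hostname
# (hypothetical -- substitute your actual web servers).
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# A round-robin resolver effectively hands out addresses starting at a
# different offset on each query, spreading clients across the pool.
rotation = itertools.cycle(servers)

def next_server():
    return next(rotation)

# Three successive "queries" hit three different machines,
# then the rotation wraps around.
picks = [next_server() for _ in range(3)]
```

Note this only spreads load; it doesn't notice a dead server, which is one reason real clustering is harder than handing out addresses in turn.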
DNS/mail: Heh. Even the IETF got this one right by suggesting primary and secondary DNS.
Filesharing: There is some work being done to use a 'real' Beowulf cluster as something of a decentralized logical file server. For now, use AFS or Coda, which have all kinds of cool performance benefits. As an aside, both are a helluva lot more stable than the Nightmare File System (NFS).
Printing: There are affordable net appliances to do this (HP print server, anyone?), and even some printers support direct access. Failing that, setting up multiple servers for multiple printers works pretty well -- this is decentralized by design anyway...
So there you have it... all the staples of the corporate network - "clusterized". New technology? I don't think so. All the examples I gave you are in wide use (and have been for some time!).
--
But how does it work? (Score:5)
For those who want some background on the important issues, I highly recommend Gregory Pfister's book In Search of Clusters [fatbrain.com] . Clustering is a lot harder than most people realize, and people should not ignore the work that's been done before in this area. The important question for LCC is what is fundamentally new in their design. I doubt that the lack of kernel locks is really it.
The thing that remains to be seen is what set of applications they target, and what tradeoffs they make to support those applications. The fundamental issues in clustering have been addressed by a large number of research projects and products, and I'd like to know what's new about LCC.
That being said, I'm happy that some smart people are going after this problem!
duh. hasn't this been done before? (Score:1)
or eddie http://apps.freshmeat.net/download/924568847/ ?
Slow down ... (Score:1)
One thing, though: given the amount of raw CPU power and throughput required now and in the future, it is great to read something like this. It is something one company alone cannot keep up with.
Are they trying to duplicate SGI? (Score:4)
Given the direction that SGI is heading (Linux for entry-level & apps + IRIX kernel extensions for high-end), I wonder whether the LCC will produce anything practical in a realistic time-frame. This is not to decry their laudable efforts, and I would hope businesses are patient enough to wait for robust and cheap solutions. If nothing else, it will hopefully offer a standardised set of software extensions (a la OpenMP [openmp.org]) and coding practices so that a single source tree can support 1 to n processors.
Who knows, they might be able to come up with a few tricks that the pros have missed.
LL
Trend Setting (Score:1)
Re:clustering, distributed file systems (Score:1)
Wow, new record ! (Score:1)
So some well-known people are somewhat involved in some project that has a three-letter acronym: Linux [buzzword] Cluster [buzzword] Cabal [you need three words to make a TLA].
What are they aiming for? Is development going on at all? Do they have _any_ goals yet, except to make this cluster stuff and put the rest of the cluster-stuff projects to sleep?
If anyone knows more than was put in that [cough] ``article'', I'd be delighted to know about it.
Or, perhaps it will get posted once they get their record...
clustering, distributed file systems (Score:1)
clustering and distributed file systems?
As far as I know, clustering combines CPUs, while a DFS combines disk space. I may be wrong, so please correct that assumption if I am.
Are there any other differences?
Difference between this and beowolf? (Score:1)
Re:Are they trying to duplicate SGI? (Score:1)
Argh, bad factfinding (Score:2)
I've been seeing mentions of Braam as "head of the Coda project" and "the man who created Coda" a lot recently, and it's starting to get annoying. Does nobody do any fact checking anymore?
Re:Expertise -- McVoy formerly worked at SGI (Score:1)
Re:Difference between this and beowolf? (Score:1)
SGI duplicating Apple :P (Score:3)
Licensing of clones, the Newton, the eMate, etc. They were losing major money/resources and got rid of people. Their CEO left, and everyone thought they were going to die.
They rehired Steve Jobs, trimmed their products down to their core strengths, and are now worth more than they ever have been before.
So for SGI, spinning off and properly marketing their strengths (without tying them down to SGI), such as MIPS, Cray, and their Visual Workstations, while focusing on IRIX high-end supercomputing and Linux on their low-end desktop workstations, gives them a reasonable future. If they can focus on their core strengths and not waver or get distracted...
It's a perfect chance to buy their stock at 11 and (hopefully) see it go to 40!
-AS
Another sign of end of the supercomputer business (Score:1)
TurboLinux's Cluster project (Score:1)
Re:Are they trying to duplicate SGI? (Score:1)
Re:Another sign of end of the supercomputer busine (Score:1)
But not to 1024 nodes! (Score:1)
It's a pity that the article doesn't have more detail -- my reading is that it's a statement of intent for now.
Leave them alone and let's see what they can come up with.
What I need for clustering (Score:4)
I'd also be interested in hearing about any Free Software databases that can do this sort of synchronization. Thanks.
Bruce
Re:right and wrong (Score:1)
AFAIK, Beowulf is not really a general clustering solution. Beowulf is more concerned with parallel processing than general clustering. PP takes a problem, breaks it down into many small pieces, distributes those pieces to a bunch of nodes, sets them working, and then collects the results. Your application has to be written specifically for Beowulf, and each node is distinct.
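That scatter/gather shape (split the problem, farm out the pieces, collect the results) can be sketched in a few lines. This uses Python's multiprocessing on one box purely to illustrate the programming model -- a real Beowulf cluster distributes across machines with message passing (MPI or PVM), which this doesn't attempt:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each "node" works on its own piece of the problem independently.
    return sum(chunk)

def parallel_sum(data, nodes=4):
    # Break the problem into one chunk per node (round-robin split)...
    chunks = [data[i::nodes] for i in range(nodes)]
    # ...distribute the chunks, set them working, collect the results.
    with Pool(nodes) as pool:
        partials = pool.map(partial_sum, chunks)
    # Combine the partial results into the final answer.
    return sum(partials)

if __name__ == "__main__":
    # Same answer a single machine would compute, just split four ways.
    assert parallel_sum(range(1000)) == 499500
```

The point the parent post makes holds here too: `partial_sum` and the split/combine logic had to be written for this specific problem; nothing makes an unmodified program run across the workers.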
General clustering is, as you say, making many machines appear as one, not only to the outside world, but to the processes running in the cluster. Ideally, a cluster is no different from a single machine. In practice, it gets a little more complex (your applications typically need to be cluster-aware), but from a user POV, it should appear, roughly, as "one big machine".