Mosix 1.0 Released 77
Mosix is a scalable clustering system for Linux, released under the GPL. Version 1.0 for the 2.4 kernels is now available.
The use of money is all the advantage there is to having money. -- B. Franklin
Re:cross-platform clusters? (Score:1)
The way I understand this, the porch tool only does half of the work: porch only prepares a given piece of software's source to be used in a heterogeneous cluster.
You still need the heterogeneous cluster management software (in this case, the part that would handle the portable checkpoints - transfer them from one node to another).
Anyway, the porch tool (or something equivalent) seems necessary.
Re:Funny. I thought it... (Score:1)
Re:What about data from the harddisk (Score:1)
Idea Customize: ignore funny (Score:1)
Re:bad news (Score:1)
Re:Mosix is definitely cool... (Score:1)
Even with the machinery we've got now you can't do, for instance, first principles simulations of the growth of semiconductor structures.
Re:cross-platform clusters? (Score:1)
Too bad it's x86 only... (Score:1)
Re:Too bad it's x86 only... (Score:1)
Re:Finally, something resembling clustering for Li (Score:1)
Re:[OT] deluge of overrated posts (Score:1)
Actually, you could do it in two dimensions: up/down being good/bad and left/right being on-topic/off-topic, and then slash could render a little map showing where the post falls!
Linux's malloc() is broken (Score:1)
As an anal-retentive programmer, I do always check my mallocs for NULL and my new calls for std::bad_alloc. Whether this is actually valuable is debatable, however. On Linux, malloc() does NOT fail under memory pressure, because Linux does not commit the malloc'd page until it is actually faulted in. So your malloc() will return a non-NULL pointer as a weak promise to find an available page later. When you actually touch the page and Linux can't find a free one, it will simply kill your process.
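To illustrate (a minimal sketch, not anything from glibc; xmalloc is just a conventional wrapper name): checking the return value is still worth doing, but under default overcommit the only way to force the kernel to commit the pages up front is to touch them yourself.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Defensive allocation wrapper. Checks malloc's return, but note: under
 * Linux's default overcommit policy a non-NULL return is only a weak
 * promise -- the kernel may still kill the process later, on first touch.
 * The memset forces every page to be faulted in now, so failure (if any)
 * happens here rather than at some arbitrary later access. */
void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL) {
        fprintf(stderr, "out of memory allocating %zu bytes\n", n);
        exit(EXIT_FAILURE);
    }
    memset(p, 0, n);  /* touch the pages to force commitment */
    return p;
}
```

Pre-touching trades startup cost for a deterministic failure point; whether that's worth it depends on the application.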
This is similar to the problem with Linux's fsync(), which can be tricked by some hard disk caches claiming to have written the data safely to disk... and then crashing with the data still in cache.
RPC (Score:1)
btw, this is the same problem that early RPC systems faced. Programmers don't typically expect a procedure call to fail mysteriously, but this is exactly what can happen when your blocking RPC call can't reach the other server.
the blue in the header (Score:1)
So it's a load balancing tool? (Score:1)
Then what about the performance of this stuff?
And is the design of this software interesting enough to support it?
Is anyone really using it?
Is an HA cluster a parallel system? (Score:1)
By HA cluster I mean a set of computers bundled together to build up redundancy to protect the application(s) running on it against hardware failure.
I think this protection becomes more important the bigger you build your parallel systems. What would the value of a parallel system be if one of its million parts brought it down completely?
To take this drift even further, I think that for large-dimension parallel systems there is no way to avoid the NUMA approach. Performance is just one of the problems you'll face when building a parallel system, even though it may be the most important reason for starting to build one.
In the end it all comes down to scalability and the cost of expanding.
HA cluster or HPC cluster? (Score:1)
a. a way to create a huge compute node (as in Beowulf cluster)
or is it:
b. a way to create a scalable and robust service node (as in a google cluster)?
I think it could be quite interesting to use this software, but to get a large userbase (and thus robust and well debugged software) it seems to me that there should be a well defined goal.
I'm not trying to post a troll, but I'm trying to find out if:
- would this software make hardware failures have less impact on running programs like database servers or large simulation jobs?
- does this software have a greater impact on performance than MPI-like solutions?
- does this software have security impacts of any sort?
Just my 2 eurocents
What about data from the harddisk (Score:1)
You could of course let the kernel forward all requests, but that might cause insane amounts of network traffic.
So my question is: can this cluster solution reasonably be used with a web server / SQL server, or is it only meant for programs that don't read much from the hard disk?
Re:Just keep in mind... (Score:1)
By the way... (Score:1)
Re:[OT] deluge of overrated posts (Score:1)
-Erik
Re:Mosix is nothing new. (Score:1)
I just wish that I could see it all in fast motion, whereas the reality is that I probably won't even notice. "Microsoft? What's that?"
Re:Mosix is nothing new. (Score:1)
That's a brilliant strategy for world domination. Linux does not need to be innovative or profitable. It just needs to stick around and stay in the game. Sooner or later another niche will erupt -- like the internet -- and Linux will be there to explode into it and overwhelm it -- as long as it stays in the game.
The bar for Linux's survival is much lower than is the bar for Microsoft, for example. Microsoft needs to perpetually remain profitable. Also they need to keep growing to be a more attractive investment to stock-holders than a savings account.
Linux just needs to hang out on the prowl until the time comes for it to dominate.
--
Milk, it does a body good.
Re:HA cluster or HPC cluster? (Score:1)
If only (Score:1)
I could find an application that would play nice with it. MySQL, PostgreSQL, dnet, Q3A map compiles ...
** sigh **until (succeed) try { again(); }
Re:Single Disk Mosix (Score:1)
boot from floppy for all mosix child nodes
child nodes are diskLESS machines btw
Let primary do the disk storage
let mosix cluster do the work
i know it has some flaws, but that's my solution... for my home "system"
Funny. I thought it... (Score:1)
Single Disk Mosix (Score:1)
Re:Finally, something resembling clustering for Li (Score:1)
Agreed. Beowulf is really more of an array computing model than stuff that is traditionally referred to as cluster-computing.
Back in olden times-- circa 1995, I think-- SGI had a set of software and an API for their Challenge servers that allowed customers to configure them as arrays. I can't remember the marketing-name of the product, but I do remember that parallel HIPPI was the preferred interconnect (still damn fast, six years later) and some customers had arrays consisting of a great many 36-processor nodes. Pretty cool product. Very similar in concept to a Beowulf cluster using gigabit-e or Myrinet as the interconnect.
Re:[OT] deluge of overrated posts (Score:1)
BTW, Einstein would use "No Score +1 Bonus" while posting about bad weather. And so do I.
Re:BS (Score:1)
Re:[OT] deluge of overrated posts (Score:1)
The other possibility I can think of is that within the past month a shipload of fake accounts came to fruition (you know how you can't moderate for a long time while your account is "new"?). I've always thought it would be cool to set up some scripts to generate a few hundred accounts and to spoof them intelligently enough that slashdot can't tell they're not people. Then, as soon as they can start to moderate, five to ten are guaranteed to be moderators at any given time, and then I can find some old threads of mine no one will look at and moderate them up to hell. Of course, usually this thirty-second daydream ends with the thought: "and then what? get fifty karma? Dude, you're such a loser!"
Anyway, even though the Karma isn't worth it for me, nor the power especially of being able to bitch-slap whoever, I bet some other people did that a few months back and we're starting to see the fruit of their loins, or something.
Off-topic: the quote on the bottom of my page says now, "There is one way to find out if a man is honest -- ask him. If he says "Yes" you know he is crooked. -- Groucho Marx". A more mathematical way to arrive at a (more guaranteed-correct) answer is to ask the man whether he WOULD say he's honest IF you asked him. Then his answer is in fact the truth. :)
~
Re:Too bad it's x86 only... (Score:1)
Note this is also my peeve about WinClusters (NT or 2K), the job was half done.
Re:Finally, something resembling clustering for Li (Score:1)
Re:Idea Customize: ignore funny (Score:1)
________________________________________________
Re:Too bad it's x86 only... (Score:1)
I just like to see a 1.0 for anything Linux. It means that it is a mature product with a really stable API. This is the opposite of most commercial software manufacturers that I have dealings with. These guys can't get a stable release after 5 or 6 releases. I hate that.
Re:Mosix is nothing new. (Score:1)
>That's a brilliant strategy for world domination.
>Linux does not need to be innovative or profitable.
You're absolutely right! That's why innovative companies like Microsoft will never achieve world domination, and ...
micje eyes himself warily...
has anyone.. (Score:1)
Can you imagine... (Score:1)
--
cross-platform clusters? (Score:1)
-strfn
Re:Just keep in mind... (Score:1)
Most JAVA VMs use shared memory, and thus can not migrate. Try using a "green threads" VM.
but I don't know what a "green threads" VM is. In theory java apps would be great for clustering, no?
compile? (Score:1)
Re:Mosix is nothing new. (Score:1)
The folks writing operating systems, or software intended for the public, are usually trying to take the stuff coming out of academia and optimize it, get it relatively bug-free, package it up in a reasonably friendly fashion, things like that. Linux developers are certainly not always the first to do this, but then again, given that there are eight or ten OSes people are developing for, we wouldn't expect them to be.
Nor is Linux necessarily the best environment to pioneer for. If I'm the first guy trying to write a clustering system fit for distributing to the world, I may well want to do it on a simpler sort of system - something written with the idea of being very stable and well-organized, say, like NetBSD, as opposed to something written with the idea of being very hardware-supporting and practical, like Linux. That doesn't mean that 'NetBSD is doing the innovation' and 'Linux is copying it later'. The 'Linux community' isn't some sort of absolute, to which you have either given your soul or have no part in.
To the extent that the 'Linux community' does exist, it has given a great many things to the larger IT community. That it is often focused on writing open-source or portable things that have existed in more closed form before, does not particularly seem to me like a mark against the importance of its efforts.
Re:Finally, something resembling clustering for Li (Score:1)
Thanks for the link. I had not seen this, and found it very interesting.
This is a very primitive form of clustering (share-nothing due to the lack of a distributed lock manager, simple failover only, two-node limit), similar to Microsoft's so-called clustering solution. A step up, but still very limited...
I guess I'm just spoiled by OpenVMS clustering. When I set up our Exchange 2000 clustered back-end servers, I constantly found myself astonished by the primitive capabilities of Microsoft's clustering technologies. "What do you mean more than one node can't control this disk at the same time? That's a breeze with MSCP disk sharing, and has been for a decade and a half. Can't have hundreds of nodes separated by hundreds of miles for disaster-tolerance? Give me a break..."
Re:sweet (Score:1)
Mo'six It's a steal! (Score:1)
hmmmm (Score:2)
Re:learn your history (Score:2)
That's the basic idea, suitable for spreading out CPU-heavy tasks that share nothing over a bunch of machines. It looks like they're working on extensions such as migratable sockets which would make it suitable for applications that require sharing or communication.
Just tested out 0.98 (Score:2)
Re:RPC (Score:2)
Rob
Sun can do this? HP? (Score:2)
This is not 'beowulf' clustering... this is not parallel tasking... this is having portions of processes automatically migrate to other machines in a cluster based on memory/cycle availability.
This is not rubbish. Mosix has been around for a while, but it's great to see version 1.0
Microsoft? I thought it was K&R (Score:2)
Microsoft???
I thought it was the light-blue C from the cover of the original Kernighan and Ritchie language manual.
On second thought... (Score:2)
On second thought it IS a bit dark for that, at least on my current monitor.
Re:[OT] deluge of overrated posts (Score:2)
Do people actually feel compelled to moderate? I frequently don't. I can take it or leave it. Maybe something in the FAQ could address this (tho then you have to get people to RTFF).
--
Re:HA cluster or HPC cluster? (Score:2)
For a "web cluster", you want something like this:
http://linuxvirtualserver.org/
This is a combination of load balancing and high availability. Machine A load-balances web traffic between machines C,D,E, and F. Machine B monitors machine A, and takes over for it if it goes down for more than 4 seconds. They've got various algorithms for load balancing.
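The scheduling half of the setup above can be sketched in a few lines of C (purely illustrative, not LVS's actual code; next_backend is a made-up name): round-robin over the back-ends, skipping any node the monitor has marked down.

```c
/* Round-robin selection across back-end web servers (the machines
 * C-F in the example above), skipping any node a health monitor
 * has flagged as down. */
#define NBACKENDS 4

struct backend {
    const char *name;
    int up;  /* maintained by a heartbeat/monitor process */
};

/* Returns the index of the next live back-end, advancing the cursor,
 * or -1 if every node is down. */
int next_backend(struct backend *b, int n, int *cursor)
{
    for (int i = 0; i < n; i++) {
        int idx = (*cursor + i) % n;
        if (b[idx].up) {
            *cursor = (idx + 1) % n;
            return idx;
        }
    }
    return -1;
}
```

LVS itself also offers weighted and least-connections scheduling; this shows only the simplest algorithm.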
Sotto la panca, la capra crepa
woohoo! (Score:2)
Roy Miller
Re:[OT] deluge of overrated posts (Score:2)
I do think that the cap on moderation of a post should be upped, maybe not all the way to 10 though.
Re:woohoo! (Score:2)
Of course, by the time we have that much bandwidth, we will probably have computers running at 20 GigaHertz (if that is physically possible, and then again, probably if it is impossible, too). So there is a good chance we won't need to export processes to Brazil across 12-foot-wide optical cables.
bad news (Score:2)
[LM]?[IOU]N[IU]X
Finally, something resembling clustering for Linux (Score:2)
Re:Just keep in mind... (Score:3)
Re:Finally, something resembling clustering for Li (Score:3)
Repeat after me "THERE ARE MANY KINDS OF CLUSTERS". Again...again...again....
Now we will play a game: match the clustering technology description to a popular name. Match the letter to the number.
A. Message passing clusters used primarily for low bandwidth parallel computation.
B. Load balanced single protocol network clustering.
C. Hardware takeover / hardware redundancy for High-Availability clustering.
D. Load balanced, homogeneous platform, with process migration clustering.
1. Veritas Cluster Server with Sun Multipath IO devices.
2. Arrowpoint-type web load balancer.
3. Beowulf.
4. Mosix.
For extra credit:
Is the above listing of clustering technologies comprehensive? [Y]es [N]o
Answers available from those with a clue after class.
Re:[OT] deluge of overrated posts (Score:3)
There used to be a moderation category that was "just the best, most pithy synopses of the discussion". Now that can easily be 30 posts, and reading them doesn't fit in a 3-minute "while this compiles" break anymore.
Part of it is that there's more posters these days, and more moderators, and the top 5% of 50 posts is a lot smaller than the top 5% of 500 posts.
Part of it is the automatic +1 of posters with a history of good karma. This is a good thing, but it reduces by 25% the range that can only be reached by active moderation. (The original moderation range of 2-5 has been reduced to 3-5. You used to be able to read at 2 and filter out the stuff that hadn't been voluntarily moderated up at least once. That's no longer the case, and even Einstein wasn't ALWAYS worth listening to. Sometimes he was just ordering breakfast, or complaining about the weather.)
Zero used to be a penalty for posting as an anonymous coward (since the troll ratio there was higher). 1 was standard. 2 meant an experienced poster who generally has something meaningful to say. This is a good heuristic for a starting position, but there's not enough room to go up from there; the system is swamped.
Slashdot has outgrown that range, even WITHOUT raising the floor. More marginal opinions less universally approved of (and less central to the topic) now reach the top category, because they have more opportunities to be moderated up. 5% of the viewership can easily spend 5 moderation points now.
perhaps we can go to a moderation percentage system? "Show me just the top 5% of posts"? Or sort them by popularity and give me the top fifteen...
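The "top 5%" idea is easy to prototype (a sketch with made-up names, nothing to do with actual slashcode): sort the scores descending and keep a rounded-up fraction, so at least one post always survives the cut.

```c
#include <stdlib.h>

/* Compare callback for qsort: descending order of score.
 * (Subtraction is safe here because scores are tiny integers.) */
static int by_score_desc(const void *a, const void *b)
{
    return *(const int *)b - *(const int *)a;
}

/* Sorts scores in place, highest first, and returns how many posts
 * fall in the top `percent` (rounded up, minimum one). */
int top_percent(int *scores, int n, int percent)
{
    qsort(scores, n, sizeof scores[0], by_score_desc);
    int keep = (n * percent + 99) / 100;  /* ceiling division */
    return keep > 0 ? keep : 1;
}
```

With 500 posts, "top 5%" keeps 25; with 50 posts it keeps 3, which matches the scaling problem described above.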
It's an interesting problem.
Rob
Re:Finally, something resembling clustering for Li (Score:3)
>CLUSTERS". Again...again...again....
>Now we will play a game: match the clustering
>technology description to a popular name. Match
>the letter to the number.
Berries come in clusters. Stars come in clusters. Military rank insignia come in clusters...
Californians... No wait, this is a family oriented area.
Rob
(Austinite. They move here and can't drive, so we get to make fun of them.)
Re:Distributed system failure? (Score:3)
>have different failure characteristics.
It's a question of what problems you want to address. It's entirely possible to have multitasking multiuser operating systems without virtual memory. (Just about every 1970's era unix before the Vax, actually.)
Doesn't make the problem fundamentally different, just that there's more cases to cover. Do you always check for a non-null return from your mallocs, or do you just say "the system should just never run out of memory"?
>As far as I know, an SMP operating system
>assumes that, if CPU #2 was there just a moment
>ago, it will still be there.
Three words: Hot pluggable hardware.
And yes, they're talking about adding that capability to the Linux kernel in 2.5. (Although the current patch has a /proc entry to switch the appropriate processors off and on before just yanking them. Then again, PCMCIA proves you can do it without manual notification, since you get several milliseconds of warning, which is ages to the computer...)
>What happens when your operating system needs to
>fault in a page, but your distributed VM manager
>lost network contact with your other server(s)?
Well, when piranha.rutgers.edu did this (no local hard drive, it swapped through the network to the server in the back room), its response was to die spectacularly (sunOS didn't blue screen, it white screened). This is not a new problem.
Then again, how many apps never check the return value of malloc and just expect the OS to go down if the system runs out of memory anyway?
If you were really swapping through the network (despite hard drives being cheap, they ARE failure-prone moving parts), I'd say use distributed redundant swap devices and treat them like RAID 5, so you can lose one and recover the data. Also avoids network bottlenecks. But then you're eating network bandwidth needlessly, which is usually your limiting factor. (Then again, you page fault all sorts of other stuff through the network anyway in a shared memory config; it wouldn't so much be swapping as a larger distributed memory management system.)
It's an open question on the best way to go. Performance vs reliability is often a tradeoff. But there are PLENTY of different options.
>How can the operating system handle this error :-(
>gracefully? Or politely warn the userspace
>application?
How does RAID 5 do it today? (Let's see, SMART disks, battery-backed power supplies notifying of failure, hot pluggable hardware... It'll probably all get molded together someday into a pseudo-coherent infrastructure of dynamic system status.)
The most graceful thing for the OS to do may just be to suspend the app and save off its state until it can continue. It depends. As I said, there are a lot of options.
Rob
Distributed system failure? (Score:3)
SMP and NUMA are different problems because they have different failure characteristics. In distributed programming, you often must expect network failure to be a common occurrence and handle those errors gracefully. As far as I know, an SMP operating system assumes that, if CPU #2 was there just a moment ago, it will still be there.
What happens when your operating system needs to fault in a page, but your distributed VM manager lost network contact with your other server(s)? How can the operating system handle this error gracefully? Or politely warn the userspace application?
I just wish... (Score:3)
CONFIG_MOSIX=y
Here's a great chance for Linux to not just play catch-up with Windows or other flavors of Unix - it can take the leadership and give you the ability to create clusters using the tools in the standard distribution!
Lingering bugs (Score:3)
Mosix a valuable technology (Score:3)
Re:Finally, something resembling clustering for Li (Score:3)
There is a possibility of using MOSIX together with GFS [sistina.com] (which gives true device sharing) so that you don't need to use something like NFS. This way, a migrated process will be able to access the device directly, without needing to go through its home node.
AFAIK, this option is still not production-level, though.
Re:[OT] deluge of overrated posts (Score:4)
the irony is thick.
Mosix is definitely cool... (Score:4)
Re:BS (Score:4)
http://linuxtoday.com/news_story.php3?ltsn=2001
[OT] deluge of overrated posts (Score:4)
(just my drunken rambling)
--Ryan
learn your history (Score:4)
Yes, they did start out basing their system on proprietary kernels, then they moved to BSD, then to Linux. The current work is not about the basic idea anymore, moving processes around somehow, but about things like distributed virtual memory, distributed file systems, and migration strategies.
This isn't "playing catch-up", it is cutting edge research by the people who did the original work moving to the BSD and Linux platforms because they are more widely available, are better supported, are easier to license and share, and have more software available for them.
Re:Finally, something resembling clustering for Li (Score:4)
My comments were not rooted in ignorance, but rather an intimate familiarity with the technology developed by the DIGITAL engineers who INVENTED clustering, back in the days before the term was diluted by use in situations that have no relationship to the original application of the term.
And no, that list is far from complete. No mention of HP's clustering technology, Compaq's OpenVMS Clusters, Tru64 Unix TruClusters, Tandem NonStop fault-tolerant clusters, or Microsoft Clustering (although, again, that is a VERY weak form of clustering, and lacking in several respects).
Essentially, this is an argument over semantics, over the definition of the term "cluster". I merely oppose dilution of the meaning by applying it to lesser technologies which have no relationship with the original meaning of the term.
A quick primer on types of paralell systems. (Score:5)
SMP is Symmetrical Multi-Processing, or one computer with multiple processors just like multiple hard drives, multiple serial ports, or multiple banks of RAM. In an SMP setup, each processor has equal access to the other system resources, and although they may need locking to avoid stomping on each other's activities, it's no more expensive for processor #2 to access a certain resource (such as an area of main memory) than it is for processor #5 to do so. Thus there's no real reason to shuffle processes around to be "closer" to some other resource.
The other end of the spectrum is message passing networked clustering, like beowulf, where isolated systems (each with its associated set of resources) accept complete tasklets, work on them more or less alone, and output the results. Accessing resources from the rest of the cluster is very expensive, and you try not to do it more than absolutely necessary (once per transaction). A message comes in with all the info a node needs to do its work, and the node sends a message back out with the result and to announce it's ready for the next mouthful.
NUMA is in between, and it stands for Non-Uniform Memory Architecture. You have a bunch of similar processors, like in SMP, but some resources are "close" to each processor and some are far away.
Remember, clusters own resources outright, this is my node's memory. On SMP all processors access a pool of shared resources (like main memory) at the same speed (hence symmetrically). On NUMA, processor #53 -CAN- access memory over by processor #1736, but it'll take much longer than if it accesses memory near itself. It'll block, it'll have wait states. (Just like accessing a page swapped to the hard drive vs accessing one in memory.)
The thing is, as systems on either end become more complex they move towards NUMA. Think mondo SMP systems with dozens of processors, each of which has megabytes of L1 cache. You want to keep stuff "in cache" rather than accessing main memory, and sometimes you want to access something that's currently in some other processor's cache. Cache line pollution and such. That's a NUMA type of problem.
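The "near vs. far" cost is easy to feel even on a single machine (an illustrative sketch, nothing to do with MOSIX's code): both functions below compute the same sum, but one walks memory sequentially while the other strides across it, which on real hardware is dramatically slower for large matrices, for exactly the locality reasons described above.

```c
#include <stddef.h>

#define ROWS 256
#define COLS 256

/* Row-major traversal: consecutive addresses, cache-friendly. */
long sum_rows(int m[ROWS][COLS])
{
    long s = 0;
    for (size_t r = 0; r < ROWS; r++)
        for (size_t c = 0; c < COLS; c++)
            s += m[r][c];
    return s;
}

/* Column-major traversal: a stride of COLS ints between accesses,
 * so each touch is "far" from the last -- many more cache misses. */
long sum_cols(int m[ROWS][COLS])
{
    long s = 0;
    for (size_t c = 0; c < COLS; c++)
        for (size_t r = 0; r < ROWS; r++)
            s += m[r][c];
    return s;
}
```

Same answer, different cost: that gap between "close" and "distant" memory is the whole NUMA optimization problem in miniature.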
From the other end, once you start connecting beowulf clusters together with really high speed interconnects (like gigabit ethernet or Myrinet, and often speed here is more a question of latency than bandwidth), and start teaching them how to pretend to be one big shared memory image by page faulting through the network, you're approaching NUMA from the other end. Stuff's in my machine's memory locally right now, and swapping it in from some other guy's memory (and swapping out some of my stuff to make room for it) is something I only want to do when absolutely necessary, because it slows me down.
MOSIX is taking beowulf clusters in the direction of NUMA. This is a good thing, it makes them more flexible and capable, but it opens up a whole can of worms to optimize it properly. (Not a new can of course, the kernel hackers are already dealing with a rather significant portion of NUMA's issues just trying to get 32-processor alphas to work smoothly.) If the interconnects between clusters were perfect, we could just treat it as one big SMP machine. Then again, if our hard drives were as fast as our RAM we wouldn't try so hard to minimize swapping, would we? You could still just treat MOSIX as SMP instead of NUMA if you don't want to optimize your performance. And for many things that's a fine solution: just distributing it across the cluster gives you all the performance you need, and adding nodes is more cost effective than rewriting your app for greater speed in the new environment.
But performance hits of thrashing all your pages through the network can be just as bad as thrashing them in and out of the swap partition. And performance is the only reason we're using clusters in the first place, isn't it?
And NUMA optimization just makes maintaining locality of reference, streamlined locking, and minimizing contention for commonly accessed resources even MORE important. It's the same kind of thing you'd do on a normal SMP machine anyway, it just has more of an impact, because there's more inefficiency to optimize away.
Rob