Ask Donald Becker 273
This is a "needs no introduction" introduction, because Donald Becker is one of the people who have been most influential in making GNU/Linux a usable operating system, and is also one of the "fathers" of Beowulf and of commodity supercomputing clusters in general. Usual Slashdot interview rules apply, plus a special one for this interview only: "What if we made a Beowulf cluster of these?" is not an appropriate question.
Re:One question... (Score:1, Insightful)
Re:Role of GNU in GNU/Linux (Score:0, Insightful)
Great timing for a change also, with all the new users coming to this OS. Let's confuse them.
So why call it GNU/Linux? Just to satisfy the ravings of RMS, who can't draw enough attention to his political (not technical) causes on his own.
Re:Role of GNU in GNU/Linux (Score:3, Insightful)
Even Linus doesn't feel strongly one way or the other. The only person who seems to be working up a lather is RMS. It's sad.
Re:Role of GNU in GNU/Linux (Score:2, Insightful)
Bell/AT&T/GNU/Linux forever!
Thanks for all the Ethernet drivers, Don! (Score:5, Insightful)
Why is it? (Score:3, Insightful)
It would be nice to have an anecdote or two about your years with Steely Dan - or even the solo projects from the '80s.
Re:Message Passing vs. Single System Image (Score:4, Insightful)
The traditional approach is to use fine-grained locking in the kernel, but this tends to lead to unmaintainable code and low performance on lower-end systems. For an example of this, see Solaris or most other big-iron Unix kernels.
Another approach is the OS cluster idea championed by Larry McVoy (the Bitkeeper guy). The idea is that you run many kernels on the same computer, with each kernel taking care of something like 4-8 CPUs. The kernels then cooperate so that together they can give the impression of a single system image (SSI).
A third approach seems to be IBM's K42 research OS project. They claim very good scalability without complicated lock hierarchies. The basic design idea seems to be to avoid global data whenever possible. Perhaps someone more knowledgeable can shed more light on this...
But anyway, until someone comes up with a kernel that scales to zillions of CPUs, message passing is about the only way to go. Libraries that give you the illusion of using threads but actually use message passing underneath might ease the pain somewhat, but for some reason they have not become popular. Perhaps there is too much overhead. And some people claim that giving the programmer the illusion that all memory access is equally fast leads to slow code. The same argument also applies to NUMA systems.
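The explicit message-passing style described above can be sketched in plain Python. This is a minimal illustration, not any particular cluster library: the worker/queue names are made up, and Python's multiprocessing on one machine stands in for a real interconnect.

```python
# Sketch of explicit message passing between processes: each "node"
# only sees the messages it is explicitly sent -- no shared memory,
# no illusion that all memory access is equally fast.
from multiprocessing import Process, Queue

def worker(rank, inbox, outbox):
    # Receive a chunk of work, compute a partial result, send it back.
    chunk = inbox.get()
    outbox.put((rank, sum(chunk)))

def main():
    data = list(range(100))
    nworkers = 4
    inboxes = [Queue() for _ in range(nworkers)]
    results = Queue()
    procs = [Process(target=worker, args=(r, inboxes[r], results))
             for r in range(nworkers)]
    for p in procs:
        p.start()
    # Scatter: send each worker its slice of the data.
    for r in range(nworkers):
        inboxes[r].put(data[r::nworkers])
    # Gather: collect the partial sums as messages.
    total = sum(results.get()[1] for _ in range(nworkers))
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(main())  # 4950
```

The scatter/gather pattern here is the same shape an MPI program would use; the point is that every data movement is an explicit send or receive.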
And on the system administration side of things, projects like Mosix and bproc already give you the impression of a single system image. Of course your application still has to use message passing, but administration and maintenance of a cluster are greatly simplified.
Re:What comes next? (Score:3, Insightful)
Isn't that a limitation of the computer, not a limitation of gigabit Ethernet?
Re:OpenMosix, really Distributed Shared Memory (Score:2, Insightful)
Mosix is almost completely unrelated to DSM. While I think Mosix is a very interesting academic project, it is the wrong model for building scalable performance clusters. Cluster applications don't want transparent process migration with forwarded paging and I/O. They want to explicitly and quickly start up processes on remote machines, and to have direct control over the performance-critical I/O and communication paths.
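The contrast drawn above - explicit startup and direct I/O rather than transparent migration - can be sketched as follows. This is a hypothetical stand-in: `spawn_worker` is an invented name, and a local `subprocess` launch plays the role of an rsh/bproc-style remote start on another node.

```python
# Sketch: the launcher explicitly starts each worker and talks to it
# over a direct pipe, instead of relying on transparent migration
# with I/O forwarded through a home node.
# (subprocess stands in for launching on a remote machine.)
import subprocess
import sys

WORKER_SRC = r"""
import sys
# The worker reads its task directly from the channel the launcher
# opened for it, and writes its result straight back.
lo, hi = map(int, sys.stdin.read().split())
print(sum(i * i for i in range(lo, hi)))
"""

def spawn_worker(lo, hi):
    # Explicitly start a worker process and hand it its task range.
    proc = subprocess.run([sys.executable, "-c", WORKER_SRC],
                          input=f"{lo} {hi}",
                          capture_output=True, text=True)
    return int(proc.stdout)

def main():
    # Split the range 0..100 across two explicitly started workers.
    return spawn_worker(0, 50) + spawn_worker(50, 100)

if __name__ == "__main__":
    print(main())  # 328350
```

The launcher knows exactly where each worker runs and owns the communication path end to end, which is the property performance clusters care about.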