More From Tanenbaum 496

BigFire writes "Professor Tanenbaum responds to the slashdot effect and a small critique of Ken Brown's forthcoming book in his followup. A small gem is where he disclosed that Ken Brown can't multiply simple positive integers."
This discussion has been archived. No new comments can be posted.

More From Tanenbaum

Comments Filter:
  • Article text (Score:5, Informative)

    by Anonymous Coward on Friday May 21, 2004 @08:14PM (#9221718)

    Ken Brown's Motivation, Release 1.2

    Background

    On 20 May 2004, I posted a statement [cs.vu.nl] refuting the claim of Ken Brown, President of the Alexis de Tocqueville Institution [adti.net], that Linus Torvalds didn't write Linux. My statement was mentioned on Slashdot [slashdot.org], Groklaw [groklaw.net], and many other Internet news sites. This attention resulted in over 150,000 requests to our server in less than a day, which is still standing despite yesterday being a national holiday with no one there to stand next to it saying "You can do it. You can do it." Kudos to Sun Microsystems and the folks who built Apache. My statement was mirrored all over the Internet, so the number of true hits to it is probably a substantial multiple of that. There were also quite a few comments at Slashdot, Groklaw, and other sites, many of them about me. I had never engaged in remote multishrink psychoanalysis on this scale before, so it was a fascinating experience.

    The Brown Book

    I got an advance copy of Ken Brown's book. I think it is still under embargo, so I won't comment on it. Although I am not an investigative reporter, even I know it is unethical to discuss publications still under embargo. Some of us take ethics more seriously than others. So I won't even reveal the title. Let's call it The Brown Book. There is some precedent for nicknaming books after colors: The International Standard for the CD-ROM (IS 10149) is usually called The Red Book.

    Suffice it to say, there is a great deal to criticize in the book. I am sure that will happen when it is published. I may even help out.

    Brown's Motivation

    What prompted me to write this note today is an email I got yesterday. Actually, I got quite a few :-) , most of them thanking me for the historical material. One of yesterday's emails was from Linus, in response to an email from me apologizing for not letting him see my statement in advance. As a matter of courtesy, I did try but I was using his old transmeta.com address and didn't know his new one until I got a very kind email from Linus' father, a Finnish journalist.

    In his email, Linus said that Brown never contacted him. No email, no phone call, no personal interview. Nothing. Considering the fact that Brown was writing an explosive book in which he accused Linus of not being the author of Linux, you would think a serious author would at least confront the subject with the accusation and give him a chance to respond. What kind of a reporter talks to people on the periphery of the subject but fails to talk to the main player?

    Why did Brown fly all the way to Europe to interview me and (and according to an email I got from his seat-mate on the plane) one other person in Scandinavia, at considerable expense, and not at least call Linus? Even if he made a really bad choice of phone company, how much could that cost? Maybe a dollar? I call the U.S. all the time from Amsterdam. It is less than 5 cents a minute. How much could it cost to call California from D.C.?

    From reading all the comments posted yesterday, I am now beginning to get the picture. Apparently a lot of people (still) think that I 'hate' Linus for stealing all my glory (see below for more on this). I didn't realize this view was so widespread. I now suspect that Brown believed this, too, and thought that I would be happy to dump all over Linus to get 'revenge.' By flying to Amsterdam he thought he could dig up dirt on Linus and get me to speak evil of him. He thought I would back up his crazy claim that Linus stole Linux from me. Brown was wrong on two counts. First, I bear no 'grudge' against Linus at all. He wrote Linux himself and deserves the credit. Second, I am really not a mean person. Even if I were still angry with him aft

  • Okay. (Score:5, Informative)

    by mcc ( 14761 ) <amcclure@purdue.edu> on Friday May 21, 2004 @08:28PM (#9221806) Homepage
    It isn't that you're "out of the know"; it's just that you didn't see the previous article on this subject on Slashdot two days ago.

    You want to read this article. [slashdot.org] It should explain what is happening.

    And you would pick this up from the links, but just for the record: Tanenbaum is this European guy who, once upon a time in the 80s, wrote a textbook on operating systems which came with a simple UNIX-like operating system called "Minix". Ken Brown is some guy who works for something called the "Alexis de Tocqueville" institute, and he's writing a book which appears to mostly consist of slander against Linus Torvalds and/or the Free Software movement.
  • Re:Little Help? (Score:5, Informative)

    by cmowire ( 254489 ) on Friday May 21, 2004 @08:36PM (#9221865) Homepage
    Prof. Tanenbaum made MINIX, which predates Linux and provided some inspiration, but no actual code. MINIX initially hosted the Linux environment until it was able to exist on its own. Prof. Tanenbaum and Linus had a massive flamefest early in the days of Linux over microkernel vs. monolithic kernel.

    Ken Brown works for the Alexis de Tocqueville Institution, which is basically in the business of writing "impartial" reports for people with money. It's public knowledge that they've taken money from Microsoft in the past for reports. He is writing a book accusing Linus of not writing Linux.
  • Linux is Obsolete (Score:4, Informative)

    by jsse ( 254124 ) on Friday May 21, 2004 @08:39PM (#9221887) Homepage Journal
    First, I REALLY am not angry with Linus. HONEST. He's not angry with me either. I am not some kind of "sore loser" who feels he has been eclipsed by Linus. MINIX was only a kind of fun hobby for me.

    For the rest of you who don't know the past between Prof. Tanenbaum and Linus, you may refer to the famous mailing list log "Linux is Obsolete" [fluidsignal.com].

    Linus seems to be doing excellent work and I wish him much success in the future.

    So I guess Prof. Tanenbaum can give a higher grade than "F" to Linus now. :)

    Both Prof. Tanenbaum and Linus are favourite people of mine. I'm so happy to see this happy ending in real life. :~)
  • Re:Embargo? (Score:5, Informative)

    by radish ( 98371 ) on Friday May 21, 2004 @08:52PM (#9221947) Homepage
    Before a book is published, the publishers/authors usually send out copies for review & comments. Then, if errors are found (for example), they can be corrected before going to print. The idea of the embargo is essentially like an NDA - because the version being read is not the final version, it would be unfair to talk publicly about it.

    As for free speech - this isn't a legal thing, it's purely done out of respect for the publishing process and basic good manners. Once the final version is available it's fair game for anyone.
  • by brix ( 27642 ) on Friday May 21, 2004 @08:54PM (#9221958)
    Article is mirrored at Newsforge [newsforge.com].
  • by Foolhardy ( 664051 ) <`csmith32' `at' `gmail.com'> on Friday May 21, 2004 @09:01PM (#9221987)
    A small gem is where he disclosed that Ken Brown can't multiply simple positive integers.
    What, he doesn't have his times tables memorized? Neither do I, and it hasn't been a problem.
  • Re:Little Help? (Score:3, Informative)

    by bcrowell ( 177657 ) on Friday May 21, 2004 @09:09PM (#9222018) Homepage
    You're being way too fair to Microsoft. They bought a clone of CP/M ported to the 8086. Then, they sold a license for it to IBM.
    Really? Can you document that? I worked for Digital Research around that time, and there were many tales about how the MS/IBM thing happened, but I never heard this version. And what do you mean by "a clone of CP/M ported to the 8086"? There were three versions of CP/M at that point, one of which was an 8086 version.
  • Comment removed (Score:5, Informative)

    by account_deleted ( 4530225 ) on Friday May 21, 2004 @09:25PM (#9222117)
    Comment removed based on user account deletion
  • Re:Little Help? (Score:4, Informative)

    by alangmead ( 109702 ) * on Friday May 21, 2004 @09:26PM (#9222126)

    Andrew Tanenbaum is a professor who has written some great books. I'm very happy to have read them. One of his books was Operating Systems: Design and Implementation [bookpool.com], in which he describes operating systems with a toy, minimal, Unix-like operating system for the 8088 called Minix [minix.org]. It wasn't a really useful OS, but it was small enough that you could take a look at the code to any particular subsystem and learn how it worked. As an example of its minimalism, it did have some hardware memory protection between processes, but implemented it with segment registers, which limited the size of each program to 64K.

    Minix wasn't free or open source software (ideas that were pretty much in their infancy). Tanenbaum sold it through his book publisher - not for much, probably just enough to make it worth Prentice-Hall's time. Without the Internet as a cost-effective distribution medium, someone had to take the orders and mail the disks. People loved tinkering with Minix, though. They ported it to other platforms (Atari ST, Amiga, Sparc, 80386, etc.). They added to it and started distributing patches. Linus was using Minix-386 before he managed to get Linux to be self-hosting. By some reports, it was Linus' annoyance at having to pay for Minix that inspired him to make Linux free software.

    Ken Brown, on the other hand, is someone whose name isn't very recognizable in technical circles. I'm tempted to say that he is a nobody, but maybe I just don't hang around the right circles. (Or on the other hand, maybe if I've never heard of him that means that I hang around the right circles.) I first read about the Alexis de Tocqueville Institution a couple of years ago when they published a paper [adti.net] questioning the security of free and open source software, and sold the paper through a system that allowed people to download it without purchasing it. Most of the links on their site are either links to articles from news sites about the institute's press releases, or links to papers that they promise will be ready soon.

  • by Xenographic ( 557057 ) on Friday May 21, 2004 @09:27PM (#9222130) Journal
    That's the extent of Tanenbaum's achievements.
    ----

    Eh? He did a bit more than just write MINIX!

    He's an IEEE & ACM fellow, has written a number of well-known and widely-used books... But don't take my word for it, Google away.

    As for sour grapes, I don't sense any, and I exchanged a few emails with him after the last story.
  • Re:Okay. (Score:3, Informative)

    by The Cydonian ( 603441 ) on Friday May 21, 2004 @10:10PM (#9222362) Homepage Journal
    Seems to be American [cs.vu.nl], judging by the flag on his photo.
  • by Anonymous Coward on Friday May 21, 2004 @10:11PM (#9222364)
    1). The original 386 and 386sx hardware that Linux ran on in the early days was slow (16 to 20 MHz) and usually had very little memory (memory was very expensive in those days; when I got my first 386 board - not cheap, BTW - just the 4 MB of memory cost more than the rest of the computer), so the slightly better performance of a monolithic kernel counted.

    2). Writing a simple monolithic system is easier than writing a simple microkernel one (but a complex MK should be easier than a complex monolithic one - unless it's the original HURD, of course ;-)

    3). Reading Linus' original posts on the subject (at the time; I have not reread them since), it quickly became apparent that he had not done any OS/systems development on anything but an Intel processor. Intel x86 chips (and clones) context switch much more slowly than, for instance, an IBM PPC, so a microkernel system takes a much higher performance hit on an Intel-based system than it does on most other processors. OS-9, for instance, feels fairly sluggish on a PC compared to some of the other implementations. (IBM claims that a PPC performs context switches 10x faster than an otherwise similarly powerful Intel chip; that figure is probably marketspeak, but even if it's only 5x, it would mean a lot to MK performance.)

    4). Almost all advanced OS design and systems programming books/courseware available at the time dealt with monolithic systems, and while most CS people etc. had heard the word microkernel, they had no real idea what it meant. Even today most people have no practical or even theoretical knowledge of MK systems, as can be witnessed by those silly "WinXP is a microkernel" slogans that crop up here from time to time.

    5). Linus cannot admit being wrong? Recent posts to AFC would indicate so, but doing psychoanalysis based on Usenet postings is probably not in the best of taste.
  • Re:Multiply WHAT? (Score:3, Informative)

    by BigFire ( 13822 ) on Friday May 21, 2004 @10:12PM (#9222375)
    Ken Brown claimed that Tanenbaum's publisher lost about 500 book sales per year due to the advent of Linux. And at $100 a pop, that comes to $1,000,000 (Ken Brown's figure)...

    Well, given that the number isn't exactly right (the book doesn't cost $100), here is the calculation:

    $100 x 500 x 14 (years) = $700,000. Let's round that number up.
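    Checking the arithmetic above mechanically (the $100 price, 500 sales/year, and 14-year span are the figures quoted in this thread, not verified sales data):

    ```python
    # Brown's claimed damage, using the thread's numbers (not verified data).
    price_per_book = 100      # dollars (Brown's assumed price)
    lost_sales_per_year = 500
    years = 14                # roughly 1991-2004

    total = price_per_book * lost_sales_per_year * years
    print(f"${total:,}")  # $700,000 -- not the $1,000,000 Brown claimed
    ```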
  • by Anonymous Coward on Friday May 21, 2004 @10:32PM (#9222484)
    The biggest practical reason is that there is very little distinction, algorithmically, between microkernels and monolithic kernels. Message passing is just function calls waiting in a list to be called in a separate process, instead of immediately transferring to the kernel to process the function call (essentially). A monolithic kernel cuts out the message passing overhead.

    The other practical reason for a microkernel is that it separates drivers from the kernel. But in some cases, it has no direct benefit. For closed source drivers, it is good, because a driver can't subvert the whole operating system. For open source, that's a lesser problem. In the case of buggy drivers, there's no *practical* reason to separate them, unless they are of no real importance. If your hard disk driver or DMA driver fails, it doesn't matter whether you have a monolithic kernel or a microkernel: you're screwed. Incorrect data will end up in the kernel or other drivers and crash the system, or it will hang. For very secure and reliable systems, a microkernel can provide assurance against even these kinds of failures; but for general purpose operating systems there is no *practical* difference between "Hard disk driver cannot execute, please restart" and "Kernel panic!"

    Another big argument for microkernels is their modularity, but modularity can be implemented in a monolithic kernel as well, as evidenced by Linux kernel modules and the virtual file system, which allow dynamic linking of objects into the kernel and operating system.

    The point is, what a micro kernel enforces automatically can be done manually in a monolithic kernel, but faster. The problem is that, not being automatic, more problems are possible, and more work has to be done. It's a tradeoff, and with the MHz and cost wars, tradeoffs generally favor speed.
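    A toy sketch of the tradeoff described above (plain userland Python, not real kernel code, with threads standing in for separate address spaces): the same service is invoked once as a direct function call and once through a message queue, which adds marshalling and a scheduling hop but isolates the caller from the server.

    ```python
    import queue
    import threading

    def read_block(n):
        """The service itself: pretend to read disk block n."""
        return f"block-{n}"

    # Monolithic style: one direct call, no marshalling, no scheduling hop.
    direct = read_block(7)

    # Microkernel style: marshal a request, queue it to a server, await the reply.
    requests, replies = queue.Queue(), queue.Queue()

    def fs_server():
        op, arg = requests.get()   # the queueing/context-switch cost lives here
        if op == "read":
            replies.put(read_block(arg))

    server = threading.Thread(target=fs_server)
    server.start()
    requests.put(("read", 7))
    via_ipc = replies.get()
    server.join()

    assert direct == via_ipc == "block-7"  # same answer; the IPC path costs more
    ```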
  • by Wumpus ( 9548 ) <IAmWumpus@gmail. c o m> on Friday May 21, 2004 @10:39PM (#9222527)
    What are you talking about? Windows isn't a microkernel, loadable modules have nothing to do with microkernels, almost all device drivers in modern Linux distributions are loadable modules, and while Tanenbaum may be right, it's not for any of the reasons you mention.
  • K42 by IBM (Score:2, Informative)

    by Szplug ( 8771 ) on Friday May 21, 2004 @10:59PM (#9222634)
    Is a NUMA OS (e.g. think 256 processors, each with their own memory) but is meant to work on a uniprocessor as well. I mention it because it's microkernel-ish and being developed by IBM.

    K42 link [ibm.com]

  • Microkernel reality (Score:5, Informative)

    by Animats ( 122034 ) on Friday May 21, 2004 @11:30PM (#9222807) Homepage
    For starters, I'm reading this discussion on a computer running a microkernel. This machine is running QNX 6.2 [qnx.com] on a Shuttle 1.5GHz AMD desktop box. The browser is Mozilla 1.6, running under the QNX Photon GUI. It runs about as well as the same version of Mozilla on a comparable Windows machine. Even the same Mozilla bugs show up.

    The file systems and networking are user programs. You can add new file systems; there's one that mounts .zip files, there's NFS, and there's Samba. In Linux terms, visualize a system where there's the /proc file system for inter-program communication, and everything works through that mechanism.

    The drivers really are outside the OS. I've written a FireWire camera driver for QNX, and it's a user program. It's privileged in that it does map some real memory shared by the device, and it can talk to the device directly, so it could potentially cause a crash by making the device write someplace it shouldn't. (That's really a weakness in the PC's I/O architecture; there's no MMU between devices and memory, for historical reasons dating back to the original IBM PC.)

    Debugging a driver is like debugging a normal program. You can even run a driver under a debugger. You can kill a driver while it's running, and it's no big deal. (If you have real memory mapped, it's not recovered until the next boot, so I had to restart my machine about once a week while doing driver development.) Mainframe people have been doing this since the 1960s, but it's rare on PCs.

    The basic penalty for using a microkernel is one extra copy and context switch for every file system operation. If your system is doing anything besides I/O, you'll probably never notice. If you're running a web server that serves mostly plain pages (little Perl, Java, PHP, etc.), you'd probably notice the overhead.

    So why are microkernels so rare? They're hard to write well. You can't just hack them together like a UNIX clone. There are some tough design problems to be solved. If those are botched, message passing performance will be terrible. Message passing and CPU scheduling need to work together. This forces certain design decisions in the scheduler. It's also why adding message passing to an existing system tends not to work well. The Hurd crowd has been thrashing on this issue for a decade. I would have loved to see something as good as QNX from the Hurd people. But it didn't happen.

    Mach didn't really work out as a microkernel. Mach started from 4.3BSD (considered bloated in its day), and versions of Mach below 3 had 4.3BSD in the kernel. MacOS X is not a microkernel system; the BSD stuff is in the kernel. Basically, retrofitting a microkernel architecture to an existing UNIX kernel didn't work.

    What you do get from a microkernel like QNX is predictability. The kernel changes very little and is very reliable. Good microkernels, like QNX and IBM's VM, settle down into versions that almost never change and have very long MTBFs. This brings down total cost of ownership.

  • Re:Little Help? (Score:5, Informative)

    by JPriest ( 547211 ) on Friday May 21, 2004 @11:30PM (#9222810) Homepage
    Linux was written on Minix and based partly on it (and partly on many other operating systems). But Linux does not and never did contain any Minix source code. For one, Linux is a monolithic kernel and Minix is a microkernel - not just different code but entirely different implementations. I highly recommend reading a Usenet post from Andy Tanenbaum in 1992 titled "LINUX is obsolete" [google.com] and the corresponding response from Linus.
  • by niew ( 133188 ) on Friday May 21, 2004 @11:43PM (#9222881)
    If you want successful microkernels, look at NT and Darwin

    Hmmm, No...

    It's a widely perpetuated myth that NT is a microkernel. It may have started out that way, but has long since grown through millikernel, centikernel, decikernel to full blown kernel... (and beyond if you count browser, media player and kitchen sink OS embedding)

    The linked letter from Prof Tanenbaum touches on this point too... He says:

    Microsoft claimed that Windows NT 3.51 was a microkernel. It wasn't. It wasn't even close. Even they dropped the claim with NT 4.0.
  • by Roger W Moore ( 538166 ) on Friday May 21, 2004 @11:44PM (#9222891) Journal
    Your mistake is to imagine that moderators read at threshold 0. Neither have they read the moderating faq, and frequently not even the article.


    Well actually if you had read the moderator FAQ you would know that you should really browse at -1 in order to catch abuses.

  • Re:Changed opinion (Score:5, Informative)

    by nathanh ( 1214 ) on Friday May 21, 2004 @11:56PM (#9222947) Homepage
    But it should not be ignored that you can now add filesystems to a running kernel as modules, and even build them outside of the kernel tree. At this point, Linux is essentially a microkernel design running as a monolithic kernel for performance reasons as an implementation detail.

    Dynamically loadable modules do not make Linux a microkernel design. It would only be a microkernel if the filesystem code ran in a different address space. But because ext3.o runs in the same address space as the kernel, it is most definitely a monolithic design. It is not a "microkernel design running as a monolithic kernel"; that's just a nonsensical statement.

    A future version could offer the option of running filesystems in userspace if you want (there's already some support for having filesystems in userspace against the kernel fs API). I wouldn't be surprised if people having weird problems would be advised to try the "ext3.userspace" option, and if you could avoid tainting your kernel with "nvidia.userspace".

    You clearly understand that the significant distinction between microkernel and monolithic is the address space for the subsystems. So I can't understand why you'd suggest that kernel modules makes Linux "essentially a microkernel design". Look at the address space for ext3.o; it's kernel space.

    I don't see Linux evolving into a microkernel until there's hardware support for cross address space branching. Don't hold your breath.

  • Re:Little Help? (Score:2, Informative)

    by Halfbaked Plan ( 769830 ) on Friday May 21, 2004 @11:59PM (#9222956)
    One of Microsoft's first products was the BASIC interpreter(s) that just about every early producer of microcomputers included with their machine.

    All the TRS-80 machines included Microsoft Basic in ROM, for instance. The IBM-PC had Microsoft Basic in ROM that you could run without even having a floppy disk controller in your PC.

    Microsoft was an early entrant in the hobby/personal computer market with one of the first significant commercial products that people found useful.

    However, it's more popular to believe the anti-Microsoft drivel and revisionist history written by pundits years later.

    Anybody with a sense of ethics and value for historical accuracy would be ashamed to be involved with the early 'history' part of the film 'Revolution OS' or that Robert X. Cringely (not his real name, just a shared pseudonym he stole control of and commercialized) fraud-history work 'Pirates of Silicon Valley.'
  • Re:Little Help? (Score:4, Informative)

    by cmowire ( 254489 ) on Saturday May 22, 2004 @12:42AM (#9223112) Homepage
    Not really. Linux never had any Minix code. I think you are confusing this with how Linux 0.1 had no real userland, which meant that you needed to install it over an existing Minix install.
  • Re:Linux is Obsolete (Score:4, Informative)

    by MrHanky ( 141717 ) on Saturday May 22, 2004 @01:37AM (#9223350) Homepage Journal
    I think he would pass, even in the beginning. AST was clearly ironic -- he even put a smiley in his post:
    I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design :-)

    Linus would also get a second 'F' for writing i386-specific code, but that problem is long gone. Is there any other OS, apart from NetBSD, that supports as many architectures as Linux?

    It seems to me that AST was quite impressed with Linux from the start -- especially the Posix compliance -- but disagreed strongly with the design (or lack thereof). You have to remember that many people actually wanted to turn Minix into something like Linux, and that was out of the question. Linux is not as good for what Minix was supposed to do: teach OS principles. If you consider the context of the discussion, AST does not look as arrogant, and certainly not stupid (although his predictions for the future were a bit off target, but I don't think Linus expected his little hobby to be the subject of a multi-billion dollar suit either).
  • Re:I resent that! (Score:2, Informative)

    by AndroidCat ( 229562 ) on Saturday May 22, 2004 @01:40AM (#9223362) Homepage
    Mmm, that's good enough for most purposes, but not quite right. Here's my IANAL understanding of it: Slander is transitory. Libel is in an enduring format. Until the last century or so, spoken vs print was good enough without audio/video recordings, broadcasts, network TV coverage, Usenet archives, IRC logs, etc to confuse the issue.

    And besides Ken, you really work for the Tomás de Torquemada Institute [newadvent.org] who had connections to the Monastery of Santa Cruz! I bet you weren't expecting that!

  • by AJWM ( 19027 ) on Saturday May 22, 2004 @01:56AM (#9223413) Homepage
    If you want successful microkernels, look at NT and Darwin.

    No.

    Others have already explained why those are not examples of microkernels. If you want a real example of a successful microkernel, look at QNX (which is very successful in its target market).

  • by DunbarTheInept ( 764 ) on Saturday May 22, 2004 @02:08AM (#9223460) Homepage
    What the hell kind of microkernel has graphical interfaces in ring zero? NT is certainly NOT a microkernel.

  • by mbanck ( 230137 ) on Saturday May 22, 2004 @04:27AM (#9223785)
    If microkernels are the best approach why is gnu/HURD taking so long?

    Dunno, what does the one have to do with the other? It's like saying 'If macrokernels are the best approach, why does Windows suck so much?'

    The Hurd is a set of user-space servers running on top of a microkernel (currently Mach), together providing the old Unix experience besides other more interesting things.

    Really, it's not taking so long because it is a microkernel (the Hurd itself is NOT one!); it's just that there's nobody working on it.

    Michael

  • by TheRaven64 ( 641858 ) on Saturday May 22, 2004 @05:30AM (#9223918) Journal
    Windows is not a microkernel, as others have pointed out. The nicest (actually the only, now I think about it) microkernel OS I've used on a desktop is BeOS, an operating system where you can replace the entire TCP/IP stack without a reboot. I haven't used Windows for about a year, but I seem to recall that loading drivers required a reboot. With BeOS, you could replace the entire sound subsystem and not reboot. The same with the graphics subsystem (there were a couple of different drivers available for my graphics card, and I could swap between them on the fly to see which was better - no reboot involved). The nicest thing, however, was the little dialog box that sometimes popped up saying `The sound subsystem has crashed. Restarting...' Ideally, the dialog would not exist at all, and this would all happen without telling the user unless it failed to restart correctly the second or third time. It's a lot better than the alternatives on Windows (blue screen) or Linux (kernel panic) when confronted with buggy sound drivers. I agree with Tanenbaum here.
  • Re:Okay. (Score:3, Informative)

    by frost22 ( 115958 ) on Saturday May 22, 2004 @06:03AM (#9223985) Homepage
    Seems to be American, judging by the flag on his photo.
    Ok. One click away from that photo we find Andrew S. Tanenbaum's FAQ [cs.vu.nl]. To quote:
    Your name is German, you live in The Netherlands, but you write almost as well as a native English speaker. What's the scoop?

    My paternal grandfather was born in Chorostkow, currently in Ukraine, historically in Poland, at the time under Austro-Hungarian management. He came to the U.S. in 1914. I was born in New York and grew up in White Plains, NY. I went to Amsterdam as a postdoc and have sort of hung around ever since.
    HTH

  • by Anonymous Coward on Saturday May 22, 2004 @06:11AM (#9224001)
    It's all about the interfaces. In a monolithic design, you don't have to specify any; you call kernel functions, set global variables, pass pointers and so on. It's all very permissive and "organic," from a developer's perspective... if something's broken halfway across the kernel, you can rewrite that instead of coding your subsystem to accept it.

    Microkernels demand interfaces of one sort or another, since the idea is that major components should be pushed out to userland. If the interface is permissive enough (the much-loved AmigaOS, for instance, was a 'microkernel' with no concept of memory protection), your system is modular, but your security/robustness gain is nil. (Stability may improve, to the extent that modularity helps write good code... we all know the arguments for it and against it.) If you start trying to design 'safe' interfaces, things can get complex, and you may take the performance hit.

    A worst case is, perhaps, when you do something insane like Apple Darwin's "xnu," and take a microkernel but implement an entire monolithic kernel beneath it as a 'service.' (I sincerely doubt they'll ever reap benefit from that decision, but you never know.)

    Meanwhile, QNX is a *very* fun system, but it's worth noting that running a nuclear plant or a medical monitoring device is a fairly different problem from supporting a massively-multiuser server. It works great, but some of the last of its five 9s or whatever (referring to the concept of 99.999% availability, or whatever's claimed) comes from the availability of a system watchdog to restart failed services... a microkernel concept, sure, but the rest of the stability comes simply from good coding under relatively stress-free environments (nobody'd call QNX the most *secure* OS, and we all know how long a Netware server can survive forgotten in a closet), and the monolithic/Linux attitude is that the service should simply not fail in the first place.

    (What makes QNX a win for control systems is the modularity -- convenient for developers trying to fit a system on 'embedded' hardware; instead of recompiling a kernel, you just make a new filesystem image -- and the 'realtime' scheduling, which gives developers a hope of determinism that's hard to come by with Linux or Windows... the 'kernel' won't unexpectedly block for two seconds to conduct some disk I/O, while with a 'general purpose' solution like Linux, you'd have to do a lot of profiling and rewriting of the system to assure yourself of the same guarantee. The 'realtime' Linux vendors have done said work, of course... but who'll actually be better-faster-cheaper for a particular implementation is something of a case-by-case issue.)
