Andy Tanenbaum on 'Who Wrote Linux'

Andy Tanenbaum writes "Ken Brown has just released a book on open source code. In it, he claims (1) to have interviewed me, and (2) that Linus Torvalds didn't write Linux. I think Brown is batting .500, which is not bad for an amateur (for people other than Americans, Japanese, and Cubans, this is an obscure reference to baseball). Since I am one of the principals in this matter, I thought it might be useful for me to put my 2 eurocents' worth into the hopper. If you weren't hacking much code in the 1980s, you might learn something." Tanenbaum's description of the interview process with Brown is classic. See also Slashdot's original story and Linus' reply.
  • I like the last bit (Score:4, Interesting)

    by gowen ( 141411 ) <gwowen@gmail.com> on Thursday May 20, 2004 @09:56AM (#9203408) Homepage Journal
    where, ten years after he first had this argument, he still feels obliged to rag on Linux's monolithic kernel as a bad design decision. This from a man who describes true multitasking and multi-threaded I/O as "a performance hack."

    Bitter much?
  • by 91degrees ( 207121 ) on Thursday May 20, 2004 @10:01AM (#9203455) Journal
    Why are microkernels a bad idea?
  • by MemoryDragon ( 544441 ) on Thursday May 20, 2004 @10:04AM (#9203486)
    Actually, since the interview is not reachable anymore, I'll assume what you said is right. But Tanenbaum has a point in one sense: a macrokernel does not really make that much sense. It is easier to program, because you simply have method calls instead of messages, but you run into driver recompiles, crashes due to the strong binding, etc. Mach, at least in its early incarnations, was not the best example of a microkernel, nor is the vaporware Hurd, which will probably be finished in about 100 years. But improved kernels derived from newer Mach incarnations have already shown how powerful and stable the concept can be. Two words: AIX and MacOS X, both based on Mach kernels and both excellent, fast operating systems.
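    A minimal sketch of the difference being described (an editor's illustration; every name in it is invented): the same disk read expressed once as a direct call, monolithic style, and once as a message handed to a driver server, microkernel style.

        /* Monolithic style: the "driver" is just a function you call. */
        #include <stdio.h>

        static int disk_read_direct(int block, char *buf, size_t len) {
            snprintf(buf, len, "data for block %d", block);
            return 0;
        }

        /* Microkernel style: build a message, "send" it, read the reply.
         * A real kernel would copy this between address spaces. */
        struct msg { int op; int block; char payload[32]; };
        enum { OP_READ = 1 };

        static void disk_server(struct msg *m) {   /* runs as its own process */
            if (m->op == OP_READ)
                snprintf(m->payload, sizeof m->payload, "data for block %d", m->block);
        }

        int main(void) {
            char buf[32];
            disk_read_direct(7, buf, sizeof buf);  /* one function call */
            printf("direct:  %s\n", buf);

            struct msg m = { OP_READ, 7, "" };
            disk_server(&m);                       /* stands in for send+receive */
            printf("message: %s\n", m.payload);
            return 0;
        }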
  • by CrypticSpawn ( 719164 ) * on Thursday May 20, 2004 @10:06AM (#9203515)
    Okay, the real father of DOS and Windows is here [diracian.com]
  • Devil's advocate (Score:2, Interesting)

    by A nonymous Coward ( 7548 ) * on Thursday May 20, 2004 @10:07AM (#9203520)
    Well, not quite advocate ... I can't get to the interview, it's acting ...slow ... for some reason.

    But I had the distinct impression from other quotes about the AdTI book that IFFFF you go with the idea that everything which looks like UNIX was inspired by UNIX, THEN it is not that big a stretch to say that Linux was not invented but is just a copycat derivative; therefore Linus did not invent Linux, he merely wrote it. Like I said, I have not seen the actual book (of course) or interviews, but maybe that is the quibble they are going with - a sock puppet for SCO, which is in turn Microsoft's own sock puppet.

    And for all you bozos who think this means I believe it, well, ha ha, the joke's on you for having no imagination in your narrow little cross-eyed brain.
  • by solidhen ( 642119 ) on Thursday May 20, 2004 @10:08AM (#9203540)
    if one is a macrokernel and the other a microkernel?

    I don't think Tanenbaum is bitter. He just wants to point this out.

  • Oh the irony. (Score:4, Interesting)

    by mumblestheclown ( 569987 ) on Thursday May 20, 2004 @10:12AM (#9203583)
    Of course Linus wrote Linux.

    Right?

    But who wrote the version of BASIC that started Bill Gates on his path to riches?

    The answer, of course, is that every creative engineering endeavour builds upon what came before. The detractors will call the step the developer in question took derivative, obvious, insignificant, or larcenous. The supporters will shine light upon the principal's ability to fuse diverse, unfocused, and/or unapplied parts into a cohesive whole.

    To misquote Grandpa Simpson, 'the fax machine isn't anything more than a waffle iron with (something or other that I forgot).'

    So the question is really this: those of you who accuse (probably correctly) whoever is claiming that Linus didn't write Linux of spreading FUD, have you ever written a similar post smearing Gates over BASIC? Pot, kettle?

  • The plot thickens (Score:5, Interesting)

    by DreamerFi ( 78710 ) <johnNO@SPAMsinteur.com> on Thursday May 20, 2004 @10:13AM (#9203597) Homepage
    Take a look at this post [google.com] on alt.os.development:

    Greetings,


    I'm conducting some research on behalf of the Alexis de Tocqueville
    Institution in Washington, DC. I'd like if someone could shed some
    light on the following questions:

    1. Describe the components of an operating system, besides the central
    component, the kernel.
    2. What do programmers usually develop first, the compiler or the
    kernel?
    3. Does this sequence impact the OS at all?
    4. What's more complicated, the kernel or the compiler?
    5. Why does operating system development take as long as it does? What
    are the three key things in operating system development that take the
    longest to perfect?
    6. Do you need operating systems familiarity to write a kernel? Yes /
    no? Elaborate please.
    7. In your opinion, why aren't there more operating systems on the
    market?

    Thanks for your time. Best,
    Justin Orndorff


  • by mr_z_beeblebrox ( 591077 ) on Thursday May 20, 2004 @10:23AM (#9203678) Journal
    Can't argue with that. I can't read the article, but I don't think Tanenbaum argues with that either. He probably said something like "Linus copied the basic Minix design" (true) and the reporter blew it out of proportion.

    Worse, he said "Linus wrote Linux as far as I know." Then, when the moron said that one person couldn't possibly write a kernel, Andrew listed six other examples (including himself) showing that one person could do exactly that. Remember, reading (the post) is fundamental.
    How is that for a grain of salt?
  • by IamTheRealMike ( 537420 ) on Thursday May 20, 2004 @10:24AM (#9203684)
    I must have missed the part where the MacOS X kernel was fast. I believe when Miguel de Icaza first compiled Mono on an Apple laptop that was faster than his desktop PC, the compile took almost twice as long (I don't have a citation; it was a blog entry some time ago).

    Basically, Mach has had severe performance problems, especially with things like file I/O, which they are now "fixing" the same way Microsoft did with NT: by moving some servers into the kernel.

  • QNX (Score:5, Interesting)

    by RAMMS+EIN ( 578166 ) on Thursday May 20, 2004 @10:25AM (#9203698) Homepage Journal
    Add QNX to that. It doesn't get much more microkernel than that, and I think no one would argue that QNX is slow.

    As for Darwin: it was certainly slow on my x86 laptop, but it's not lacking any speed on my iBook. I guess that says something about the quality of the x86 port (hint: there is no such thing).

    Poor Andy seems a bit too stuck in his "I am right and everyone who disagrees is wrong" mindset. I have a book here (Distributed Systems: Principles and Paradigms) in which he claims that a 20% performance loss is not so bad in exchange for all the benefits a microkernel brings. I most sincerely think that is a ridiculous statement, but fortunately, it doesn't have to be that way. I believe microkernels need not incur any significant performance penalty at all.
  • by Tarantolato ( 760537 ) on Thursday May 20, 2004 @10:26AM (#9203707) Journal
    Treating the micro v. monolithic debate as a solved problem ("microkernels win!") is as idiotic as suggesting that object orientation is the ideal solution to all programming problems.

    Apparently, the really trendy kids have decided that microkernels themselves are obsolete, and moved on to something called exokernels [c2.com]. I can't pretend to understand the distinctions involved.
  • On second thought (Score:3, Interesting)

    by argoff ( 142580 ) on Thursday May 20, 2004 @10:31AM (#9203749)
    After saying that, I got to thinking: perhaps Linus should be discredited - not in regard to being the father of Linux, but for being so neutral about freedom.

    When I think of Linus, I think of Charles Lindbergh - he was a prestigious hero and a brilliant person. But when he went on a visit to Germany, all he could see was the armada of beautiful planes and the amazing technology. He came back to the USA and proclaimed that we shouldn't fight Hitler. The point being, Lindbergh was a brilliant hero who was right about technology, but dead wrong about the importance of freedom - his stance didn't help him or us. This is how I feel about Linus: I'm thankful he put Linux under the GPL, and that he's responsible for bringing us this cool technology - but I think his casual/neutral attitude about freedom really sucks, and in the long term it will cause a lot of unneeded harm. The goal shouldn't be to win a popularity contest, or to fit in, but to secure our freedoms in the technology space. Heck, that's the force that made Linux grow so much compared to the alternatives to begin with.

    What amazes me is how so many people feel that there is nothing wrong with having a technology bias, but then these same people turn around and think you're a fruit for having a freedom bias. But political freedom isn't voodoo; it's proven itself more than enough - it's not just an opinion or wishful thinking. I just can't understand why so many people who should know better seem simply ashamed of it.
  • OS X (Score:5, Interesting)

    by ink ( 4325 ) * on Thursday May 20, 2004 @10:32AM (#9203754) Homepage
    To call OS X a Mach system is a bit disingenuous. All I/O operations are handled by the "BSD Subsystem" for performance reasons. This means that all file and network I/O (along with the file security descriptions) live in a "monolithic subsystem" of the microkernel. Needless to say, this is the most performance-intensive section of a UNIX (any?) system. A lot of the message passing, and the performance cost those messages would incur, is thereby avoided. Take a look at this URL: OS X System Overview [apple.com]. See that dotted line that stretches from the kernel to userland? Tanenbaum would not approve.
  • by holviala ( 124278 ) on Thursday May 20, 2004 @10:33AM (#9203760)
    improved kernels derived from newer Mach incarnations have already shown how powerful and stable the concept can be. Two words: AIX and MacOS X, both based on Mach kernels and both excellent, fast operating systems.

    I have no idea about OSX, but calling AIX fast is about the same as calling Windows user-friendly. Most people think it's like that, but once you actually try to do something, the truth reveals itself.

    I used to run my own server on AIX on a pretty fast RS/6000 and it was ridiculous. I finally replaced it with a slow Duron running Linux, which ran orders of magnitude faster (and cost about 1/20 of the RS). I also benchmarked Linux on the same RS6k and got 2x the performance compared to AIX on compilations.

  • Hard to refute (Score:2, Interesting)

    by segfault_0 ( 181690 ) on Thursday May 20, 2004 @10:39AM (#9203846)
    It's hard to refute Andy Tanenbaum's claims for a number of reasons: his background, his education - basically his history. But on another front, you can't even view Brown's document to compare - they have it password-protected.

    Putting a story at the top of the front page of your website but making it password-protected, while no other stories on your front page are similarly protected, smacks of gun-for-hire/skewed-view/anything-for-money. It appears to me they want you to read the headline but not the text. Sound familiar?

    These AdTI individuals don't appear to be concerned with their own credibility, and I hope whatever they are being paid is worth it.
  • by Brackney ( 257949 ) on Thursday May 20, 2004 @10:46AM (#9203915)
    This simply confounded me...

    "Finally, Brown began to focus sharply. He kept asking, in different forms, how one person could write an operating system all by himself. He simply didn't believe that was possible."

    How many people does Brown think wrote the original version of DOS? Microsoft wouldn't have gotten their big break were it not for Tim Paterson's SOLO coding effort.

    Great article though...
  • Re:On second thought (Score:5, Interesting)

    by Cytlid ( 95255 ) on Thursday May 20, 2004 @10:47AM (#9203930)
    This is how I feel about Linus: I'm thankful he put Linux under the GPL, and that he's responsible for bringing us this cool technology - but I think his casual/neutral attitude about freedom really sucks, and in the long term it will cause a lot of unneeded harm. The goal shouldn't be to win a popularity contest, or to fit in, but to secure our freedoms in the technology space.


    In one of the "Revolution OS" interviews, he states he's merely the engineer, and RMS is more like the philosopher. I think he wants to remove himself from the political aspect and just enjoy the work. Think Einstein and the atomic bomb.
  • Malicious intent (Score:5, Interesting)

    by mst76 ( 629405 ) on Thursday May 20, 2004 @10:55AM (#9203990)
    After reading this, the Tanenbaum interview, and this [linuxinsider.com], there is little doubt that Brown and the AdTI were set on their slander campaign against Linux from the start. From the AST interview, it is clear that he is just fishing for incriminating quotes. It is well known that initial Linux development took place on (and was inspired by) Minix. With selective quoting, it's likely that he will have AST seemingly accusing Linus of stealing Minix. One of his more persuasive arguments to the layman will be that it took the highly experienced professor Tanenbaum years to develop Minix, while the kid Linus hacked his OS together in 6 months. Of course, he knows this is not a truthful representation, but that doesn't matter as long as it gets him headlines. We (and AST, it seems) may regard people like Brown and McBride as dumb and ignorant. But we should beware: these people are of a kind that we do not encounter often day-to-day - people with malicious intent.
  • by Cytlid ( 95255 ) on Thursday May 20, 2004 @10:57AM (#9204005)
    I read the interview and followed the other two articles, the original and Linus' reply. I think this stuff is a must-read. [conspiracy theory] Perhaps someone, somewhere funding anti-Linux FUD is realizing the whole SCO thing is falling through? [/conspiracy theory] It outlines the differences between what Linus believes in and what Tanenbaum believes in. The more articles I see like this, the better I feel. There are a lot of people out there who know the truth, and I don't care how many billions you have - your underhanded sneakiness will be revealed. This whole situation has been the poster child for Open Source.
  • Re:history... (Score:3, Interesting)

    by BerntB ( 584621 ) on Thursday May 20, 2004 @10:58AM (#9204017)
    Who is writing the history? The winner, or the one who loses?
    In almost all searches for truth, you will not reach it. But you can get more or less close.

    Yes, you can't reach perfect models of reality in multitudes of areas (history, physics, etc.), but to claim that they are all bunk is a bit overblown.

    It's like saying that since we can't freeze anything to absolute zero, all thermometers are worthless. Physicists can get very, very close to absolute zero -- and measure how far away we are.

    (On topic, I loved Tanenbaum's books when I studied comp sci, and would still have lots of them if people had returned all the things I've lent out over the last 20 years... :-) )

  • by Derek ( 1525 ) on Thursday May 20, 2004 @11:00AM (#9204034) Journal
    What does this quote reference?

    "Some of you may find it odd that I am defending Linus here. After all, he and I had a fairly public "debate" some years back. My primary concern here is getting trying to get the truth out and not blame everything on some teenage girl from the back hills of West Virginia."

    Just curious...
    -Derek
  • by chez69 ( 135760 ) on Thursday May 20, 2004 @11:02AM (#9204060) Homepage Journal
    What a bunch of crap. How old was the RS/6000 and what model was it? How much memory did it have?
  • by Tarantolato ( 760537 ) on Thursday May 20, 2004 @11:06AM (#9204097) Journal
    He also says that Brown, the person who interviewed him, was completely clueless and obviously pushing an agenda.

    I was semi-surprised at how retarded Brown came off in the article. I mean, everything that's come out of his "institution" would lead one to expect that, but somehow I thought maybe it was just an act for the punters. Turns out he really is that dumb. Weird.
  • by arkanes ( 521690 ) <arkanes@NoSPam.gmail.com> on Thursday May 20, 2004 @11:21AM (#9204270) Homepage
    I'm aware of that, and it's addressed in the flamewar. The freedom of Linux is one thing that allowed this - if Tanenbaum (and PH) had allowed forks, then Minix could have had all this without compromising the educational aspects. Further, today all that stuff IS on low-end student hardware, and yet Minix doesn't have it.
  • by Anonymous Coward on Thursday May 20, 2004 @11:31AM (#9204384)
    http://groups.google.com/groups?dq=&start=75&hl=en&lr=&ie=UTF-8&group=linux.samba&selm=1Xf0Z-7VF-11%40gated-at.bofh.it

    From: Justin Orndorff (raison__d_etre@hotmail.com)
    Subject: [Samba] research inquiry
    This is the only article in this thread
    View: Original Format
    Newsgroups: linux.samba
    Date: 2004-05-18 08:50:10 PST

    Greetings,

    I'm currently doing research into corporate contributions towards open
    source projects, such as Linux. One of the recent Credits Files lists Mr.
    Anton Blanchard as a contributor. Is Mr. Blanchard still an employee with
    the company?

    Also, does the company have any policies regarding open source contributions
    by employees? If so, are there any differences between on and off the clock
    contributions?

    Thanks very much for your time and apologies for posting on your mailing
    list. I did not find any other contacts on your website related to this side
    of your business.

    Best,
    Justin Orndorff
  • Re:Oh the irony. (Score:1, Interesting)

    by Anonymous Coward on Thursday May 20, 2004 @11:33AM (#9204412)
    Actually, it is my understanding that most of the real work on Altair BASIC was done by Paul Allen, and it was inspired by work that had been done on DEC BASIC during an internship at Digital while he and Gates were students. It is also well known that Gates 'borrowed' computer resources from Harvard for his own commercial interests in starting Micro-Soft (as it was then known). So if you buy Ken Brown's arguments, then Bill Gates and Microsoft are guilty of the same thing that they are accusing Linus of.
  • Kenneth Brown (Score:3, Interesting)

    by Seanasy ( 21730 ) on Thursday May 20, 2004 @11:37AM (#9204445)

    He's also affiliated with the Democratic Century Fund [dcfund.net]:

    We offer the opportunity for competitive rates of return on investment, but can manage funds only for qualified individuals and institutions who appreciate the potential and risks inherent in dealing in these markets.

    Because we protect the confidentiality of our investors, and our own methods, we can make only certain information about our firm available on the internet, through the operating links below.

    Sounds like "If you've got a lot of money that you can afford to lose and don't ask too many questions, invest with us." Most of the links on that site are broken, too.

    It always amazes me that people have the nerve to pull stunts like this guy does. I just have to believe that his Karma will catch up with him eventually.

  • Re:Article text (Score:4, Interesting)

    by RAMMS+EIN ( 578166 ) on Thursday May 20, 2004 @11:50AM (#9204613) Homepage Journal
    ``Windows IS a micro-kernel based OS''

    That may be true if you look only at the kernel proper, but I'm willing to bet that there are "userspace" processes with kernel access to an extent that makes the system actually a megakernel.

    Addressing your comments about microkernels:

    I don't believe microkernels are any more secure, or insecure, than macrokernels. As for stability, I would rather expect microkernels to be better there. The added flexibility is immense, and has real-world uses.

    Personally, I favor a system where nearly everything resides in userspace. It might even be feasible to make the system so flexible that different users or even processes can use different memory managers, schedulers, etc. There are a number of OS projects exploring this direction, but I don't think we're quite there yet.
  • Re:QNX (Score:4, Interesting)

    by JesseL ( 107722 ) * on Thursday May 20, 2004 @12:03PM (#9204815) Homepage Journal
    If you're using dd to copy a partition and you find it slow, how could you possibly attribute that to the filesystem? Doesn't dd just do a raw copy and ignore the filesystem in a case like that?
  • Re:OS X (Score:4, Interesting)

    by Wesley Felter ( 138342 ) <wesley@felter.org> on Thursday May 20, 2004 @12:14PM (#9204964) Homepage
    Yes, Apple made a big mistake with the Darwin kernel. They get none of the advantages of a monolithic kernel (speed) and none of the advantages of a microkernel (isolation). They probably would have been better off just throwing away the NeXT kernel and porting the FreeBSD kernel to PPC.
  • by Anonymous Coward on Thursday May 20, 2004 @12:33PM (#9205232)
    Wtf!?

    Why introduce a new name at all?

    And you are aware of SGI's 3D infrastructure in X11, named GLX? Also, X.Org and everyone else strongly discourage calling X "XWindows"; it's X or "The X Window System". Also, there are production embedded Linux systems without GNU (my VoIP modem, for example), and less-embedded systems without X (my router, for example)... Not every Linux is Mandrake.

    end rant
  • Re:Article text (Score:5, Interesting)

    by CrayzyJ ( 222675 ) on Thursday May 20, 2004 @12:35PM (#9205255) Homepage Journal
    ``Windows IS a micro-kernel based OS''

    No, no, no. Windows is a monolithic kernel. Using Andy's definitions, the drivers run in kernel space, thereby making it monolithic.

    "but I'm willing to bet that there are "userspace" processes with kernel access to an extent that makes the system actually a megakernel."

    Nope, this is false. In this sense, Windows NT (not counting 9x/Me, because they suck) is identical to Linux. User-space processes require system calls and a kernel crossing to get access to any kernel services.

    "I don't believe microkernels are any more secure, or insecure, than macrokernels."

    I strenuously disagree. In both Linux and Windows NT, 80+% of crashes are due to problems in drivers. There is a ton of research to back this up (Engler et al., I think). If the drivers lived in user space, a la microkernels, then 80+% of crashes would just disappear. Most people who complain about Windows crashes do not realize the driver writers are to blame. Yes, Linux will have the same problem as drivers are ported; this, too, has been pointed out in much research. THIS is the very reason why Andy dislikes monolithic kernels.
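    A toy demonstration of that point (an editor's sketch, POSIX-only, names invented - this is not how NT or Linux actually structure drivers): the "driver" runs as a separate process, so when it dereferences a null pointer only that process dies, and the rest of the system can notice and restart it.

        #include <stdio.h>
        #include <sys/wait.h>
        #include <unistd.h>

        static void buggy_driver(void) {
            int *volatile p = NULL;
            *p = 42;                      /* the classic driver bug */
        }

        int main(void) {
            pid_t pid = fork();
            if (pid == 0) {               /* child: the isolated "driver" */
                buggy_driver();
                _exit(0);
            }

            int status;
            waitpid(pid, &status, 0);     /* parent: the rest of the system */
            if (WIFSIGNALED(status))
                printf("driver died (signal %d); system still up - restart it\n",
                       WTERMSIG(status));
            return 0;
        }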

  • Tanenbaum is right (Score:1, Interesting)

    by Anonymous Coward on Thursday May 20, 2004 @12:35PM (#9205266)
    In my opinion, Tanenbaum is right in his discussion of the micro vs. monolithic kernel issue. This doesn't mean Linux is a mess, but it could have been better. The information needed to do better was available when Linus developed Linux. Don't get me wrong - I never wrote an entire operating system like Linux or Minix - but my point is that the knowledge was available and it was not used in the (initial) development of Linux. Perhaps Linus really is convinced that monolithic is preferable, but that is not enough to debunk Tanenbaum's reasoning concerning microkernels.
  • by buzzdecafe ( 583889 ) on Thursday May 20, 2004 @12:41PM (#9205342)
    Treating the micro v. monolithic debate as a solved problem ("microkernels win!") is as idiotic as suggesting that object orientation is the ideal solution to all programming problems.

    Tell that to Tanenbaum:

    From: ast@cs.vu.nl (Andy Tanenbaum)
    Newsgroups: comp.os.minix
    Subject: LINUX is obsolete
    Date: 29 Jan 92 12:12:50 GMT

    ". . . While I could go into a long story here about the relative merits of the two designs, suffice it to say that among the people who actually design operating systems, the debate is essentially over. Microkernels have won . . ."

    Cited from here [oreilly.com]
  • by Anonymous Coward on Thursday May 20, 2004 @12:56PM (#9205526)
    I think that clumsy line is the main reason I gave up watching B5.

    Well, if that's the reason you gave up on B5, you gave up mighty early - that line was only present in the opening of the first season. Every season of B5 has a different opening blurb, and that line was dropped (along with most of the rest of the Season 1 blurb) at the opening of Season 2.
  • Re:Fighting features (Score:2, Interesting)

    by Trepalium ( 109107 ) on Thursday May 20, 2004 @01:14PM (#9205777)
    I think you have Apple's OS design completely backwards. Old Macs were completely monolithic (not unlike their hardware), and virtually everything ran in the same address space. Mac OS X changes things by running the monolithic BSD-derived kernel on top of the Mach microkernel. The real reason Apple switched was simply that there were too many design limitations in the original MacOS, and it was too difficult to tack on the features people expect these days, like separate (protected) address spaces, fully preemptive multitasking, and multi-user capabilities. All those features could've been grafted onto the original MacOS kernel, but the engineers must've decided that it would be easier to graft MacOS onto a modern kernel rather than the other way around.
  • Re:OS X (Score:3, Interesting)

    by PCM2 ( 4486 ) on Thursday May 20, 2004 @01:18PM (#9205846) Homepage
    All I/O operations are handled by the "BSD Subsystem"
    If that's true, then why do you see so many Readme notices explaining that to install certain software, you need to be running Mac OS X 10.x with the BSD subsystem installed? In fact, I never understood that qualification -- I thought it was installed by default?

    (I guess the easy answer is that you need to install BSD ... to run all your zombie processes, har har har!)

  • by PCM2 ( 4486 ) on Thursday May 20, 2004 @01:31PM (#9206040) Homepage
    ...I do not think it means what he thinks it means.
    While this is not free software in the sense of "free beer", it was free software in the sense of "free speech", since all the source code was available for only slightly more than the manufacturing cost.
    How does selling me low-priced goods give me the right of free speech?
  • by Nurf ( 11774 ) * on Thursday May 20, 2004 @02:08PM (#9206556) Homepage
    Treating the micro v. monolithic debate as a solved problem ("microkernels win!") is as idiotic as suggesting that object orientation is the ideal solution to all programming problems.

    I'll agree with that. However, I can say that for the stuff I do, microkernels win. I've written a microkernel RTOS for an embedded system, and it had the following advantages for me:

    1) It was easy to write. (Very modular)
    2) It is easy to maintain. (Very modular, and because all interaction is done with messaging, you don't worry about the code you are writing now interacting in some unknown way with something else - i.e., you can ignore the rest of the system except for the messages you send to it. See the sketch after this post.)
    3) It is easy to give it strong deterministic real-time response. This is a big thing for me in the applications I use it for. Data ends up flowing from one task to another, and I just have to make sure my scheduler doesn't mess that up.
    4) The overhead introduced by message passing was negligible. (The RTOS was implemented to replace an existing system, and did so comfortably)
    5) It's really easy to make stable and reliable systems, because everything is chopped up into small, well-understood sections with well-defined interaction between sections.

    Microkernels might have slightly lower throughput than monolithic macrokernels, but I am not running a batch transaction processor.

    For desktop use, I want controlled latency and reliability. I don't feel that Linux gives me all that it should in those departments, though I use it because it is better than most (with some patching). Kernel modules feel like the worst of both worlds to me.

    So, given the priorities I stated above, I think I would prefer a microkernel OS, all other things being equal. I'd jump ship from Linux to one if most other things were equal.

    Other people will have other priorities and I encourage them to use whatever works for them.
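    A minimal sketch of the structure the parent describes (an editor's illustration, not code from any real RTOS; all names invented): two tasks that interact only through a mailbox, so each can be understood in isolation.

        #include <stdio.h>

        /* A fixed-size mailbox: the only interface between tasks. */
        #define QLEN 8
        struct mailbox { int buf[QLEN]; int head, tail; };

        static int mb_send(struct mailbox *mb, int v) {
            if ((mb->tail + 1) % QLEN == mb->head) return -1;  /* full */
            mb->buf[mb->tail] = v;
            mb->tail = (mb->tail + 1) % QLEN;
            return 0;
        }

        static int mb_recv(struct mailbox *mb, int *v) {
            if (mb->head == mb->tail) return -1;               /* empty */
            *v = mb->buf[mb->head];
            mb->head = (mb->head + 1) % QLEN;
            return 0;
        }

        /* Producer task: its only effect on the system is mb_send(). */
        static void sampler(struct mailbox *out, int tick) {
            mb_send(out, tick * 10);          /* pretend sensor reading */
        }

        /* Consumer task: its only input is mb_recv(). */
        static void logger(struct mailbox *in) {
            int v;
            while (mb_recv(in, &v) == 0)
                printf("logged %d\n", v);
        }

        int main(void) {
            struct mailbox mb = {0};
            for (int tick = 0; tick < 3; tick++)  /* stand-in scheduler */
                sampler(&mb, tick);
            logger(&mb);
            return 0;
        }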
  • by Sunda666 ( 146299 ) on Thursday May 20, 2004 @02:14PM (#9206647) Homepage
    To be 100% fair, it never was a microkernel anyway. But in the 3.x days the NT kernel was small enough to pass as a microkernel (falsely, eh - based on sheer size/complexity alone). But GUI performance sucked and had some issues (regular users being unable to change the desktop resolution due to lack of privileges comes to mind), and since Windows is all about the GUI, they decided to put the GUI (and a lot of other drivers too) in kernel space. So now it is not a microkernel, and the kernel is not small anymore.

    Peace.
  • by Sunda666 ( 146299 ) on Thursday May 20, 2004 @02:19PM (#9206729) Homepage
    No, it is not. It may be a "microkernel in design", as you say, but it is not a "microkernel in implementation". If it were a real microkernel, some random shitty soundcard driver would not be able to crash the entire OS, as usually happens. And of course it would not have the video/audio/3D performance it has today. A true microkernel means too much context switching, which is a show-stopper (at least on x86).

    peace.
  • by prockcore ( 543967 ) on Thursday May 20, 2004 @02:37PM (#9207002)
    I think they are. A 10% hit in performance is going to get eaten up as hardware gets better and faster. But that 50% increase in manageability and flexibility is going to pay dividends well into the future.

    Who's saying anything about a 10% hit in performance?

    Here are some benchmarks between Panther and Yellow Dog Linux on a dual 1.25GHz G4.

    Unix Bench Scores (bigger is better)
    for Linux: 316.4
    for Panther: 131.0

    lmBench fork (in microseconds, smaller is better)
    for Linux: 352
    for Panther: 1402

    lmBench TCP Latency (in microseconds, smaller is better)
    for Linux: 46.3
    for Panther: 76.8

    lmBench Pipe performance (in MB/s, bigger is better)
    for Linux: 419.0
    for Panther: 216.0

    We're talking about huge performance deficits across the board - everything from process creation and destruction to context switching to communication latencies. OSX is much, much slower. I attribute this entirely to the Mach microkernel.
  • by denlin ( 733557 ) on Thursday May 20, 2004 @02:44PM (#9207083) Journal
    About 8 years ago, while still an undergrad, I took an OS theory class where, over the course of the semester, we built crude implementations of the various parts needed for a primitive OS. It worked, albeit not wonderfully, but we were all quite proud of ourselves at the end of the semester. I learned more in that one class than in nearly all the rest combined.
  • by doug_wyatt ( 532721 ) on Thursday May 20, 2004 @02:52PM (#9207215)
    Actually, with most of the exokernels that actually existed in the wild, even things like the filesystem and the network stack lived within the process of the application using them. Apps that shared the same implementation would use a shared library - purely a performance hack so that the text segments didn't use up lots of memory - but each app could choose to implement its own version. You could think of an exokernel as a monolithic kernel, except that the abstractions the kernel presents to the process are much lower-level (e.g. disk blocks and packet filters) than with traditional monolithic kernels (e.g. files and sockets). The main premise of exokernels was that OSes have traditionally secured hardware resources, multiplexed hardware resources, and abstracted hardware resources - the first two are good ideas, but the last is better left up to the application, since it knows what it's doing with them.
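    A tiny sketch of that split (an editor's illustration only - no real exokernel API looks like this; every name is invented): the "kernel" exports raw blocks, and the notion of a "file" lives in a library linked into the application.

        #include <stdio.h>
        #include <string.h>

        #define BLOCK_SIZE 16
        static char disk[4][BLOCK_SIZE] = { "hello, ", "exokernel" };

        /* The whole "kernel" interface: multiplexed but unabstracted. */
        static void sys_block_read(int block, char out[BLOCK_SIZE]) {
            memcpy(out, disk[block], BLOCK_SIZE);
        }

        /* Application-level library: *it* decides what a "file" is.
         * Here, a file is just an ordered list of block numbers. */
        struct app_file { int blocks[2]; int nblocks; };

        static void app_read_file(const struct app_file *f, char *buf, size_t len) {
            buf[0] = '\0';
            for (int i = 0; i < f->nblocks; i++) {
                char blk[BLOCK_SIZE];
                sys_block_read(f->blocks[i], blk);
                strncat(buf, blk, len - strlen(buf) - 1);
            }
        }

        int main(void) {
            struct app_file f = { {0, 1}, 2 };
            char buf[64];
            app_read_file(&f, buf, sizeof buf);
            printf("%s\n", buf);    /* hello, exokernel */
            return 0;
        }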
  • by holviala ( 124278 ) on Thursday May 20, 2004 @03:20PM (#9207581)
    What a bunch of crap. How old was the RS/6000 and what model was it? How much memory did it have?

    It was a 43P-140 with a 332MHz 604e and half a gig of memory. Of course it's slow now, but back then 604e's were the standard RS6k CPUs and the 332 was the fastest of them (I know - I worked with SP clusters @ IBM at the time, and ALL of them had 604e CPUs). The Duron was... umm... a 600MHz one with 128 megs of memory (and yes, a few years younger than the RS6k). With the RS/AIX my load was around 40 every day and the machine was unusable - with the Duron the loads were under 1.

    I don't know what you do to your AIX machines to make them fast (neither do the guys @ IBM), but even the POWER4s that I administer now are slow. The floating-point performance is excellent, but that's about it. They're no use for any real work - even the Sparcs we have are faster (and they're pathetic too).

    Me - I switched to x86 hardware with Linux and haven't looked back.

  • by __aalomb7276 ( 18802 ) on Thursday May 20, 2004 @03:42PM (#9207871)
    The comment on Ashcroft is out-of-place in an otherwise blunt article. That snipe and the cocky back-patting smack of elitism.

    I support the Patriot Act because it works. Furthermore, nobody has shown that it has been abused since it became law. Tanenbaum needs to be a little more levelheaded about American laws.
  • by crucini ( 98210 ) on Thursday May 20, 2004 @04:12PM (#9208254)
    Could you write a "microkernel" that becomes a macrokernel if compiled with certain parameters? Message passing would be done via a macro which can become simply a function call.

    Then kernel developers can run the "micro" version and have the freedom to easily hack parts while the system is running. But production machines could run the "macro" version.
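    A toy version of that idea (an editor's sketch with invented names, nothing more): one call site, FS_READ(), that is a message send in the "micro" build and collapses to a plain function call in the "macro" build. Compile with -DMICROKERNEL to flip it.

        #include <stdio.h>

        static int fs_read(int block) {        /* the actual service code */
            return block * 2;                  /* stand-in for real work */
        }

        #ifdef MICROKERNEL
        /* Micro build: route the request through a message dispatcher. */
        struct msg { int op, arg, ret; };
        static void dispatch(struct msg *m) { m->ret = fs_read(m->arg); }
        static int FS_READ(int block) {        /* wraps a "message send" */
            struct msg m = { 1, block, 0 };
            dispatch(&m);
            return m.ret;
        }
        #else
        /* Macro build: the same call site becomes a direct function call. */
        #define FS_READ(block) fs_read(block)
        #endif

        int main(void) {
            printf("read -> %d\n", FS_READ(21));   /* same source either way */
            return 0;
        }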
  • by LWATCDR ( 28044 ) on Thursday May 20, 2004 @04:27PM (#9208439) Homepage Journal
    Tanenbaum is a great teacher, and if it were not for Minix there might never have been a Linux.
    The problem is that Tanenbaum does not seem to understand that the world outside of universities has a different view of superiority.
    The x86 ISA sucks. The Alpha, 68k, SPARC, ARM, and PowerPC were - and in some cases still are - much better ISAs. But the Intel ISA won: more people use it, and it gives a HUGE bang for the buck!
    Tanenbaum was too interested in Minix as a teaching tool and in "pure" OS design. Linux at the time wanted more bang for the buck. It might be hard to believe, but Linux used to run only on the 386 family of chips. If I remember correctly, Linux was supposed to be a PC-only OS.
    Tanenbaum may be right. Of course, there are also things like Plan 9. Frankly, I am a little tired of Unix/Linux being the best OS. I want to see some new ideas in OSes and GUIs. Being a big OOP fan, I would like to see an OO OS. Maybe I will start writing my own OS like Linus and Tanenbaum did. Odds are I will develop it on Linux :)
    Thanks, Mr. Tanenbaum, and thanks, Linus.
  • by _Sprocket_ ( 42527 ) on Thursday May 20, 2004 @04:57PM (#9208842)


    Think someone is still sore about Minix's destiny compared to Linux's?


    Hardly - if you can believe what he says. For example:

    The reason for my frequent "no" was that everyone was trying to turn MINIX into a production-quality UNIX system and I didn't want it to get so complicated that it would become useless for my purpose, namely, teaching it to students. I also expected that the niche for a free production-quality UNIX system would be filled by either GNU or Berkeley UNIX shortly, so I wasn't really aiming at that.

    So why slam Linus? Because Linus did something he fundamentally disagrees with. The disagreement [fluidsignal.com] has been public and heated. Andy alludes to this today. But he also notes:

    Some of you may find it odd that I am defending Linus here. After all, he and I had a fairly public "debate" some years back. My primary concern here is trying to get the truth out and not blame everything on some teenage girl from the back hills of West Virginia. Also, Linus and I are not "enemies" or anything like that. I met him once and he seemed like a nice friendly, smart guy. My only regret is that he didn't develop Linux based on the microkernel technology of MINIX.

    So yes - he still feels the same way after all these years. He acknowledges this disagreement. And notes that it's nothing personal.

    Bitter of Linux's success? I don't see it.
  • by Laxitive ( 10360 ) on Thursday May 20, 2004 @05:05PM (#9208918) Journal
    Ok, I'm going to do some armchair analysis here. I think it _will_ come down to an X% difference in performance, in general. Consider what happens when a process in Linux makes a syscall. It triggers a user->kernel context switch. The kernel does some work, potentially blocking the user process, and eventually wakes the process up again and does a kernel->user context switch.

    In the message passing model, generally, the syscall ends up sending a message to some system server. Now that involves a user->kernel switch to trigger the message send, and a kernel->user switch for the message to be delivered. When a response is sent from the system server, the same hit is incurred again. So essentially, we have about twice the number of context switches going on.

    But user<->kernel stuff isn't really the major problem with message passing. It's the kernel<->kernel interactions - what used to be a simple function call - that with microkernels suddenly become two message sends. That seems rather daunting at first, but there are many optimization methods to resolve it.

    For example, if two system servers are going to talk with each other extensively, they can do some initial handshaking to establish a shared memory area and use that to communicate, bypassing the kernel entirely. There will still be a latency hit for that _particular_ interaction, but the overall message-passing overhead will be reduced.

    Another idea is to use shared memory for uploading, for example, write-protected code. It might be possible to have two processes share a chunk of memory and, in that memory, include code to access structures stored within that shared memory. If you design a standardized system for doing something like that, you might be able to bring it all down to the level of a function call again.

    There are lots of options. I don't know how feasible those options are - but they haven't really been tried to any great extent. There's nothing in the microkernel arena equivalent to the extensively tested and tweaked, freely developed Linux.

    About modularization, I disagree with you. The problems you are describing stem from over-engineering. Modular systems don't need to be hard to understand or work with. Their whole purpose is to partition off related code into islands, have them interact with well-defined interfaces, so that it's easier to reason about the whole by reasoning about the pieces. Badly engineered modular systems can be just as horrible as badly engineered monolithic systems. That's not really saying anything about either approach.

    -Laxitive
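    A bare-bones sketch of that shared-memory handshake (an editor's illustration under POSIX assumptions; a real system would use proper synchronization rather than this spin flag): two processes map one region, and after setup, a request/reply crosses it with no kernel call per message.

        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <unistd.h>

        struct channel { volatile int state; int request; int reply; };

        int main(void) {
            /* One shared region, visible to both sides after fork(). */
            struct channel *ch = mmap(NULL, sizeof *ch, PROT_READ | PROT_WRITE,
                                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            if (ch == MAP_FAILED) return 1;
            ch->state = 0;

            if (fork() == 0) {            /* "server": waits for a request */
                while (ch->state != 1) ;  /* spin - no syscall per message */
                ch->reply = ch->request + 1;
                ch->state = 2;
                _exit(0);
            }

            ch->request = 41;             /* "client": post request, wait */
            ch->state = 1;
            while (ch->state != 2) ;
            printf("reply = %d\n", ch->reply);   /* reply = 42 */
            wait(NULL);
            return 0;
        }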
  • Re:Oh the irony. (Score:3, Interesting)

    by SubtleNuance ( 184325 ) on Thursday May 20, 2004 @05:21PM (#9209093) Journal
    Isaac Newton to fellow scientist Robert Hooke, 5 February 1676: "If I have seen further it is by standing on the shoulders of giants."

    This wise adage is why intellectual property, if implemented as the capitalists envision it, will lead us to another Dark Age, a totalitarian state, or both...

  • by metamatic ( 202216 ) on Thursday May 20, 2004 @06:06PM (#9209460) Homepage Journal
    "We" weren't all running DOS, though.

    I was running Atari TOS. AST's microkernel design ran on my Atari ST and gave me a working Unix-like system. Linus's hack only ran on the 386, and was useless to me. Arguing that it was superior when it only ran on a machine I didn't have was pointless.
  • by Pseudonym ( 62607 ) on Thursday May 20, 2004 @09:17PM (#9210704)
    But there are also drawbacks (message passing is terribly hard to make secure in a multi-tasking context, and is frequently slower than dirt).

    While I take your point on security, to be fair, in a decent modern microkernel message passing is a highly optimised operation. In fact, the kernel is usually optimised for precisely two tasks: scheduling and message passing. When that's pretty much all your kernel does, there are a few implementation tricks you can use that avoid a lot of the machinery you'd otherwise find in a kernel.

    Moreover, a point which is not always appreciated is that monolithic kernels often have similar costs. You still have to copy data from user space to kernel space and back again. Compare this with a modern microkernel, which copies directly from address space to address space. The microkernel still needs to swap tasks, but a) it's trivial to schedule, and b) only on the x86 is swapping tasks expensive - and sometimes not even then (e.g. using L4's small address space optimisation).

    IMO, microkernels and monolithic kernels are so different as to be almost incomparable.
