Linux Software

Andy Tanenbaum on 'Who Wrote Linux' 668

Posted by michael
from the besides-the-tooth-fairy-of-course dept.
Andy Tanenbaum writes "Ken Brown has just released a book on open source code. In it, he claims (1) to have interviewed me, and (2) that Linus Torvalds didn't write Linux. I think Brown is batting .500, which is not bad for an amateur (for people other than Americans, Japanese, and Cubans, this is an obscure reference to baseball). Since I am one of the principals in this matter, I thought it might be useful for me to put my 2 eurocents' worth into the hopper. If you weren't hacking much code in the 1980s, you might learn something." Tanenbaum's description of the interview process with Brown is classic. See also Slashdot's original story and Linus' reply.
  • I like the last bit (Score:4, Interesting)

    by gowen (141411) <> on Thursday May 20, 2004 @09:56AM (#9203408) Homepage Journal
    where, ten years after he first had this argument, he still feels obliged to rag on Linux's design as a monolithic kernel as a bad design decision. This from a man who describes true multitasking and multi-threaded I/O as "a performance hack."

    Bitter much?
    • by MemoryDragon (544441) on Thursday May 20, 2004 @10:04AM (#9203486)
      Actually, since the interview is not reachable anymore, I assume what you said is right. But Tanenbaum is right in his own sense: a macrokernel does not really make that much sense. It is easier to program, because you simply have method calls instead of messages, but you run into driver recompiles, crashes due to the strong binding, etc. Mach, at least in its early incarnations, was not the best example of a microkernel; neither is the vaporware Hurd, which will probably be finished in about 100 years. But improved kernels derived from newer incarnations of Mach have already shown how powerful and stable the concept can be. Two words: AIX and MacOS X, both based on Mach kernels and both excellent and fast operating systems.
      • by Anonymous Coward on Thursday May 20, 2004 @10:07AM (#9203519)
        ...and to make them perform reasonably well, both of these have hacks (server collocation, etc.) that remove the whole reason for microkernels in the first place.

        Don't fall for the hype.

        On the other hand, QNX is actually pretty true to the concept.
        • by smitty_one_each (243267) on Thursday May 20, 2004 @05:44PM (#9209297) Homepage Journal
          I contest the characterization of the last para as 'ragging'.

          The tone I got was of an affectionate tip o' the hat to what is surely one of the all-time classic flame wars.

          Besides, when you >make menuconfig, and you go through there and choose whether you want various bits compiled into the kernel or loaded as modules,

          isn't that an admission that the 'truth' on the modular/monolithic argument falls somewhere in the mote in the eye of the Tanenbaum/Torvalds Tempest?
      • by gowen (141411) <> on Thursday May 20, 2004 @10:12AM (#9203585) Homepage Journal
        Well, there are certainly benefits to a modular micro-kernel design. I wouldn't deny it (and haven't). But there are also drawbacks (message passing is terribly hard to make secure in a multi-tasking context, and is frequently slower than dirt; add to that some of the braindead design decisions of the i386 w.r.t. privileged processes...). Yuck.

        Treating the micro v. monolithic debate as a solved problem ("microkernels win!") is as idiotic as suggesting that object orientation is the ideal solution to all programming problems.
        • by brunson (91995) * on Thursday May 20, 2004 @10:23AM (#9203681) Homepage

          I think he made such a big stink about it during the infamous flame war that, even if it was somehow proven that a macro-kernel is a better design, Tanenbaum could never back down from his premise without losing face.
          • by strobert (79836) on Thursday May 20, 2004 @03:46PM (#9207923) Homepage
            Very true. Also, Tanenbaum, I think, ignores the kernel modules and abstraction layers in the kernel. One of the points of a message-passing system is to have proper interfaces defined so that subsystems can be replaced and interchanged. For people who have watched kernel development over the years, those design benefits are basically in the Linux kernel. Yeah, it may be "monolithic" in the OS kernel theory design aspect, but it incorporates the design abstractions of a "microkernel" without the performance hits (for the most part, on both counts).
        • by Tarantolato (760537) on Thursday May 20, 2004 @10:26AM (#9203707) Journal
          Treating the micro v. monolithic debate as a solved problem ("microkernels win!") is as idiotic as suggesting that object orientation is the ideal solution to all programming problems.

          Apparently, the really trendy kids have decided that microkernels themselves are obsolete, and moved on to something called exokernels []. I can't pretend to understand the distinctions involved.
          • by CustomDesigned (250089) on Thursday May 20, 2004 @10:50AM (#9203944) Homepage Journal
            Exokernels reinvent IBM's VM system. "An Exokernel securely multiplexes the raw hardware, and application libraries directly implement traditional OS functions." This does not mean that applications must now include their own drivers for every piece of hardware they might use. It means that drivers can now be packaged as shared libraries in user space rather than as kernel modules.

            To summarize, let me call the part that securely multiplexes hardware the "kernel".

            • Monolithic makes drivers share address space with the "kernel".
            • With Microkernel, "kernel", drivers, filesystems, applications, etc each get their own address spaces.
            • Exokernel makes drivers share address space with applications. (Hopefully, filesystems get their own process and address space.)
            As you can see, as soon as you start partitioning applications into separate processes for security and robustness, the distinction between Exokernel and Microkernel becomes rather vague. The advantage of the Exokernel or VM approach is that you get the flexibility of keeping things like filesystems in a separate process for security and robustness, and things like video drivers in the same address space for performance. You might even have an X server as a separate process, but still allow full screen mode games that directly call the driver libraries for performance.

            IBM's VM was never that popular in its raw "Exokernel" mode with drivers in application space. However, it is still hugely popular as a way to run multiple Operating Systems as the "applications". Your mainframe can securely run multiple instances of S/390 Linux and traditional mainframe systems together.

          • by 13Echo (209846) on Thursday May 20, 2004 @01:21PM (#9205898) Homepage Journal
            I personally prefer nanokernels. A nanokernel is a (kernel * 10^-9). It works out as being much smaller and faster, since a microkernel is only (kernel * 10^-6). Yeah, yeah? Let's see if Andy Tanenbaum can explain that one! Soon, Linux hackers will be jumping ship because they will be 1337 enough to write a PICOKERNEL!
        • by buzzdecafe (583889) on Thursday May 20, 2004 @12:41PM (#9205342)
          Treating the micro v. monolithic debate as a solved problem ("microkernels win!") is as idiotic as suggesting that object orientation is the ideal solution to all programming problems.

          Tell that to Tanenbaum:

          From: (Andy Tanenbaum)
          Newsgroups: comp.os.minix
          Subject: LINUX is obsolete
          Date: 29 Jan 92 12:12:50 GMT

          ". . . While I could go into a long story here about the relative merits of the two designs, suffice it to say that among the people who actually design operating systems, the debate is essentially over. Microkernels have won . . ."

          Cited from here []
        • by Nurf (11774) * on Thursday May 20, 2004 @02:08PM (#9206556) Homepage
          Treating the micro v. monolithic debate as a solved problem ("microkernels win!") is as idiotic as suggesting that object orientation is the ideal solution to all programming problems.

          I'll agree with that. However, I can say that for the stuff I do, microkernels win. I've written a microkernel RTOS for an embedded system, and it had the following advantages for me:

          1) It was easy to write. (Very modular)
          2) It is easy to maintain. (Very modular, and because all interaction is done with messaging, you dont worry about the code you are doing now interacting in some unknown way with something else. ie. You can ignore the rest of the system except for the messages you send to it)
          3) It is easy to give it strong deterministic real-time response. This is a big thing for me in the applications I use it for. Data ends up flowing from one task to another, and I just have to make sure my scheduler doesn't mess that up.
          4) The overhead introduced by message passing was negligible (The RTOS was implemented to replace an existing system, and did so comfortably)
          5) Its really easy to make stable and reliable systems, because everything is chopped up into small well understood sections with well defined interaction between sections.

          Microkernels might have slightly lower throughput than monolithic macrokernels, but I am not running a batch transaction processor.

          For desktop use, I want controlled latency and reliability. I don't feel that Linux gives me all that it should in those departments, though I use it because it is better than most (with some patching). Kernel modules feel like the worst of both worlds to me.

          So, given the priorities I stated above, I think I would prefer a microkernel OS, all other things being equal. I'd jump ship from Linux to one, if most other things are equal.

          Other people will have other priorities and I encourage them to use whatever works for them.
      • QNX (Score:5, Interesting)

        by RAMMS+EIN (578166) on Thursday May 20, 2004 @10:25AM (#9203698) Homepage Journal
        Add QNX to that. It doesn't get much more microkernel than that, and I think no one would argue that QNX is slow.

        As for Darwin; it was certainly slow on my x86 laptop, but it's not lacking any speed on my iBook. I guess that says something about the quality of the x86 port (hint: there is no such thing).

        Poor Andy seems a bit too stuck in his "I am right and everyone who disagrees is wrong" attitude. I have a book here (Distributed Systems: Principles and Paradigms) in which he claims that a 20% performance loss is not so bad, in exchange for all the benefits a microkernel brings. I most sincerely think that is a ridiculous statement, but fortunately, it doesn't have to be that way. I believe microkernels need not incur any significant performance penalty at all.
        • Re:QNX (Score:4, Insightful)

          by photon317 (208409) on Thursday May 20, 2004 @12:06PM (#9204856)
          A 20% performance hit really doesn't matter. Look at the rate of speed increases in hardware. When new systems come out doubling performance at such a regular pace, a one-time 20% slowdown to switch to an otherwise superior architecture with other benefits is an easy pill to swallow.

          Of course, I don't think microkernels are a superior architecture to begin with, and I think that a blanket "20%" isn't a reasonable estimate of a real-world application's or database's performance loss, so I still disagree with Andy.

          I'm of the view that the differences between micro and monolithic kernels are really a question of where you place things in a semantic sense. Just as one can write OO code in C, one can write well-isolated modular code in a monolithic kernel. I'd rather have the burden of that modularization taken care of on the developer's end than at runtime.
          • Re:QNX (Score:5, Insightful)

            by Tony-A (29931) on Thursday May 20, 2004 @12:34PM (#9205243)
            A 20% performance hit really doesn't matter. Look at the rate of speed increases in hardware. When new systems come out doubling performance at such a regular pace, a one-time 20% slowdown to switch to an otherwise superior architecture with other benefits is an easy pill to swallow.

            Good theory. Practice seems to work out differently.

            Speed comparisons between products. Seems like 5% difference is enough to declare a clear winner. Unless you look behind the curtain.

            Speed increases in hardware? At work I have two computers. I am typing this on NT4 on a 400MHz Gateway. My "other" computer is XP on a 2.4GHz Dell. Other than some legacy dBase for DOS applications the "faster" computer isn't any faster. It does boot faster which means that the XP machine is booted a lot more often than the NT. A 20% performance hit would be the same 20% on both machines.

            The quoted improved performance doesn't quite translate into reality. A legitimate 10-times performance difference (IBM 1410 to IBM 370/135) translated into a 2-times difference in actual throughput. By the way, going the other direction won't work. As a rule of thumb, you will feel 90% of all slowdowns and only 10% of all speedups. This works in both directions, like the "uphill both ways" quip.

        • Re:QNX (Score:5, Informative)

          by TheRaven64 (641858) on Thursday May 20, 2004 @12:47PM (#9205415) Journal
          As for Darwin; it was certainly slow on my x86 laptop, but it's not lacking any speed on my iBook. I guess that says something about the quality of the x86 port (hint: there is no such thing).

          The performance hit of doing a context switch on PowerPC is about the same as doing a function call, which makes multithreading and things like microkernels very easy to do fast. On x86, the overhead is often an order of magnitude larger, making microkernels crawl.

      • OS X (Score:5, Interesting)

        by ink (4325) * on Thursday May 20, 2004 @10:32AM (#9203754) Homepage
        To call OS X a Mach system is a bit disingenuous. All I/O operations are handled by the "BSD subsystem" for performance reasons. This means that all file and network I/O (along with the file security descriptions) live in a "monolithic subsystem" of the uK. Needless to say, this is the most performance-intensive section of a UNIX (any?) system. A lot of the message passing is therefore avoided, along with the performance costs those message passes would incur. Take a look at this URL: OS X System Overview [] See that dotted line that stretches from the kernel to userland? Tanenbaum would not approve.
    • by kfg (145172) on Thursday May 20, 2004 @10:33AM (#9203762)
      Bitter much?

      Perhaps, just perhaps mind you, he is simply stating what, in his opinion, is true.

    • by Profane MuthaFucka (574406) <> on Thursday May 20, 2004 @11:53AM (#9204661) Homepage Journal
      Thirteen years ago, I didn't give a flying FUCK about how outdated Linus' monolithic kernel was. Remember, we were all running DOS, and desperate to break out of real mode hell.

      Arguing about monolithic versus microkernel was like arguing about whether a starving man's meal should be vegetarian or not.
  • by madprof (4723) on Thursday May 20, 2004 @09:59AM (#9203442)
    Poor old Ken Brown must be wondering how wise it was to have made that particular trip now!
    Curious that someone would spend all that cash and yet have done so little research. Smells of hidden agendas, or not-so-hidden agendas perhaps?
    The best part has to be: "But the code was his. The proof of this is that he messed the design up." :-)

  • by byolinux (535260) on Thursday May 20, 2004 @10:01AM (#9203459) Journal
    Torvalds Vs Tanenbaum []. I've never used MINIX, but I believe the source code is out there somewhere, although AFAIK, it's not free software.

    I've often wondered what things will be like when Hurd is ready, and we'll have GNU and GNU/Linux, and all those BSDs, and OS X all in usage.

    And then we'll probably still have to worry about making stuff look right in IE 6, because Microsoft takes forever to update it.
  • by xyote (598794) on Thursday May 20, 2004 @10:02AM (#9203473)
    Andrew Tanenbaum discovers slashdot effect. AdTI disputes it, citing that others discovered it first and that Tanenbaum just copied it.
  • Article text (Score:5, Informative)

    by Anonymous Coward on Thursday May 20, 2004 @10:04AM (#9203491)
    Some Notes on the "Who wrote Linux" Kerfuffle, Release 1.1

    The history of UNIX and its various children and grandchildren has been in the news recently as a result of a book from the Alexis de Tocqueville Institution []. Since I was involved in part of this history, I feel I have an obligation to set the record straight and correct some extremely serious errors. But first some background information.

    Ken Brown, President of the Alexis de Tocqueville Institution, contacted me in early March. He said he was writing a book on the history of UNIX and would like to interview me. Since I have written 15 books and have been involved in the history of UNIX in several ways, I said I was willing to help out. I have been interviewed by many people for many reasons over the years, and have been on Dutch and US TV and radio and in various newspapers and magazines, so I didn't think too much about it.

    Brown flew over to Amsterdam to interview me on 23 March 2004. Apparently I was the only reason for his coming to Europe. The interview got off to a shaky start, roughly paraphrased as follows:

    AST: "What's the Alexis de Tocqueville Institution?"
    KB: We do public policy work
    AST: A think tank, like the Rand Corporation?
    KB: Sort of
    AST: What does it do?
    KB: Issue reports and books
    AST: Who funds it?
    KB: We have multiple funding sources
    AST: Is SCO one of them? Is this about the SCO lawsuit?
    KB: We have multiple funding sources
    AST: Is Microsoft one of them?
    KB: We have multiple funding sources

    He was extremely evasive about why he was there and who was funding him. He just kept saying he was just writing a book about the history of UNIX. I asked him what he thought of Peter Salus' book, A Quarter Century of UNIX []. He'd never heard of it! I mean, if you are writing a book on the history of UNIX and flying 3000 miles to interview some guy about the subject, wouldn't it make sense to at least go to amazon.com and type "history unix" in the search box, in which case Salus' book is the first hit? For $28 (and free shipping if you play your cards right) you could learn an awful lot about the material and not get any jet lag. As I soon learned, Brown is not the sharpest knife in the drawer, but I was already suspicious. As a long-time author, I know it makes sense to at least be aware of what the competition is. He didn't bother.

    UNIX and Me

    I didn't think it odd that Brown would want to interview me about the history of UNIX. There are worse people to ask. In the late 1970s and early 1980s, I spent several summers in the UNIX group (Dept. 1127) at Bell Labs. I knew Ken Thompson, Dennis Ritchie, and the rest of the people involved in the development of UNIX. I have stayed at Rob Pike's house and Al Aho's house for extended periods of time. Dennis Ritchie, Steve Johnson, and Peter Weinberger, among others have stayed at my house in Amsterdam. Three of my Ph.D. students have worked in the UNIX group at Bell Labs and one of them is a permanent staff member now.

    Oddly enough, when I was at Bell Labs, my interest was not operating systems, although I had written one and published a paper about it (see "Software - Practice & Experience," vol. 2, pp. 109-119, 1973). My interest then was compilers, since I was the chief designer of the Amsterdam Compiler Kit (see Commun. of the ACM, vol. 26, pp. 654-660, Sept. 1983.). I spent some time there disc

  • Start with a premise, do little or no research, and declare conclusions. When the truth is pointed out, get indignant.

    Granted, I haven't read the book in question, but this was a very enlightening article. I especially loved the comment that insinuates that Linus could have done a better job if he HAD stolen the code, than he did.

  • by argoff (142580) on Thursday May 20, 2004 @10:06AM (#9203514)
    In old world media, who creates something of value is more important than what gets created. Hence there is often a lot of slander, lies, and outright fraud (and a lot of crappy media). In Hollywood, it's so bad it's practically institutionalized.

    I think the enemies of Linux are trying a similar strategy, based on the adages "if you kill the shepherd, the sheep will scatter" and "if you lie about something long enough or hard enough, people will believe it". They can't discredit Linux for technological or commercial reasons anymore, so their only option is to discredit Linus. With billions at stake, it could get nasty.
  • On Minix (Score:5, Informative)

    by Gumshoe (191490) on Thursday May 20, 2004 @10:10AM (#9203569) Journal
    On Minix:
    While [Minix is] not free software in the sense of "free beer" it was free software in the sense of "free speech" since all the source code was available for only slightly more than the manufacturing cost.
    That's not "free as in speech". IIRC, the licence prevented us from distributing changes to the OS in any form other than patch files. This was a major reason why people became interested in Linux -- no such restriction exists and it is therefore, truly "free as in speech".
  • Oh the irony. (Score:4, Interesting)

    by mumblestheclown (569987) on Thursday May 20, 2004 @10:12AM (#9203583)
    Of course Linus wrote linux.


    But who wrote the version of Basic that started bill gates on his path to riches?

    The answer of course is that every creative engineering endeavour builds upon what came before. The detractors will call the step that the developer in question took derivative, obvious, insignificant, or larcenous. The supporters will shine light upon the principal's ability to fuse diverse, unfocused, and/or unapplied parts into a cohesive whole.

    to mis-quote grandpa simpson, 'the fax machine isn't anything more than a waffle iron with (something or other that i forgot).'

    so, the question is really this: those of you who accuse (probably correctly) whoever is claiming that linus didn't write linux of spreading FUD, have you ever written a similar post smearing gates on basic? pot kettle?

    • Re:Oh the irony. (Score:4, Insightful)

      by Dave_bsr (520621) <> on Thursday May 20, 2004 @11:47AM (#9204558) Homepage Journal
      Actually... didn't MS get its start writing BASIC compilers??

      I'm the first to disagree with anyone who says that Bill Gates isn't a very smart guy. He's got programming skills and incredible business smarts. However, he didn't write DOS; it was a clone of CP/M, bought from another Seattle company and then resold to IBM. story [].

      You are right of course, that each engineering marvel builds on something previously. If all the tech in the world were removed, we might know how to build cars and houses and put together computer systems, but we'd never know how to put together a hammer with stones and sticks, or fashion steel and silicon out of iron and sand. It would be interesting, wouldn't it? We'd have to start out with that old tech, to get back to this new stuff.

      Anyways. idiots complain about Microsoft all the time, stupidly. But smart people do too - and there are plenty of good arguments against MS that don't require ignorance. The debate will continue. Hopefully Linux will get better, and the computing experience in general will get better. We'll see.
  • The plot thickens (Score:5, Interesting)

    by DreamerFi (78710) <john AT sinteur DOT com> on Thursday May 20, 2004 @10:13AM (#9203597) Homepage
    Take a look at this post [] on alt.os.development:


    I'm conducting some research on behalf of the Alexis de Tocqueville
    Institution in Washington, DC. I'd like if someone could shed some
    light on the following questions:

    1. Describe the components of an operating system, besides the central
    component, the kernel.
    2. What do programmers usually develop first, the compiler or the
    kernel?
    3. Does this sequence impact the OS at all?
    4. What's more complicated, the kernel or the compiler?
    5. Why does operating system development take as long as it does? What
    are the three key things in operating system development that take the
    longest to perfect?
    6. Do you need operating systems familiarity to write a kernel? Yes /
    no? Elaborate please.
    7. In your opinion, why aren't there more operating systems on the
    market?
    Thanks for your time. Best,
    Justin Orndorff

    • by Dr. Smeegee (41653) * on Thursday May 20, 2004 @10:33AM (#9203766) Homepage Journal

      1. Describe the components of an operating system, besides the central component, the kernel.

      The Klaspil, the Frammistat and the Perambulator (sometimes called the "Virtual McGuggehupphe Valve"). The Klaspil formats tuples for processing by the Frammistat; tuples are sorted, tagged and valued by the Perambulator.

      2. What do programmers usually develop first, the compiler or the kernel?

      Acne. Lots, usually.

      3. Does this sequence impact the OS at all?


      4. What's more complicated, the kernel or the compiler?


      5. Why does operating system development take as long as it does?

      Why is a duck?

      What are the three key things in operating system development that take the longest to perfect?

      Obsolescence, threading and nice icons.

      6. Do you need operating systems familiarity to write a kernel? Yes / no? Elaborate please.

      Yes. No.

      7. In your opinion, why aren't there more operating systems on the market?


    • by GeekDork (194851) on Thursday May 20, 2004 @10:47AM (#9203925)

      Great. In principle, this is the "please write an article so that I just have to put my name over it" strategy from the Dilbert book "The Way of the Weasel".

    • Malicious intent (Score:5, Interesting)

      by mst76 (629405) on Thursday May 20, 2004 @10:55AM (#9203990)
      After reading this, the Tanenbaum interview and this [], there is little doubt that Brown and the AdTI were determined in their slander campaign against Linux from the start. From the AST interview, it is clear that he is just fishing for incriminating quotes. It is well known that initial Linux development took place on (and was inspired by) Minix. With selective quoting, it's likely that he will have AST seemingly accusing Linus of stealing Minix. One of his more persuasive arguments to laymen will be that it took the highly experienced professor Tanenbaum years to develop Minix, while the kid Linus hacked his OS together in 6 months. Of course, he knows this is not a truthful representation, but that doesn't matter as long as it gets him headlines. We (and AST, it seems) may regard people like Brown and McBride as dumb and ignorant. But we should beware: these people are of a kind that we do not encounter often day-to-day, people with malicious intent.
    • by aug24 (38229) on Thursday May 20, 2004 @11:48AM (#9204584) Homepage

      Interesting...! I think I'll email PJ with this little lot!


    • by An Onerous Coward (222037) on Thursday May 20, 2004 @12:28PM (#9205144) Homepage
      A chance to be funny! Or possibly insightful. Either way, yay me!

      1. Describe the components of an operating system, besides the central
      component, the kernel.
      The components of the operating system are as follows: The file browser, the kernelized window manager, the web browser, the media player, and the gaping remote exploit. As can be seen by this feature list, Microsoft Windows is the only true operating system on the market today.

      2. What do programmers usually develop first, the compiler or the
      kernel?
      Neither can be developed without access to a text editor, so this is invariably written first. Unfortunately, once it is written, it needs to be compiled, and the compiler itself needs an operating system to run on. This "chicken and egg" problem wasn't solved until 2097, with the invention of time travel.

      [Seriously, this guy is wrong to assume that both have to be written in order to have a complete system. Theoretically, you could develop an x86 operating system entirely on an Apple Powerbook, and just copy the binaries over, so you don't need to develop a compiler to develop an operating system.]

      3. Does this sequence impact the OS at all?
      Yes. Writing the compiler first opens a gaping hole in the fabric of the universe, while writing the kernel first causes a plague of sabre-toothed cows. The trick is to write them both at the same time so that the cows are immediately sucked into the gaping hole.

      4. What's more complicated, the kernel or the compiler?
      "Complicated" means "something I know how to do." "Simple" means "something I don't know how to do, but I know the people who do and they're a bunch of nitwits so how hard can it be?" Given that criteria, I would have to say that both are braindead simple. Ask me again in a couple of years.

      5. Why does operating system development take as long as it does? What
      are the three key things in operating system development that take the
      longest to perfect?
      There are three rules that apply here. The first is Hofstadter's Law []: It always takes longer than you think, even if you've accounted for Hofstadter's Law.

      The second rule is the 90% rule: The first 90% of the project will take 90% of the time, and the last 10% will take the other 90% of the time.

      The last rule is called the "There's no way in hell we can add all the features the marketing department has already promised our customers, and they just added twenty more, and by the way three of them violate laws of physics" rule. Unfortunately, only the name of the rule has been passed down over the years, so nobody remembers what it was about.

      6. Do you need operating systems familiarity to write a kernel? Yes /
      no? Elaborate please.
      A basic familiarity with computers is helpful, but not strictly necessary. For example, when Dennis Ritchie wrote the compiler for the BCPL language, he didn't actually use a computer. He scrawled the whole thing on a ream of paper, and had his secretary transcribe it. Similarly, when Linus wrote the 0.1 kernel, he used a photocopier.

      7. In your opinion, why aren't there more operating systems on the market?
      Because it is not in the interests of the Freemasons to have more operating systems on the market. I can't say anything more about that in this forum, but it's absolutely true.

      [Geez. There's a difference between an "operating system" (which a decent grad student can whip out in a few months) and an operating system which can be marketed as competition to the OSes already on the market. Linux 1.0 was probably closer to the former than the latter.]
    • by Anonymous Coward on Thursday May 20, 2004 @12:47PM (#9205417)
      What do we know about Justin Orndorff

      1. On April 12, 2004, he asked about Linux ownership on usenet linux.kernel, using IP address (RCN dialup access range). He used as his originating email address.
      References : 1%4

      2. On April 28, 2004, he asked about obtaining older versions of Minix in a Minix related mailing-list. He used raison__d_etre@HOTMAIL.COM as his originating email address.
      References : 0404d&L=minix-l&F=&S=&P=59

      3. On May 6, 2004, he posted questions about O.S. development on usenet alt.os.development, using IP address (again, RCN dialup access range). He used as his originating email address.
      References : 050

      4. Later on May 6, 2004, the very same questions were asked in various web forums by someone using the nickname "jnana".
      References : 1 19&goto=nextoldest ?message=6907

      5. On May 18, 2004, he asked questions about corporate contributions to Linux on usenet linux.samba, using IP address (Verizon DSL access range). He used as his originating email address.
      References : 11%

      It is very likely that it is indeed only one person, as:
      - The topics are closely related.
      - The IP used when posting using the address (3) is on the same range as the one used when posting using the address (1).
      - The questions asked using the jnana nickname (4) are the same as the ones asked using the address (3).

      Obviously, we have someone here going by the names Justin Orndorff/Jnana/Raison__d_etre ("raison d'être" is French for "reason to exist" or "reason to be"), who seems very interested in Linux and other FOSS intellectual property issues.

      This person has at least two email addresses :

      What else ?

      6. He apparently has a page/blog on DeviantArt. He uses the same Justin O. and Jnana names. He says he's 22 and lives in CP (?), Maryland. He has apparently started a new job on March 1st.
      References :

      7. He seems to like movies, particularly the "poetic" genre.
      References : guides/-/A17FKWGJYHSEL2/103-1166439-6115864

      8. He trades VHS music tapes (particularly black metal) and has an email address at the University of Maryland, where he seems to be a student.
      References : ght.asp?ArtistID=3338
      http://www.tapetradernetwork.com/Framed_Trader_Detail.asp?ID=9856

      9. A while back he was looking for "Codreanu comp. ['Fidelis Legio']" (whatever that is, apparently more black metal).
      References : 02-February/010935.html

      10. Well, he actually seems to be an English student at U. of Maryland (College Park, CP again, see (6)), and interested in film production.
      References :

      OK, so maybe this is going a little fast, but we apparently have an:
      - English student in Maryland,
      - Interested in films
      - Using email addresses, and maybe (fake ?)
      - Working since March or April for the ADTI (A. de Tocqueville Institute)
  • Fighting features (Score:5, Insightful)

    by amightywind (691887) on Thursday May 20, 2004 @10:14AM (#9203605) Journal

    ..As I did 20 years ago, I still fervently believe that the only way to make software secure, reliable, and fast is to make it small. Fight Features.

    Credit Mr. Tanenbaum for sticking to his guns on the microkernel design. But the brilliance of Linus is that he realises you must first have features to fight!

    • by Tin Foil Hat (705308) on Thursday May 20, 2004 @10:47AM (#9203927)
      FWIW, I think he's probably right from a technical standpoint. In practice, however, the macrokernel has been far easier to work with. Witness the huge success that macrokernel designs have had over microkernel designs. Even Apple, whose original macs were microkernels, eventually switched over to a macrokernel because of the difficulties of updating the original one.

  • by Phekko (619272) on Thursday May 20, 2004 @10:14AM (#9203606)
    SCO wrote it. From scratch. Now cough up that $699!
  • Obligatory mirror (Score:4, Informative)

    by TaxSlave (23295) <> on Thursday May 20, 2004 @10:20AM (#9203648) Homepage Journal

    Here's a mirror [] of the article while it lasts.

  • In conclusion .. (Score:5, Insightful)

    by Macka (9388) on Thursday May 20, 2004 @10:23AM (#9203680)
    My conclusion is that Ken Brown doesn't have a clue what he is talking about. I also have grave questions about his methodology. After he talked to me, he prowled the university halls buttonholing random students and asking them questions. Not exactly primary sources.

    What more needs to be said!

  • by jmitchel! (254506) on Thursday May 20, 2004 @10:30AM (#9203739)
    AST lists several independently developed systems of equivalent complexity to Minix 1.0 / System 7. Here are a couple more I found:

    OMU (6809 processor, ported to 68000): roughly System 7, but single-user only; integrated shell.

    UZI (Z80 processor, ported to Z180, Z280): roughly System 7; multitasking.
  • Really A Secret ? (Score:5, Insightful)

    by polyp2000 (444682) on Thursday May 20, 2004 @10:33AM (#9203769) Homepage Journal
    I don't think Linus or anyone else has tried to conceal or hide the origins of Linux. Anyone who has taken more than a passing interest in the history of Linux knows that Linus got interested in kernel development while hacking Minix, which was supplied in source code form with an educational book called "Operating Systems: Design and Implementation". Rebel Code (Glyn Moody) is an excellent history of Linux and open source, and a great read. If people are interested in getting a good background, it's a great place to start.

    I think it's fair to say that, "shock horror!", Andy Tanenbaum probably "learned" how to write Minix from somewhere else too. In any case, in the initial phases of Linux, Linus did 99% of the work. And after the seed was planted... well, lots of people are now involved with writing Linux.

    It's the nature of the beast: almost all human achievements are adaptations of something that came before. It's called development, and it's incredibly difficult to come up with an idea that doesn't have its basis in something else.

    I challenge anyone to try and come up with an idea that doesn't have its origins in something else.

  • by mojoNYC (595906) on Thursday May 20, 2004 @10:39AM (#9203839) Homepage
    my first reaction to this attack was, 'who the f### is Ken Brown?' As they taught me back in school, 'always consider the source' -- if this guy's attacking Linux, he'd better have some solid credentials in the computer industry, right? Well, all it took was a Google search, and the first link I hit told me all I needed to know: "Anti-Open Source lobbyists need love, too" (Friday October 25, 2002, by Robin "Roblimo" Miller): "I felt bad for Ken Brown of the Alexis de Tocqueville Institution (AdTI) last week..." Thanks to Roblimo, we have a first-hand account of Ken Brown's shameless FUD-ing back in 2002 -- read the link for a cuttingly funny look at Mr. Brown's earlier efforts ;> You gotta love the 'thousands of eyeballs' that are working on our side -- it more than offsets what M$ gets from spreading its dirty money around... -DWitt ps. methinks Brown's IP has just gone down the tubes -- thank you very much Andy Tanenbaum!
  • by Dark Paladin (116525) * <jhummel&johnhummel,net> on Thursday May 20, 2004 @10:40AM (#9203866) Homepage
    Whatever anyone does - do not read Brown's book when it comes out.

    Thanks to Mr. Tanenbaum, we have the proof here:

    People can create operating systems on their own. Even UNIX-like operating systems. Linus learned from Mr. Tanenbaum. Linus wrote the first kernel, published it and asked for input, which the rest of the world provided.

    Linus then acted as a proper project manager, and the rest is history.

    So again, whatever people do - do not buy the book.

    Now, here's the problem: if we talk about this upcoming book, people will want to buy it. It's the Gibson Effect - the more it's denounced, the more people will want to read it, and next thing you know there will be lines of people at the bookstores claiming they can see Jesus's face on this book.

    So instead, I recommend to all intelligent folks in the programming community: ignore it. From here on out, don't even refer to the book by name, or its foundation, or the author. The more we pretend it doesn't exist and it's not important, the less interest people will have in it. If someone asks (such as a Pointy Haired Boss guy), shrug and lie as you say "No idea. I heard it was some book, but that it wasn't that good." And then shut up and leave it at that.

    Don't give these guys free advertising. Don't even give them an ounce of respect, they don't deserve it.
    • by Fnkmaster (89084) on Thursday May 20, 2004 @11:21AM (#9204271)
      From what I've observed there are lots of programming projects that are "hard" for companies and large organizations because they aren't so amenable to "early partition" as people would like, and yet are substantially easier for one developer (or two in some cases) who partition the problem using internal divide-and-conquer, thus completely understanding the partitioning scheme right off the bat rather than spending months hashing it out at meetings and miscued early development efforts.

      I had an experience where an entire development team of twenty people spent about a year and a half writing a large, grandiose enterprise software system that was supposed to be general-purpose and flexible, but in the end was a real performance turd at the job it was supposed to do. Using what I knew about the actual problem from looking at the previous solution's mistakes and the original problem statement, I rewrote the system from scratch over about 8 weeks at a client's site, averaging about 300 SLOCs a day (coding in Java where I can fly), with one developer helping me on a few specific tasks, and we ended up with a system that was functionally equivalent and about 50 times faster than the previous version because it stripped out all the unnecessary modularity (modularity is good, but if you split something that should be one component into ten, you just get lots of extra overhead) and message-passing that gunked up the original design.

      I can't help but think of the analogy between this project and the Linux/MINIX effort. My knowledge of the problem was informed by analyzing the earlier design, but not a line of code was actually derived from it. And the twenty-some-odd man year effort was replaced neatly by a 3-man-month effort that was superior.

      The moral of the story is that any of us who've been around in the software world long enough will tell you that most any system, assuming you lift all the crazy featurization constraints, can be written fairly rapidly by one person. And that usually you'll get a better result with an early working product and iterative functionality development than you will with a monolithic development effort, assuming you know the architectural parameters going into the effort by having been able to analyze previous efforts at solving a similar problem.

      So the point is... keep the assumptions in mind before you start estimating a project's size and scope, man-hour requirements and so on. Development of UNIX-like OSes was a well-defined, well-understood problem at the time Linus did his work. And don't go claiming that somebody accomplished an impossible task unless you have a REALLY good understanding of the software engineering process in general, and the particulars of the problem they solved - in this case Ken Brown has neither. We didn't really need Mr. Tanenbaum to tell us that, but it shows what a stand-up guy he is that he has made a clear effort to defend Linus despite any past arguments they may have had.

  • by Zocalo (252965) on Thursday May 20, 2004 @10:50AM (#9203951) Homepage
    Matt Loney of ZDNet UK is covering the story, including Andy Tanenbaum's two Euro-cents, here []. I don't think anyone at AdTI, least of all Ken Brown, is going to be living off royalties any time soon - "falls at the starting gate" indeed. ZD even mentions AdTI's ties to Microsoft, lest there be any doubt, which is nice of them. :)
  • by Progman3K (515744) on Thursday May 20, 2004 @10:53AM (#9203977)
    Thanks for setting the record straight.
    And especially for being brave enough to address the problem of political repression and scapegoating, albeit a little obliquely.
  • by Junks Jerzey (54586) on Thursday May 20, 2004 @10:59AM (#9204028)
    I know many in the Linux community like to paint Mr. Tanenbaum as a bitter lunatic, but this is a great article, one that every Linux user/zealot should read.

    First, he goes into the history of why people were souring on UNIX and the various independently-written UNIXalikes. These were mostly individual projects, which really sets the record straight for the people who seem to think that Linus was the first person to do this, and that Linus was somehow the only person intelligent and manly enough to write his own kernel.

    At the same time, he lays out the history of UNIX clones, of which Linux was definitely one. It's surprising to me how many people seem to think of Linux as a great, independent OS, and fight so hard to deny that it has roots in UNIX. Of course these people are mostly young and don't know much about computer history. In that respect, this is an educational article.

    And, yes, he does talk about the micro vs. monolithic kernel issue, but he does so without fanaticism, and, you know, what he says is generally correct. He's all for small and reliable software, which is something that UNIX was originally but rapidly became the antithesis of. Performance issues, back when people were using 4.77 and 8 MHz desktop processors, well, let's just say that things were different then. Now you have people writing big applications in Python. The real reason Linux ended up with a monolithic kernel is because that's what Linus understood and it was easier for him to write that way.
  • by Derek (1525) on Thursday May 20, 2004 @11:00AM (#9204034) Journal
    What does this quote reference?

    "Some of you may find it odd that I am defending Linus here. After all, he and I had a fairly public "debate" some years back. My primary concern here is trying to get the truth out and not blame everything on some teenage girl from the back hills of West Virginia."

    Just curious...
  • Summary: (Score:4, Insightful)

    by nanojath (265940) on Thursday May 20, 2004 @11:13AM (#9204167) Homepage Journal
    From the AdTI website: " AdTI's Kenneth Brown reviews the origins and development of Linux -- in light of repeated expressions of contempt for intellectual property rights by Torvalds and some (but by no means all) open source programmers." (emphasis added).

    I don't have to read the book - particularly after reading Tanenbaum's very convincing presentation that the author doesn't know dick about intellectual property. Not to mention he was basically lying about writing a book on the history of Unix, when it is clear his notion from the get-go was to write a book titled "SCO is telling the truth about Linux - Really!" The quote above makes it clear that rather than bother to, oh, find some actual copyrighted code in Linux that was stolen, he is arguing the stellar logic that:
    1) Open source advocates are contemptuous of copyright law
    2) People who are contemptuous of copyright law are plagiarists
    3) The people who wrote Linux are open source advocates...

    Remind me what de Tocqueville said about weaselly corporate shills again?

    A quote from the Yahoo article -

    "The report," according to Gregory Fossedal, a Tocqueville senior fellow, "raises important questions that all developers and users of open source code must face. While you cannot group all open source programmers and programs together; many are rigorous and respectful of the intellectual property rights, while others speak of intellectual property rights with open contempt." -Emphasis, again, added.

    Get it? Failing evidence for any credible claim that they are actually TREATING intellectual property rights with contempt, they will note that they "speak" of them with open contempt... as if there were something wrong with that.

  • by Anonymous Coward on Thursday May 20, 2004 @11:25AM (#9204313)
    Writing a simple OS kernel is easy. I wrote one, and believe me, it wasn't that hard. At the university where I am a grad student, we require the undergrads to write most of an OS kernel (including virtual memory and a filesystem) in a one-semester course. I read alt.os.development regularly, and there are high school students who are writing OS kernels. (I'm often depressed by the fact that they are much better programmers than I am :-)

    Writing a scalable, production-quality OS kernel is another matter entirely. That takes hundreds or thousands of person-years by talented programmers.

    Ken Brown is obviously a complete shithead if he doesn't understand this distinction. AST's rebuttal made the facts of the matter abundantly clear, and I'm sure any competent OS developer he asked would have told him the same thing.
  • by maroberts (15852) on Thursday May 20, 2004 @11:38AM (#9204464) Homepage Journal
    Quote from Tanenbaum:
    That's when I discovered that (1) he had never heard of the patent, (2) did not know what it meant to dedicate a patent (i.e., put it in the public domain), and (3) really did not know a thing about intellectual property law. He was confused about patents, copyrights, and trademarks.

    Do you not find it strange that the President of an organisation involved in arguments about patents, copyrights and trademarks should be so ignorant of patents, copyrights and trademarks?
  • Mirror (Score:5, Informative)

    by Door-opening Fascist (534466) <> on Thursday May 20, 2004 @11:54AM (#9204671) Homepage
    Looks like the server is bogged down. Here's a couple mirrors:

    Mirror #1 []

    Mirror #2 []

  • Tanenbaum ROCKS!!! (Score:4, Insightful)

    by ToasterTester (95180) on Thursday May 20, 2004 @12:40PM (#9205318)
    I think Andrew lays things out quite nicely in this article. So hopefully Brown will crawl back under the rock he came from.

    Also Tanenbaum made one of my favorite comments, something like this...

    The wonderful thing about standards is that there are so many to choose from.

    Tanenbaum is one of the best minds in Computer Science.
  • by belmolis (702863) <> on Thursday May 20, 2004 @02:47PM (#9207120) Homepage

    The nonsense coming out of AdTI, together with Andrew Tanenbaum's description of his interview, makes me wonder whether the speculation that Microsoft is behind this is really correct. Microsoft has tons of money and some fairly smart people, even in management. I find it hard to believe that they couldn't do a better job than this. Even if they need a putatively independent institution as a front, they could write the material themselves. They could even have their chosen institution hire somebody halfway competent for the project.

    I wonder if this is perhaps just somebody trying to make a name for himself and/or bring in money for himself or his institute rather than something directly arranged by Microsoft.

  • by andy-at-vu (781800) on Friday May 21, 2004 @02:11PM (#9218261)
    After seeing all the responses yesterday, I think I have a better idea of Ken Brown's motivation in coming to see me and also his motivation in writing the book. If you are curious, take a look at

    Andy Tanenbaum

"The greatest warriors are the ones who fight for peace." -- Holly Near