
Prof. Andy Tanenbaum Retires From Vrije University

When Linus Torvalds first announced his new operating system project ("just a hobby, won't be big and professional like gnu"), he aimed the announcement at users of Minix for a good reason: Minix (you can download the latest from the Minix home page) was the kind of OS that tinkerers could afford to look at, and it was intended as an educational tool. Minix's creator, Professor Andrew Stuart "Andy" Tanenbaum, described his academic-oriented microkernel OS as a hobby, too, in the now-famous online discussion with Linus and others. New submitter Thijssss (655388) writes with word that Tanenbaum, whose educational endeavors led indirectly to the birth of Linux, is finally retiring. "He has been at the Vrije Universiteit for 43 years, but everything must eventually end."

  • by halivar ( 535827 ) <`moc.liamg' `ta' `reglefb'> on Thursday July 10, 2014 @10:55AM (#47424247)

    "Microkernels are still better, you little punk!" With an engraving of a shaking fist.

    • by MightyMartian ( 840721 ) on Thursday July 10, 2014 @11:17AM (#47424411) Journal

      I really miss the good old days when technical debates were over the merits and faults of such simple things as different kinds of kernels, and not about whether or not every single thing you do online is being stacked into half a dozen nations' permanent data storage facilities.

      The Linus vs. Tanenbaum dustup is from a simpler, more positive age.

    • by itzly ( 3699663 )
      We can still hope he'll come to his senses before that.
    • by Z00L00K ( 682162 )

      Both architectures have their merits. I wouldn't say that one is better than the other.

      At least Minix has come a long way since the late '80s, when any crash after an uptime of more than 2 hours could be attributed to the OS instead of the application.

    • When Torvalds said that "microkernels are like masturbation: it feels good, but you don't get anything done," I wonder if he was right. The folks at HURD are still kinda... shagging with their kernel.
      • by jbolden ( 176878 )

        I'd say HURD is just a failed product; I wouldn't read too much into it. QNX has been out for a long time with a terrific microkernel that offers real advantages. It's about to become too dated, as there isn't going to be the funding to make the move to 64-bit, but... Moreover, arguably the virtualization OSes are essentially microkernels: a virtualization system acts as a mini kernel, and multiple non-monolithic kernels operate on top of it and then branch out. Virtualization hasn't exactly been a failure.

        • Virtualization OSs are not microkernels. Microkernels require the bulk of everything to go into userland, such as device drivers. In Hypervisors, OTOH, device drivers and a whole lot of other common resources that OSs use are packed in the kernel.
          • by jbolden ( 176878 )

            I understand. But they are analogous. In a hypervisor, the real kernel doing most of the work is effectively out in userland, while a tiny "kernel" underneath runs only a small part of it. Moreover, the device drivers are split.

            App X uses device Y, but no real device driver is involved: X uses a virtual device driver, and the virtual device driver then talks to a real device. Which is pretty close to the microkernel design.
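
            A toy way to picture the "driver as a userland server" idea: the application never touches the device itself, it just sends a message to a driver process and waits for the reply. The sketch below is plain Python with made-up names (disk_driver, app_read); it is not MINIX or QNX code, just the message-passing shape of the design:

            # Toy microkernel-style message passing: the "driver" is an ordinary
            # userland server (a thread here) and the app talks to it by sending
            # request messages, never by calling driver code inside the kernel.
            import queue, threading

            requests = queue.Queue()          # messages from apps to the driver server

            def disk_driver():
                # Userland driver server: loop forever, answering read requests.
                while True:
                    msg = requests.get()
                    if msg is None:                       # shutdown message
                        break
                    block, reply_q = msg
                    reply_q.put("contents of block %d" % block)   # pretend device I/O

            def app_read(block):
                # The "system call" is just send-message / wait-for-reply.
                reply_q = queue.Queue()
                requests.put((block, reply_q))
                return reply_q.get()

            driver = threading.Thread(target=disk_driver)
            driver.start()
            print(app_read(7))            # -> contents of block 7
            requests.put(None)            # tell the driver server to stop
            driver.join()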

  • "Vrije University"? (Score:4, Informative)

    by RGuns ( 904742 ) on Thursday July 10, 2014 @11:02AM (#47424277)
    "Vrije" is a Dutch adjective, meaning "free". So either you write "vrije Universiteit", or you write "Free University". "Vrije University" is just silly.
  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Thursday July 10, 2014 @11:02AM (#47424285)

    "A multithreaded file system is only a performance hack. When there is only one job active, the normal case on a small PC, it buys you nothing and adds complexity to the code. On machines fast enough to support multiple users, you probably have enough buffer cache to insure a hit cache hit rate, in which case multithreading also buys you nothing." - Andy Tanenbaum on the "LINUX is obsolete" Thread from 30 Jan '92

    Nice to see a so-called "expert" so far off. Seriously, not the first CS professor to be completely backwards. I've met a few of those too. :-)

    • by Anonymous Coward
      What do you mean? Seems like a sensible comment for the time.
      • by Alioth ( 221270 )

        It wasn't a sensible comment for anyone who could see the writing on the wall. If you weren't in academentia, you could see that it wouldn't be long before personal computers would have more than one process active at a time.

        • Re: (Score:1, Interesting)

          by Anonymous Coward
          Even today there is usually one process grabbing most of the CPU time, and even worse, it's additionally twiddling its thumbs for long periods as it waits for the slow mechanical HDD. Provided with a nice large buffer, in most scenarios Andy's single-threaded file system access would still serve single-user desktop machines quite well.
          • Even today there is usually one process grabbing most of the CPU time

            Yeah, the antivirus.

            in most scenarios Andy's single-threaded file system access would still serve single-user desktop machines quite well.

            Is a single-threaded file system still practical on multi-spindle PCs? These include machines with a boot SSD and a data HDD, or a boot HDD and an optical drive, or a boot HDD and an external USB SSD used for sneakernetting files too big for the available Internet connection. And by "desktop" did you mean to exclude laptops?

        • by Anonymous Coward

          It wasn't a sensible comment for anyone who could see the writing on the wall. If you weren't in academentia, you could see that it wouldn't be long before personal computers would have more than one process active at a time.

          (Even at the time, it wasn't exactly the future. The Amiga had minor market share, but it was not obscure.)

          I think it's irrelevant, though. I still (in 2014!) think of filesystems as being primarily I/O bound. For the most part, filesystems still just wait for things and don't take a lot of CPU time.

      • by jedidiah ( 1196 )

        No. It was not a "sensible" comment for the time. Anyone with a lick of sense could see where the tech was going and could easily realize that you had to plan for the future.

        PCs of the time were stuck in the kind of situation that Tanenbaum described not because of any inherent technical limitation but because Microsoft was a lame monopolistic sandbagger holding back the entire industry.

        Even in 1992 there wasn't that much of a gap between the capabilities of proprietary Unix hardware and PCs. Some Unix mac

        At the time, the closest the DOS world had to multitasking was TSRs. Beside my first PC sat my CoCo 3 running OS-9 Level 2 with 512K of RAM, a true preemptive multitasking kernel running on an 8-bit 6809 CPU. Microsoft's dominance at the time meant that in many ways the most common 16-bit operating system in the world was only marginally better than a CP/M machine from 1980.

      • by Anonymous Coward

        It was ridiculous. Personal computers have these things called interrupts. They allow the system to do what appears to be many things at once. Waiting for your tape to write shouldn't lock the keyboard, likewise with updating the screen. This goes back to the 1970s, something a professor should have understood. But alas, these people were stuck in their ancient thoughts even back then. Most of my CS hardware lectures were based around paper tape and punched cards, even though magnetic media was by then very common.

    • by Megol ( 3135005 ) on Thursday July 10, 2014 @11:47AM (#47424621)

      In 1992, on a small PC, this was true. That doesn't mean it was true for a multi-user design. Remember that for a "small PC" the disk interface was ATA without (usable) DMA support and none of the later features that lower CPU overhead. This means that a good buffer cache implementation would provide better performance not only in throughput and latency of disk accesses but also in fewer wasted CPU cycles, roughly in proportion to the size of the cache.

      But using ATA also means that a multithreaded filesystem is unlikely to be much faster than a single-threaded one, because the bottleneck stays the same and also loads the CPU.

      A bigger PC intended as a workstation (yes, there were some) would probably use SCSI instead of ATA, and then the situation is a bit different. But the original quote is still true IMHO.

      • As is so often the case, context is quite relevant. His statement is ridiculous today, but when it was made, quite reasonable.
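
      Tanenbaum's buffer-cache point is easy to poke at with a toy simulation. The sketch below uses an LRU cache and a made-up workload (an 800-block "hot" set that gets 90% of accesses; none of these numbers come from the thread), just to illustrate that once the cache covers the working set, hardly any requests reach the disk, which is exactly the case where a multithreaded file system has little left to overlap:

      import random
      from collections import OrderedDict

      # Toy LRU buffer cache fed by a skewed workload: 90% of accesses hit a
      # small "hot" set of blocks, the rest wander over a large "cold" range.
      # All numbers are made up for illustration.
      def hit_rate(cache_blocks, accesses=100_000, hot=800, cold=50_000):
          cache = OrderedDict()          # block number -> None, kept in LRU order
          hits = 0
          random.seed(1)                 # deterministic run
          for _ in range(accesses):
              if random.random() < 0.9:
                  block = random.randrange(hot)
              else:
                  block = hot + random.randrange(cold)
              if block in cache:
                  hits += 1
                  cache.move_to_end(block)
              else:
                  cache[block] = None
                  if len(cache) > cache_blocks:
                      cache.popitem(last=False)   # evict the least recently used block
          return hits / accesses

      for size in (64, 256, 1024, 4096):
          print("%5d-block cache: %.1f%% hit rate" % (size, 100 * hit_rate(size)))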

    • by LWATCDR ( 28044 )

      Except he was right in 1992.
      He just underestimated the growth in speed and power of a PC. On a 386 with 4 megs of memory and a single slow hard drive he was right.

      • by itzly ( 3699663 )
      No, in 1992 we still used floppies. Also, since in Unix everything is a file, the file system also comes into play when accessing device nodes. Without multithreading, a hard disk access could be stuck in the queue behind a floppy or terminal access. This is simply unacceptable.
        • Who are you calling we, kemosabe?

          I had a 105 meg external drive by 1992 (I think it might have been easily 1-2 years before that). Actually, I had an 80 meg external before that (put into a case).. the 105 was a prebuilt "external" drive.
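
        itzly's example of a hard-disk request stuck behind a floppy access is easy to put numbers on. The toy model below uses made-up latencies (500 ms for a floppy request, 20 ms for a disk request; nothing here is a measurement) and compares a single shared request queue against one queue per device:

        # Toy model of head-of-line blocking in a single-threaded file system:
        # every request waits for everything queued before it, so a slow floppy
        # access delays the hard disk requests behind it. Latencies are made up.
        requests = [("floppy", 500), ("disk", 20), ("disk", 20)]   # (device, ms)

        # Single shared queue (single-threaded FS).
        t, shared = 0, {}
        for dev, ms in requests:
            t += ms
            shared.setdefault(dev, []).append(t)       # completion time, ms

        # One queue per device (multithreaded FS): requests only wait behind
        # requests for the same device.
        busy, per_device = {}, {}
        for dev, ms in requests:
            busy[dev] = busy.get(dev, 0) + ms
            per_device.setdefault(dev, []).append(busy[dev])

        print("shared queue, disk requests done at:", shared["disk"])           # [520, 540]
        print("per-device queues, disk requests done at:", per_device["disk"])  # [20, 40]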

      • He might as well have said '640k should be enough for anyone.'
    • In reality, both were correct, in their own way. You have to remember the hardware they were using in 1992. It's NOT what you have today. Not by a long shot.

  • I don't need no stinkin' OS.

    • Overrated? You do know excellent karma gives your posts a score of two by default, Mr. Moderator?

      If anything, it should have been modded offtopic.

  • by sg_oneill ( 159032 ) on Thursday July 10, 2014 @11:06AM (#47424321)

    A lot of people have the wrong impression about the good professor after the infamous exchange, but they miss that this is what academics do, and despite the flameyness of the exchange, Linus and Tanenbaum had a great deal of respect for each other. After all, Linus was, for all purposes, Tanenbaum's greatest student. I remember borrowing his book from UWA and getting the disks from the UWA computer club, following the instructions to get a functional Minix up, then following his book to write a driver for my highly bugshit WANG (yes, that was the brand name, lol) hard drive controller. I learned more from that about how computers *really* work than from almost anything else I've ever learned. The difficulty of his book was notorious; probably the only books I found harder were Walter Piston's music theory book "Harmony" and Deleuze's philosophy text "Capitalism and Schizophrenia". And like those books, in its field Tanenbaum's work shook the foundations of academia.

    Enjoy your retirement, old man, you deserve it.

    • by MightyMartian ( 840721 ) on Thursday July 10, 2014 @11:22AM (#47424441) Journal

      Minix was really the first of its kind: a Unix-like OS that you could run on cheap (relatively speaking at the time) commodity hardware and that you could get the source code for. A lot of the computing we take for granted now comes from Tanenbaum's work.

      My first Minix install was on a 386-SX with a whopping 4 MB of RAM that I borrowed from work back in the early 1990s. I quickly abandoned Minix for Linux once it came out, but for several years I had Minix running on an old 386 laptop just for fun.

      • by sg_oneill ( 159032 ) on Thursday July 10, 2014 @11:55AM (#47424685)

        Part of the reason I used Minix was that I had an old second-hand 286, because I couldn't afford one of the new-fangled 386s. Computers were bloody expensive back then! At the time I had started using a local BBS called "Omen", which had just gotten a brand spanking new ISDN connection to this new thing called "ARPAnet" (aka "Australian research something something net"), aka the Australian wing of the internet, and it had two amazing features: 1) IRC, 2) Usenet (there was also Gopher, but eh... Usenet was better indexed and also had hilarious flame wars). Anyway, it struck me that if I had a Unix I could get a SLIP connection to the internet and run IRC *and* Usenet simultaneously using the magical wonder of multitasking. Omen was using Linux (very, very brand new), but since I didn't have a 386 I couldn't use it. So I grabbed Minix, since I couldn't afford Xenix or SCO Unix (pre SCO getting bought out by Caldera and then turning Cthulhu, it was a great company).

        Problem is, Minix didn't have a network stack :(

        • by Anonymous Coward

          ... this new thing called "ARPAnet" (aka "Australian research something something net") , aka the australian wing of the internet ...

          ARPANET [wikipedia.org] is the Advanced Research Projects Agency Network.

      • by McGruber ( 1417641 ) on Thursday July 10, 2014 @01:03PM (#47425167)

        Minix was really the first of its kind: a Unix-like OS that you could run on cheap (relatively speaking at the time) commodity hardware and that you could get the source code for. A lot of the computing we take for granted now comes from Tanenbaum's work.

        Truly!

        I first learned of Minix by reading about it in Byte magazine. At the time, I was an undergrad at a big US university, a member of the Association of American Universities [aau.edu]. The only multitasking computers on the entire campus were a Unix mainframe, a VAX, and a cluster (lab) of Sun workstations that only graduate engineering students could have accounts on. The Unix and VAX machines could be accessed using VT-100 (and later) terminals in computer labs spread out all over the campus. There were also BYOF (Bring Your Own Floppies) computer labs filled with DOS (pre-Windows) PCs, and a few labs filled with early Macs, but those labs were mostly used by humanities majors hunting-and-pecking their term papers out.

        Booting a multitasking Unix-like OS on a personal computer was a huge deal back then.

    • His books educated more than one generation of programmers and computer scientists. I can remember the decision, while I was a grad student, to part with a sizeable fraction of my net worth to buy my first Tanenbaum textbook. No regrets.
  • What I remember (Score:2, Interesting)

    by Anonymous Coward

    More than Minix, I remember Tanenbaum for his "Computer Networks" textbook. Especially this:

    "The nice thing about standards is that you have so many to choose from; furthermore, if you do not like any of them, you can just wait for next year's model."

    • What I remember (Score:5, Interesting)

      by Kohenkatz ( 1166461 ) on Thursday July 10, 2014 @11:52AM (#47424655) Journal

      I'm sorry, but the best quote from that book is actually this one:

      Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.

      In my networks class, we extended the calculation to a 747 full of DVDs (the best we could do at the time). Maybe one of these days, if I have a minute, I'll go back and do an A380 full of flash drives.

      • by swb ( 14022 )

        Maybe an An-225 full of 6 TB hard disks?

        • ... 6 TB hard disks?

          6 TB hard disks are going to be significantly less efficient. 6 TB of 64 GB MicroSD cards would be 96 cards, which would take up 7.92 cm^3. A 6 TB hard drive is huge by comparison, close to 400 cm^3 (though the actual number varies by drive manufacturer).

      • by Anonymous Coward

        http://www.dansdata.com/gz105.htm

        • OK. My curiosity got the better of me, so here it is.

          First, using his number of 82.5 cubic millimeters for the volume of a MicroSD card, and Wikipedia's 1,134 cubic meters for the cargo volume of an A380 (in freight configuration), I get 13,745,454,545 cards. Using his 20% density reduction, I'll bring that down to 10,996,363,636. 128 GB MicroSD cards exist, but they aren't mainstream yet, so let's go with 64 GB. The total data capacity of the plane is therefore 610.4 EiB (exbibytes), which Wolfram Alpha help
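
          Redoing that arithmetic as a quick script, with the same inputs as above (82.5 mm^3 per card, 1,134 m^3 of cargo volume, the 20% packing loss, 64 GB cards); the 15-hour flight time used for the bandwidth line at the end is my own added assumption, not a figure from the parent:

          # Re-run of the A380-full-of-microSD arithmetic from the comment above.
          # Inputs from that comment: card volume, A380 freighter cargo volume,
          # 20% packing loss, 64 GB per card. The 15 h flight is an added assumption.
          CARD_MM3   = 82.5                   # volume of one MicroSD card, mm^3
          CARGO_MM3  = 1134 * 1000**3         # 1,134 m^3 of cargo space, in mm^3
          PACKING    = 0.80                   # keep 80% after the 20% density reduction
          CARD_BYTES = 64 * 10**9             # 64 GB card (decimal gigabytes)

          cards = int(CARGO_MM3 / CARD_MM3 * PACKING)
          total_bytes = cards * CARD_BYTES
          print("cards:", cards)                                # ~10,996,363,636
          print("capacity: %.1f EiB" % (total_bytes / 2**60))   # ~610.4 EiB

          FLIGHT_S = 15 * 3600                                  # assumed 15 h long-haul
          print("bandwidth: %.1f PB/s" % (total_bytes / FLIGHT_S / 10**15))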

      • by tuxicle ( 996538 )
        Yes, but the TSA will want to inspect each of those flash drives, so the effective bandwidth would be quite low. At least in the US. Maybe that's why we have such atrocious broadband connectivity.
  • by De Lemming ( 227104 ) on Thursday July 10, 2014 @11:16AM (#47424395) Homepage

    "Vrije University" in the title sounds realy strange to me, as a native Dutch speaker. Vrije isn't a city, "Vrije Universiteit" means "Free University," which indicates it's not linked to e.g. the Catholic church. Just FYI.

    • by Anonymous Coward

      It doesn't indicate that it's not linked to the Catholic Church, but that it's not linked to the government.

      The Vrije Universiteit was/is the university of the Gereformeerde Kerk (Reformed Church), which split off from the Dutch Reformed Church. The Dutch Reformed Church was semi-controlled by the government and too free/liberal for the tastes of the Gereformeerden.

    • by Rashdot ( 845549 )

      The word "Free" in its name is a bit misleading. The university was founded by "a group of orthodox-Protestant Christians" (English Wikipedia page):

      http://en.wikipedia.org/wiki/VU_University_Amsterdam [wikipedia.org]

  • by Chrisq ( 894406 ) on Thursday July 10, 2014 @11:20AM (#47424429)
    Does this mean the death of Minix3? That would be a shame; I'd like to have seen a good open-source microkernel OS - a sort of "open source OSX".
    • by Anonymous Coward on Thursday July 10, 2014 @11:31AM (#47424497)

      Despite Prof. Tanenbaum's retirement, the MINIX 3 project will continue as a volunteer-based open-source project. A major new release will be out in the fall and will include support for ARM processors and the BeagleBone boards. Check the website periodically for the announcement.

    • by plasticsquirrel ( 637166 ) on Thursday July 10, 2014 @12:03PM (#47424729)
      Minix 3 will probably keep going as an open-source project, and maybe he will be even more involved?

      I feel it necessary to point out, though, that OS X is not a microkernel system comparable to Minix. OS X is largely monolithic, so if one part of the core system crashes, the whole system crashes. Minix 3 is far more ambitious because everything that is not in the (truly tiny) microkernel runs as a separate server process. For example, drivers are running in their own process, so if a driver crashes, the rest of the system can continue running.

      To manage the system, Minix has a so-called "reincarnation server" that restarts core system daemons if they go down unexpectedly. It's totally modular and redundant -- far more ambitious and advanced in its design than Linux or OS X. Minix is designed from the beginning to never go down. There is nothing else like that in the Unix world.
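
      The "reincarnation" idea is basically a supervisor that watches its children and respawns whatever dies. A toy sketch of that pattern (plain Python, with a made-up stand-in "driver" process; this only illustrates the restart-on-crash loop, it is not MINIX's actual reincarnation server):

      import subprocess, time

      # Toy version of the restart-on-crash idea behind MINIX 3's reincarnation
      # server: spawn each service, watch it, and respawn it whenever it exits.
      # The command below is a stand-in "driver", not real MINIX code.
      SERVICES = {
          "fake_disk_driver": ["python3", "-c", "import time; time.sleep(5)"],
      }

      procs = {name: subprocess.Popen(cmd) for name, cmd in SERVICES.items()}

      while True:                                     # runs until interrupted
          for name, proc in procs.items():
              if proc.poll() is not None:             # the service died
                  print(name, "exited with", proc.returncode, "- reincarnating it")
                  procs[name] = subprocess.Popen(SERVICES[name])
          time.sleep(1)                               # poll once a second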

      This talk by Tanenbaum describes the Minix 3 design in much greater detail:

      Youtube: MINIX 3: a Modular, Self-Healing POSIX-compatible Operating System [youtube.com]
      • I feel it necessary to point out, though, that OS X is not a microkernel system comparable to Minix

        While this is true, it's worth noting that a lot of the compartmentalisation and sandboxing ideas that most of the userland programs on OS X employ (either directly or via standard APIs) have roots in microkernel research. OS X is in the somewhat odd situation of having userspace processes that are a lot more like multiserver microkernels than its kernel...

        • Userspace processes that also function as servers for the microkernel that do most of their heavy lifting with the monolithic BSD skin graft. It's bizarre, ugly, and a nightmare to work with.
      • To manage the system, Minix has a so-called "reincarnation server" that restarts core system daemons if they go down unexpectedly. It's totally modular and redundant -- far more ambitious and advanced in its design than Linux or OS X. Minix is designed from the beginning to never go down. There is nothing else like that in the Unix world.

        QNX?

    • by drerwk ( 695572 )

      Does this mean the death of Minix3? That would be a shame; I'd like to have seen a good open-source microkernel OS - a sort of "open source OSX".

      http://en.wikipedia.org/wiki/D... [wikipedia.org] is Open Source OSX. http://www.opensource.apple.co... [apple.com]

    • OSX does have some portions of it with a microkernel architecture, complete with userspace servers, but the vast majority of it deals with talking back and forth to the massive monolithic BSD tumor grafted onto the side of Mach. I guess you could technically call it a "hybrid", but for most intents and purposes, it's a monolithic kernel with some microkernel primitives that are bloody awkward to use.

      Of all the kernel work I've done, there's nothing more vile than working with Darwin, while Minix, complete
    • Minix3 is a BSDL OS. So even if AST stops working on it, others can take the code as it is, build it further, fork it, or do whatever.
  • by MAXOMENOS ( 9802 ) <mike&mikesmithfororegon,com> on Thursday July 10, 2014 @11:29AM (#47424483) Homepage
    So...Dr. Tanenbaum's other project is Electoral-vote.com [electoral-vote.com] (2 [wikipedia.org]), an election prediction site (and one of the first). Any clue what's going to happen to that?
  • A great writer (Score:2, Interesting)

    by Anonymous Coward

    I own both "Operating Systems: Design and Implementation" and "Distributed Operating Systems". When I saw the retirement announcement, I cracked them open for the first time in many years to recall how much I learned from them.

    But my favorite of Tanenbaum's works is "Structured Computer Organization". I suppose it may be a bit dated, but I still recommend it to anyone who wants to know how computers work.

    • I found Modern Operating Systems better than the Minix book. The Minix book tells you exactly how a toy OS works in detail. Kirk McKusick's Design and Implementation of the FreeBSD OS (new version due out in a month or two) tells you how a real modern OS works in detail. Modern Operating Systems gives you a high-level overview of how modern operating systems work and how they should work. If you want to learn about operating systems, I'd recommend reading the FreeBSD D&I book and Tanenbaum's Modern Operating Systems.
  • Minix on Atari ST (Score:4, Interesting)

    by sbaker ( 47485 ) * on Thursday July 10, 2014 @12:01PM (#47424723) Homepage

    I ran Minix for a year or more on my Atari ST - having a UNIX-like operating system on a machine I could have at home was a truly awesome thing. Tanenbaum's work is fascinating, useful and will be around for a good while...which is more or less the definition of "successful" in academic circles.

    The debates with Linus were interesting - but I always felt that they were arguing at cross-purposes. Linus wanted a quick implementation of something indistinguishable from "real UNIX" - Tanenbaum wanted something beautiful and elegant. Both got what they wanted - there was (and continues to be) no reason why they can't both continue to exist and be useful.

    Tanenbaum's statement that the computer would mostly be running one program at a time was clearly unreasonable for a PC - but think about phones or embedded controllers like BeagleBone and Raspberry Pi? Perhaps Minix is a better solution in those kinds of applications?

    • by hansot ( 2960597 )

      I ran Minix for a year or more on my Atari ST - having a UNIX-like operating system on a machine I could have at home was a truly awesome thing. Tanenbaum's work is fascinating, useful and will be around for a good while...which is more or less the definition of "successful" in academic circles.

      The debates with Linus were interesting - but I always felt that they were arguing at cross-purposes. Linus wanted a quick implementation of something indistinguishable from "real UNIX" - Tanenbaum wanted something beautiful and elegant. Both got what they wanted - there was (and continues to be) no reason why they can't both continue to exist and be useful.

      Tanenbaum's statement that the computer would mostly be running one program at a time was clearly unreasonable for a PC - but think about phones or embedded controllers like BeagleBone and Raspberry Pi? Perhaps Minix is a better solution in those kinds of applications?

      I'm still "using" Minix (currently 1.6.25) both on my ancient Atari 1040 ST and on an Atari ST simulator. Part out of nostalgia of course, but also to remind myself what you could do using a CPU that is about 10000 times slower than current CPU's, running an OS that you could actually understand completely by reading the complete source code. And I am using a cross compiler these days, based on GCC 4.x running on FreeBSD (see www.beastielabs.net/prerel.html) and it is amazing to see what you still can get t

    • I think Minix was much better than Linux at the time - it was better written, cleaner and well-documented. I'm pretty sure Tanenbaum just got tired of the hassle of running the project. He wasn't good at collaborating with subordinates - basically, he wanted to do everything himself. One of Torvalds' merits - perhaps the secret of his success - is his ability to get a team round him and share out the work - somewhat surprising since he can be so rude. In practice Tanenbaum refused to extend Minix to 386's,
  • Class Act (Score:5, Interesting)

    by Anonymous Coward on Thursday July 10, 2014 @12:15PM (#47424795)

    I remember when Microsoft paid Ken Brown, through the Alexis de Tocqueville Institution, to do a hatchet job on Linus claiming that Linux was stolen from MINIX. Now Tanenbaum, who has criticized the Linux kernel design and had some spirited exchanges with Linus, could have just said nothing and let Linus fend the FUD off by himself, but instead he stepped up and did the honorable thing by decimating both Brown's claim that Linus could not have come up with the Linux kernel on his own in just a year and Brown's competency as a researcher/writer.

    http://www.cs.vu.nl/~ast/brown/
    http://www.cs.vu.nl/~ast/brown/rebuttal/

  • This is truly one of the very few profs who can talk about software design.
  • by Anonymous Coward

    In the early 80s, I did a Unix systems startup in the UK: we were an early licensee of Unix from AT&T and sold VAXen with BSD installed and supported. DEC UK hated us. DEC US happily sold us CPUs.

    In April 1983, the European Unix User's Group (EUUG), held a conference in Bonn, Germany. The speakers included Bill Joy, Sam Leffler, Steve Bourne and Andy Tanenbaum.

    It was a hugely memorable event, including Prof. Tanenbaum's presentation. We were paying AT&T $200 or so for each Unix license. Not a huge d

  • by AlexOsadzinski ( 221254 ) on Thursday July 10, 2014 @12:45PM (#47425017) Homepage

    D'oh. Accidentally posted as a Coward and misspelled Prof. Tanenbaum's name. Carry on....

    In the early 80s, I did a Unix systems startup in the UK: we were an early licensee of Unix from AT&T and sold VAXen with BSD installed and supported. DEC UK hated us. DEC US happily sold us CPUs.

    In April 1983, the European Unix User's Group (EUUG) held a conference in Bonn, Germany. The speakers included Bill Joy, Sam Leffler, Steve Bourne and Andy Tanenbaum.

    It was a hugely memorable event, including Prof. Tanenbaum's presentation. We were paying AT&T $200 or so for each Unix license. Not a huge deal for a $100,000 VAX system. But, even then, many of us could see a future where Unix or something like it would run on countless devices, including cars and washing machines. In fact, when I worked for AT&T in 1984 (yes, I know, it was "a learning experience"), I was pitching exactly that to OEMs. It was clear that something cheap or free would be required. So, back in 1983, when dinosaurs ruled the Earth, Prof. Tanenbaum gave us all the seed of a thought that free (as in beer) software could change the world.

    As an aside, his presentation was a little hard to follow because his English wasn't that great, but it was worth the effort. A Dutch guy sitting next to me said that his Dutch was pretty sketchy, too. I have no means to verify this but, if true, he would join a small group of my friends and acquaintances who don't speak any (human) language well. They're all engineers :-).

    I also learned that, despite Bonn being largely flooded because of heavy rains, nothing stops a Unix conference, and that the "Geöffnet" signs I saw all over the place weren't a promotion for a new network stack, but meant "Open" in German.

  • by RyuuzakiTetsuya ( 195424 ) <taiki.cox@net> on Thursday July 10, 2014 @12:52PM (#47425079)

    Anyone else laugh themselves stupid at some of the predictions of the future in those posts? The idea that x86 would go away and GNU/Hurd would supplant Linux...

    Predicting the future is REALLY hard.

    • by LWATCDR ( 28044 ) on Thursday July 10, 2014 @01:13PM (#47425245) Homepage Journal

      X86 has gone away. Everyone is using X86-64 and ARM. I would bet more Unix-like systems are ARM than X86 or X86-64. So is AMD64 X86-64 or X86/64? I can never remember.

      • Yeah, for mobile, but until the last 4 years, ARM really hasn't been seen as a huge thing. Relatively speaking, this is a new development. Beyond that, x86 is *still* kicking.

        Plus there's that whole bit about GNU/Hurd being the future. :)

        • by LWATCDR ( 28044 )

          Mobile, Routers, NAS, and now servers. ARM is getting very big very quickly.
          In computers, attacks come from the bottom up. PCs were a joke and could not hold a candle to a real computer like a PDP-11! Forget about mainframes like the 370!
          It was not HURD at the time but GNU Unix that was going to be the next big thing.
          It wasn't, but hey, no one is perfect.

      • X86 went away fifteen years ago. Every "x86" CPU built since the late 1990s runs an x86 layer, but underneath is a very different beast.

    • Predicting that x86 would go away was more wishful thinking than anything else. At the time, Intel had just switched from pushing the i960 to pushing the i860 and would later push Itanium as x86 replacements (their first attempt at producing a CPU that was impossible to efficiently compile code for, the iAPX432, had already died). Given that Intel was on its second attempt to kill x86 (the 432 largely predated anyone caring seriously about x86), it wasn't hard to imagine that it would go away soon...
  • I read your book! (Score:4, Interesting)

    by Daniel Hoffmann ( 2902427 ) on Thursday July 10, 2014 @12:57PM (#47425123)

    Really, his books are quite good. I used his operating systems book in my undergraduate classes, and I honestly found reading his book more productive than going to the classes.

  • by Jecel Assumpcao Jr ( 5602 ) on Thursday July 10, 2014 @07:22PM (#47428087) Homepage

    Interesting that he doesn't list Amoeba [wikipedia.org] among his achievements. I find it far more impressive than Minix.

  • The haters and trolls notwithstanding, Minix was a worthy accomplishment; and may yet prove more important in the future than first thought, given Red Hat's ongoing destruction of Linux.

    Professor Tanenbaum is a great man; and truthfully, I have always wished that Linus Torvalds had been kinder to him. Not all of us are necessarily meant to stand fully in the spotlight, and although perhaps both history and the debates proved Linus right, it would not have cost anything to allow the Professor to keep his dignity.

"If it ain't broke, don't fix it." - Bert Lantz

Working...