Andy Tanenbaum Releases Minix 3
Guillaume Pierre writes "Andy Tanenbaum announced the availability of the next version of the Minix operating system. "MINIX 3 is a new open-source operating system designed to be highly reliable and secure. This new OS is extremely small, with the part that runs in kernel mode under 4000 lines of executable code. The parts that run in user mode are divided into small modules, well insulated from one another. For example, each device driver runs as a separate user-mode process, so a bug in a driver (by far the biggest source of bugs in any operating system) cannot bring down the entire OS. In fact, most of the time when a driver crashes it is automatically replaced without requiring any user intervention, without requiring rebooting, and without affecting running programs. These features, the tiny amount of kernel code, and other aspects greatly enhance system reliability." In case anyone wonders: yes, he still thinks that micro-kernels are more reliable than monolithic kernels ;-) Disclaimer: I am the chief architect of Globule, the experimental content-distribution network used to host www.minix3.org."
live-CD (Score:5, Informative)
Re:live-CD (Score:2, Funny)
Re:live-CD (Score:3, Informative)
Re:live-CD (Score:3, Informative)
If you want a simpler windowing system you could try MGR [tldp.org].
Re:live-CD (Score:3, Informative)
The core was extremely small and simple. Unfortunately, the apps were few and far between, consisting mostly of basic tools like a terminal and a "Mickey Mouse Watch" emulator.
VMware image. Was: live-CD (Score:5, Informative)
Phew (Score:5, Funny)
*pummeling ensues*
GNU/Hurd!! I meant GNU/Hurd!!!
Re:Phew (Score:3, Informative)
True, but you overlooked the fact that I said, "I meant switch over from GNU/Linux!" [emphasis added]. You think you're contradicting me, but you're not.
Linus Torvalds -- then a student of Tanenbaum
Torvalds was never a student of Tanenbaum. Like many computing students the world over, myself included, he learned something about OS hacking using Minix, which is Tanenbaum's baby, but he wasn't a student of Tanenbaum. Tanenbaum teaches at the Vrije Universiteit in Amsterdam.
Re:Phew (Score:3, Informative)
To be fair, I do prefer the GNU userland to the BSD stuff. None of it would exist without gcc, which is probably the single most important Open Source project in existence.
Love this quote (Score:5, Interesting)
In retrospect that might have been a bit overconfident.
Re:Love this quote (Score:5, Interesting)
Re:Love this quote (Score:2, Insightful)
Comment removed (Score:5, Interesting)
Re:Love this quote (Score:3, Interesting)
Re:Love this quote (Score:3, Insightful)
Re:Love this quote (Score:2, Informative)
Well, this is partly true
Re:Love this quote (Score:3, Insightful)
I'm sorry, but, HUH? Superscalar architectures were really only made possible by RISC platforms. In traditional CISC, the variety of instruction structure made it very difficult to order the instructions for multi-execution. Not to mention that the silicon was already rather convoluted in CISC structures, thus making it even more difficult to fit superscalar execution.
The truth is that RISC won out. And so did CISC. Make sense? Allow me
Re:Love this quote (Score:5, Insightful)
Still... as fast as modern computers are, I think we may be reaching a point where raw speed is less important and well-designed microkernels can probably run almost as fast as monolithic kernels. If heavy-usage servers can be run as virtual machines in Xen, then why not use a microkernel too?
So. Any examples of microkernel OS's that handle heavy server load, function well as a desktop, and can handle multimedia tasks like gaming? OS X uses BSD on top of a microkernel, I think, but my experience is that it is slow, and the tests I've seen have shown that Linux performs a lot better than OS X on the same hardware (no idea if that was due to microkernel use). I'd find it hard to believe, with solid numbers showing that a microkernel is just as fast and without additional overhead, that someone like Linus wouldn't use it, since it's an easier programming model (better for security, stability, etc.).
Re:Love this quote (Score:3, Informative)
Re:Love this quote (Score:3, Informative)
Re:Love this quote (Score:3, Informative)
So perhaps they're millikernels?
Re:Love this quote (Score:4, Informative)
Re:Love this quote (Score:4, Informative)
Funny you should mention Xen, because it's essentially a microkernel running other kernels as protected processes.
So. Any examples of microkernel OS's that handle heavy server load, function well as a desktop, and can handle multimedia tasks like gaming?
Other posts mention QNX, so I won't bother.
I'd find it hard to believe that with solid numbers showing that microkernel is just as fast and without additional overhead that someone like Linus wouldn't use it since it's an easier programming model (better for security, stability, etc).
You'd be surprised. There's a lot of vested interest in the current programming paradigms and existing codebase. A principled microkernel architecture [sourceforge.net] might just be incompatible with POSIX, which eliminates a large swath of portable and useful software.
If you want performance, you need look no further than L4 [l4ka.org], EROS [l4ka.org] (and its successor CapROS [sourceforge.net]). For a principled design, I'd go with EROS/CapROS or the next-generation capability system Coyotos [coyotos.org] (whose designers are trying very hard to implement POSIX while maintaining capability security).
Something useful right now doesn't exist, as far as I know.
Re:Love this quote (Score:5, Informative)
The OS X kernel is a different situation. Darwin is a mixture of microkernel and monolithic, as is (for example) Linux. In Linux, a lot of things (like device configuration, etc.) get done in userspace by daemons using a kernel interface, which means the kernel need only contain the code necessary to initialize the device. Darwin's kernel (xnu), however, is more complex in terms of overall design (though the internals may be less complex - I'm not a kernel developer), and is derived from Mach 3.0 and FreeBSD 5.0.
Mach provides xnu with kernel threads, message-passing (for IPC), memory management (including protected memory and VM), kernel debugging, real-time support, and console I/O. It also enables the use of the Mach-O binary format, which allows one binary to contain code for multiple architectures (e.g. x86 and PPC). In fact, when I installed OpenDarwin quite a while ago, all the binaries that came with it were dual-architecture-enabled, meaning I could mount the same drive on PPC or x86 and execute them (which is kind of neat).
The BSD kernel provides (obviously) the BSD layer, as well as POSIX, the process model, security policies, UIDs, networking, VFS (with filesystem-independent journalling), permissions, SysV IPC, the crypto framework, and some primitives.
On top of all that is IOKit, the driver framework. It uses a subset of C++, and the OO design allows faster development with less code, and easier debugging as well. It is multi-threaded, SMP-safe, and allows for hot-plugging and dynamic configuration, and most interestingly of all, some drivers can be written to run in userspace, providing stability in the case of a crash.
Now, as to your comment about performance, it is possible you are referring to the tests done using MySQL a while back, which showed MySQL performance as being (as I recall) abysmal compared to Linux on the same hardware. The problem with that test is that MySQL uses functions that tell the kernel to flush writes to disk. These functions are supposed to block so that the program can't continue until the writes are done and the data is stored on the platter. On OS X, this is exactly what happens, and every time MySQL requests that data be flushed, the thread doing the flushing has to wait until the data is on the platter (or at the very least, in the drive's write cache). On Linux, this function returns instantly, as Linux (apparently) assumes that hard drives and power supplies are infallible, and obviously if you're that concerned about your data, get a UPS.
It should be noted that MySQL, in the online manual, strongly recommends turning that feature off for production systems, forcing Linux to block until the write is completed, and lowering performance. I would be interested to see a benchmark comparing the two with this configuration.
This discrepancy in the way Linux handles flush requests vs. the way OS X handles them gives a noticeable drop in performance in a standard MySQL situation. I am told that the version that ships with OS X Server 10.4 is modified so as to increase performance while keeping reliability. Unfortunately, I cannot confirm this at this point.
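To make that concrete, here's a minimal user-space sketch of the kind of durable write a database does at commit time (the file name and payload are made up; this is not MySQL's code): write() followed by fsync(). The whole benchmark argument comes down to whether that fsync() really blocks until the data hits stable storage or returns almost immediately.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char commit[] = "COMMIT;\n";

    int fd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (write(fd, commit, strlen(commit)) < 0) { perror("write"); return 1; }

    /* Supposed to block until the data is on stable storage. How honestly
     * the OS implements this is exactly what the benchmark ended up measuring. */
    if (fsync(fd) < 0) { perror("fsync"); return 1; }

    close(fd);
    return 0;
}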
Debugging a high performance messaging system (Score:4, Informative)
Actually, the bigger problem with microkernels is debugging. When passing messages around inside an OS, there is a potential for lots of race conditions and the like. The trick to a microkernel is getting the messages to run around as fast as possible without adding synchronization points. Every synchronization point slows the system a little, but makes the system a little more stable. Once you've optimized the system for performance, any small change to a module the kernel talks to can throw the whole thing out of balance, and you need to go back and debug the race conditions and retune the code.
In short, a kernel can be fast, flexible, or reliable. You can have two, but it is really difficult to have all three. Macro-kernels are generally fast and reliable. Micro-kernels can be fast and flexible, or flexible and reliable, but rarely are they fast and reliable.
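To illustrate what a "synchronization point" means here, a toy user-space sketch (pthreads, invented names, nothing taken from any real kernel): a single-slot message channel guarded by one mutex/condvar pair. Every send and receive funnels through that lock, which is exactly the kind of serialization that buys stability at the cost of speed.

#include <pthread.h>
#include <stdio.h>

/* A single-slot message channel. The one mutex/condvar pair below is the
 * "synchronization point": every send and receive serializes on it. */
struct channel {
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    int             has_msg;
    int             payload;
};

static struct channel ch = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0
};

static void chan_send(int value)
{
    pthread_mutex_lock(&ch.lock);            /* synchronization point */
    ch.payload = value;
    ch.has_msg = 1;
    pthread_cond_signal(&ch.ready);
    pthread_mutex_unlock(&ch.lock);
}

static int chan_recv(void)
{
    pthread_mutex_lock(&ch.lock);            /* synchronization point */
    while (!ch.has_msg)
        pthread_cond_wait(&ch.ready, &ch.lock);
    ch.has_msg = 0;
    int value = ch.payload;
    pthread_mutex_unlock(&ch.lock);
    return value;
}

static void *driver_thread(void *arg)
{
    (void)arg;
    chan_send(42);                           /* the "driver" replies to a request */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, driver_thread, NULL);
    printf("got reply: %d\n", chan_recv());
    pthread_join(t, NULL);
    return 0;
}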
Re:Love this quote (Score:3, Informative)
Cheriton's V System [wikipedia.org] didn't suck. I used it in a commercial project in the late 1980s and loved it; it met all of your criteria. If Cheriton had open-sourced it, I think it would have had a huge impact. But he didn't, and for whatever reason it hasn't.
Re:Love this quote (Score:3, Interesting)
I'm surprised nobody's mentioned this yet... there was an article in the latest C/C++ User's Journal titled Interprocess Communication & the L4 Microkernel [cuj.com]. Made for interesting reading. The main idea seems to be that traditional microkernel designs spend too much time and effort having the kernel validate
Re:Love this quote (Score:3, Informative)
Re:Thank-you "It's fast enough" Lemmings (Score:4, Interesting)
Sure, but all things are not equal, and an improvement that obviously should improve performance often doesn't. OS programming isn't quite the same as pure user-level programming, so optimizations aren't so clear-cut.
People often assume that monolithic kernels are faster than properly designed microkernels due to less supervisor-user context switching, but this often isn't the largest cost in modern systems. For instance, monolithic kernels can't/don't schedule their interrupt handling through the common scheduler, so they're nearly impossible to use for realtime.
Reliability, flexibility and security are also dramatically improved in decomposed microkernel designs, so really, suffering a 5-10% performance penalty to gain an extra few 9's in reliability (99.99% uptime, roughly an hour of downtime a year, as opposed to 99%, which is more than three days a year) is often worth it IMO.
Re:BeOS and AmigaOS are microkernels (Score:3, Funny)
Re:Love this quote (Score:3, Insightful)
"In retrospect that might have been a bit overconfident."
Perhaps, but it's true as stated. The consensus among OS designers really was that microkernels were superior. Linus opted for a monolithic kernel because he didn't believe in microkernels, but he was the odd one out. Linux's success
Honest question (Score:5, Interesting)
Re:Honest question (Score:2)
Re:Honest question (Score:2)
Re:Honest question (Score:3, Funny)
355 South 520 West
Suite 100
Lindon, UT 84042
Re:Honest question (Score:5, Interesting)
All three are more or less POSIX-compliant operating systems, which means that most software should run on all of them.
Some software that requires functionality not found in POSIX will need to be ported separately -- like xorg, for instance. But most of the GNU tools will probably run (probably), and then there is the question of what C library it uses: if it uses its own, there are going to be a whole myriad of other interesting problems; if it uses glibc or BSD's libc, then it's easier.
in other words
Re:Honest question (Score:5, Informative)
Re:Honest question (Score:3, Insightful)
Re:Honest question (Score:3, Informative)
No, Minix used (uses? not sure what the "new" Minix uses) the Minix filesystem, which was only able to address (I think) 32 MB of space.
Linux got the ext2 filesystem at a later date, migrating away from the Minix filesystem. If you compile your kernel, you'll still see the option to have the Minix filesystem functionality compiled in (or built as a module).
I still remember having to decide if I wanted to go with the "new" ext2 filesystem, which will not
Answer from someone who was there (Score:4, Interesting)
Linux is based on MINIX. It was built on MINIX, using MINIX. It started off as Linus's weekend hack to build a 386-specific replacement kernel, so he could have MINIX with pre-emptive multi-tasking and memory protection. Andy Tanenbaum didn't want to make MINIX 386-specific because, like the NetBSD and Debian folks, he was trying to make something that would be portable to lots of different hardware. (Like the Atari ST I was running it on.)
Then there was the big flamewar over monolithic kernels vs modular microkernels. Linus went off in a huff and turned Linux into a complete OS by ripping out all the MINIX and adding all the GNU stuff instead. Then over the years he introduced a modular kernel and made it portable to multiple architectures, basically admitting he was wrong but never saying so.
At that point, Linux started to become usable as an OS. And in the mean time, MINIX had been killed by toxic licensing policies of the copyright owner (not Andy Tanenbaum). That, and the x86 architecture had expanded to 90% of the market. So, we arrived at the situation we have today, where MINIX is largely forgotten, and we have a MINIX-like Linux with all the mindshare.
And now, ironically, Andy Tanenbaum has made MINIX 3 only run on the x86. So perhaps he and Linus can now both admit they were wrong in major respects, and make friends?
Re:Answer from someone who was there (Score:4, Interesting)
There was already a 386-specific 32-bit version of the MINIX kernel around at the time; it was called MINIX-386, unsurprisingly enough, and was widely used in the MINIX hacker community.
There wasn't ever any MINIX code in Linux - there couldn't have been, as MINIX was a commercial product at the time. What there was, was plenty of minor MINIX influences on the design (lack of raw disk devices, "kernel", "fs" and "mm" subdirectories in the kernel source, Minix-compatible on-disk filesystem format, major/minor device numbers etc.) but no major ones (ie. the microkernel paradigm).
Well, yes, you had to pay for MINIX, but there were no free OSs to speak of in those days. The reason MINIX seemed to disappear was that most of the MINIX hacker types were using MINIX because it was the closest thing to real UNIX they could afford. Once Linux appeared, as open source, with its simple goal of being a UNIX clone (rather than a model OS for teaching purposes, as MINIX was meant to be), it was inevitable that most of the MINIX hacker community would migrate en masse.
Re:Answer from someone who was there (Score:3, Informative)
Re:Honest question (Score:5, Informative)
Re:Honest question (Score:5, Informative)
Ah, no, Linus was definitely not Tanenbaum's student. Quite aside from the fact that Tanenbaum taught in the Netherlands and Linus studied in Finland, we couldn't have this quote if he were:
However, Linus did admit that at least one academic from his own university shared Tanenbaum's opinions, and thus he was unlikely to be getting high marks anyway. ;-)
As has been pointed to before, you can find an abstract of the famous "Linus vs Tanenbaum" posts to comp.os.minix here [fluidsignal.com].
Re:Tanenbaum blew it (Score:3, Informative)
As to the reason for this, Andy T. had wanted to make the source widely available, but at that time (1990) there wasn't widespread access to the Internet (you had to be at a university that was on it, which many weren't, or you had to work at a government shop). So the only choice was to have it published in the traditional manner
About Tanenbaum. (Score:3, Informative)
http://www.cs.vu.nl/~ast/ [cs.vu.nl]
Yes, it's the same guy who wrote the book for your networking course.
Thank you, Dr. Tanenbaum. (Score:3, Insightful)
Re:Thank you, Dr. Tanenbaum. (Score:5, Funny)
Yes, thank you. My sadistic operating systems professor used your textbook. Your name still gives me nightmares to this day.
Re: (Score:3, Interesting)
Re:Thank you, Dr. Tanenbaum. (Score:4, Insightful)
I really appreciate and love the FSF, GPL, free software, and GNU tools but I'm never going to call Linux GNU/Linux. Let him put his adverts on his own damn stuff rather than trying to convince everyone else to do so. Just because my car sits on a foundation of Goodyear tires doesn't mean I'm going to call it a Goodyear/Focus. Sure that might sound like a new years resolution but it's still dumb and a nightmare in product advertising. I'd be shocked to see Ford advertising it that way.
Re:Thank you, Dr. Tanenbaum. (Score:3, Interesting)
In the case of a typical Linux-kernel based distribution, the bulk of what you interact with (if you take out the GUI), i.e., the part that people see and feel (bash, cp, ls, gcc, emacs (ducks)), is all GNU.
Of course, I've left out anything related to the GUI above. To the old-timer
Comment removed (Score:3, Interesting)
Re:Minix on a modern machine? (Score:5, Informative)
Comment removed (Score:5, Informative)
The code is nice (Score:5, Interesting)
"Scivoli": http://www.dedasys.com/freesoftware/ecos.html [dedasys.com]
and an article (in Italian): http://www.dedasys.com/articles/ecos.html [dedasys.com]
for those that don't know (Score:5, Funny)
Re:for those that don't know (Score:3, Funny)
Re:for those that don't know (Score:2, Insightful)
Is there a VMWare disk image? (Score:4, Interesting)
Re:Is there a VMWare disk image? (Score:4, Funny)
Slashdotted. Any mirrors?
But... (Score:2, Funny)
(points the right pinky finger to the lip and laughs hysterically)
Re:But... (Score:3, Informative)
Why, yes [cs.vu.nl], yes it does.
I'm sure nothing will egg him more (Score:2, Funny)
Just think of the irony of the Father, so to say, being listed as a Child.
While not a direct descendant, Minix was CERTAINLY the original inspiration for Linus (I'll take that as gospel since Linus himself said so).
But now, what, nearly 15-18 years later, and we see no Minix section on Slashdot....
Deja vu (Score:2, Informative)
However, while I agree that microkernels are conceptually smarter, Linux has clearly won the "Unix on PC hardware" contest. But then again, as far as I could tell, that contest was never on AST's agenda anyway. For him the Minix system was a teaching tool.
Just for fun (Score:2)
The Tanenbaum-Torvalds Debate [oreilly.com]
remembering the old tanenbaum vs linus debates.. (Score:2, Interesting)
More reliable (Score:3, Interesting)
Does anybody dispute this?
AFAIK reliability is not the main pressing benefit of a monolithic kernel design so much as being able to scramble all over the internal structs of other kernel modules without needing a context switch, which can be very helpful and quick.
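A rough, purely illustrative sketch of that difference (invented struct and function names; the ipc_sendrec() here is a local stub, not Minix's real primitive): in a monolithic kernel, asking another subsystem a question is a pointer dereference; in a microkernel it turns into a request/reply message to the driver's server process.

#include <stdio.h>

/* Invented stand-in for another kernel module's internal state. */
struct blk_queue {
    int requests_pending;
};

static struct blk_queue disk0 = { 3 };

/* Monolithic style: no context switch, no copy -- just read the field. */
static int pending_direct(void)
{
    return disk0.requests_pending;
}

/* Microkernel style: the same query marshalled into a request/reply pair.
 * In a real system this would cross an address-space boundary; here the
 * IPC primitive is only a stub so the sketch compiles on its own. */
struct msg { int type; int value; };

static void ipc_sendrec(struct msg *m)
{
    if (m->type == 1)                 /* "how many requests are pending?" */
        m->value = disk0.requests_pending;
}

static int pending_via_ipc(void)
{
    struct msg m = { 1, 0 };
    ipc_sendrec(&m);
    return m.value;
}

int main(void)
{
    printf("direct: %d, via IPC: %d\n", pending_direct(), pending_via_ipc());
    return 0;
}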
Sam
Old laptops (Score:2, Interesting)
Kim0
Re:Old laptops (Score:5, Informative)
I found myself in a similar situation once: Linux or Solaris wouldn't fit, with a reasonable amount of useful stuff, on the 200 MB hard drive of an old Sun. Then I managed to fit most of the NetBSD distro, with two desktop managers, Netscape Navigator (pre-Mozilla times), and a bunch of servers for running a remote diskless workstation, and still managed to carve out 40 MB of disk space as swap for that remote workstation
Re:Old laptops (Score:3, Interesting)
X11 port? (Score:3, Interesting)
Which implementation of X is it that is being ported? I would hope that it is X.org, and at least the 6.8.2 release.
Tanenbaum gets a failing grade (Score:5, Insightful)
But, geez, how often do microkernels have to fail before Tanenbaum will admit that there must be something fundamentally wrong with his approach, too? Microkernels attempt to address the right problem (kernel fault isolation), just in such an idiotic way that they keep failing in the real world. But instead of a detailed critical analysis of previous failures, Tanenbaum and Herder just go on merrily implementing Minix 3, apparently on the assumption that all previous failures of microkernels were just due to programmer incompetence, an incompetence that they themselves naturally don't suffer from.
Both Linux-style monolithic kernels and Tanenbaum-style microkernels are dead ends. But at least Linux gets the job done more or less in the short term. In the long term, we'll probably have to wait for dinosaurs like Tanenbaum to die out before a new generation of computer science students can approach the problem of operating system design with a fresh perspective.
Re:Tanenbaum gets a failing grade (Score:5, Interesting)
If the microkernel were combined with a safe language, like Java or C#, then the problems would go away. You wouldn't need to change the page table, so that massive penalty is not there. Accessing memory through a memory object would allow any arbitrary range (down to single bits). You could also apply a filter, so the driver could implement the commands to the disk but the hardware access object would only allow valid use of the bus; this wouldn't be perfect but would greatly increase reliability over microkernels, which are already much more reliable than monolithic kernels.
And speed? It could be faster than C-based code for various reasons (using the dirty bit to accelerate garbage collection, no context switches, etc.). It's not like there isn't precedent: the Berkeley packet filter is actually an interpreted bytecode that is run inside the kernel. It has a number of restrictions to ensure safety (like only branching forwards), but basically in all Unix operating systems it is a giant switch statement that interprets the bytecode. This is plenty fast enough to handle the packets, orders of magnitude faster than sending the packets into user space.
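For anyone who hasn't looked at it, the flavour of that approach is roughly this -- a toy accumulator machine with made-up opcodes (not real BPF), run by one big switch, where branches may only skip forward so every program is guaranteed to terminate:

#include <stdint.h>
#include <stdio.h>

/* Made-up opcodes: load pkt[k] into the accumulator; if acc == k skip jt
 * instructions; return k as the verdict. */
enum { OP_LD_BYTE, OP_JEQ, OP_RET };

struct insn { uint8_t op; uint32_t k; uint8_t jt; };

static uint32_t run(const struct insn *prog, int len,
                    const uint8_t *pkt, int pkt_len)
{
    uint32_t acc = 0;

    for (int pc = 0; pc < len; pc++) {
        const struct insn *i = &prog[pc];
        switch (i->op) {                 /* the "giant switch statement" */
        case OP_LD_BYTE:
            acc = ((int)i->k < pkt_len) ? pkt[i->k] : 0;
            break;
        case OP_JEQ:                     /* forward-only branch: pc never shrinks */
            if (acc == i->k)
                pc += i->jt;
            break;
        case OP_RET:
            return i->k;                 /* 0 = drop, nonzero = accept */
        }
    }
    return 0;
}

int main(void)
{
    /* "Accept the packet if byte 0 is 0x45 (IPv4, header length 5 words)." */
    const struct insn prog[] = {
        { OP_LD_BYTE, 0x00, 0 },
        { OP_JEQ,     0x45, 1 },         /* equal: skip the drop below */
        { OP_RET,     0,    0 },
        { OP_RET,     1,    0 },
    };
    const uint8_t pkt[] = { 0x45, 0x00, 0x00, 0x54 };

    printf("verdict: %u\n", (unsigned)run(prog, 4, pkt, (int)sizeof pkt));
    return 0;
}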
If Tanenbaum really cared about reliability or safety or simplicity he would make a managed microkernel, not more of this C/asm based crap.
agreed 100% (Score:4, Interesting)
It is just astounding to me that while anybody else would be laughed at if they tried to write a modern, complex application in ANSI C, operating system designers are somehow considered special, as if concepts like "abstraction", "error handling", and "runtime safety" didn't matter for kernels that are millions of lines big.
Re:agreed 100% (Score:3, Informative)
Oh, I get it. I'm supposed to distinguish microkernels from other things only in terms of aspects you dislike, allowing you to appropriate anything you do like for "your side" of the debate? Sorry, not falling for it. As I said earlier, both microkernels and OOP are variants of the same basic concept: modularit
Re:Tanenbaum gets a failing grade (Score:3, Informative)
As for microkernels being new, they are not. They have been around
Re:Tanenbaum gets a failing grade (Score:3, Interesting)
Bullshit. He has written whole books on the subject, stating his claims and supporting them with data more than you will ever do in your entire lifetime.
Re:Tanenbaum gets a failing grade (Score:3, Insightful)
The vast majority of what a monolithic kernel does *is* application-like. Keeping track of lists and hashtables of sockets, files, processes. Accessing them in O(1) time. Allocating and deallocating memory for those. Keeping track of buffers and balancing cache sizes. These are all the thing
Re:Tanenbaum gets a failing grade (Score:3, Insightful)
The classic response of someone who has never actually done any serious kernel work. Just because it's not CS-hard doesn't mean it's not real-world-hard, and I think I know a bit about that because I've been solving problems that are hard in both senses for quite a while. Your use of "naysayer" onl
Re:Tanenbaum gets a failing grade (Score:5, Interesting)
How unscientific of you, to draw such an unrelated conclusion from that data. The operating systems you have mentioned have been less prevalent in the marketplace. They have had smaller numbers of users and, more importantly, engineers working on them than, say, Linux. That does not in any way demonstrate that they are more difficult to extend and maintain, as there are plenty of other reasons for what has happened in the market. How do you know that they wouldn't be even more marginal than they are had they been designed as monolithic kernels, or that Linux wouldn't be even more successful if it had been designed as a microkernel? What actual evidence do you have to say otherwise? None.
Wrong. Shared vs. separate address space is an implementation choice; microkernel vs. monolithic is a design choice. That microkernels are typically implemented using separate address spaces is irrelevant. If you had actually worked in operating systems you'd also know that totally shared vs. totally separate are not the only options for address-space relationships. Linux's process/thread/address-space model is based (without attribution, naturally) on something called nUnix, which Dave Mitchell implemented and I inherited at Encore before Linux existed. One of its key ideas is/was that parts of a process could be shared without having to share everything. SysV's shared memory goes back even further than that. Even within the kernel one can implement some forms of inter-component memory protection without resorting to completely separate address spaces. Been there, done that. I was the one who wrote code at Encore to let two operating systems run side by side in the same box, on separate sets of processors with separate memory and exception vectors and - most importantly - so that a memory fault in one wouldn't take the other down. That was done in a slightly different context (one was an RTOS and the other a GPOS) but the same techniques could just as easily be applied to a microkernel.
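For readers who haven't seen partial address-space sharing in practice, here is a small user-space analogue using plain POSIX calls (illustrative only, not code from nUnix or anything at Encore): after fork() the two processes have entirely separate address spaces except for the one page mapped MAP_SHARED.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* One anonymous page that both parent and child will see. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    char private_buf[64] = "parent-only data";    /* NOT shared after fork */

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return EXIT_FAILURE; }
    if (pid == 0) {                               /* child */
        strcpy(shared, "written by the child");   /* visible to the parent  */
        strcpy(private_buf, "child's own copy");  /* parent never sees this */
        _exit(0);
    }

    waitpid(pid, NULL, 0);
    printf("shared page : %s\n", shared);         /* shows the child's write   */
    printf("private buf : %s\n", private_buf);    /* still the parent's string */

    munmap(shared, 4096);
    return 0;
}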
And how is that "safe, dynamic runtime" not itself a microkernel?
Yeah, I'm such a dinosaur, working here in what is acknowledged as one of the leading-edge areas of the storage industry, implementing a highly available distributed system. Riiight. What do you do that anyone should remember you for, Mr. Expert? In actual fact I don't think it's a good idea to write "millions of lines of C..." when one has a choice. If I were to design another microkernel it would be every bit as current in terms of software-engineering methodology as anything you've ever done, but that doesn't mean it would be implemented as a "managed code" environment. I understand the concept of informed tradeoffs as opposed to mere uninformed dogma. I produce working and shipping systems based on that. Come back when you've done likewise.
OS development doesn't need to be made hard; it's inherently hard. In an OS, everything you do has to be done with an eye toward reentrancy and concurrency, performance and minimal resource consum
Re:Tanenbaum gets a failing grade (Score:3, Insightful)
Not everyone
Re:Tanenbaum gets a failing grade (Score:4, Interesting)
Even if what you say is true, there are plenty of others making very detailed analyses of microkernel architectures [eros-os.org], and they have reached many of the same conclusions: that microkernels are feasible, useful, and practical, if done correctly. The real problem is that microkernels have a stigma attached to them, and that they are hard to properly design and implement. As such, only researchers have traditionally worked on them, and we all know researchers are not going to develop a full-featured system like Linux. Hopefully, that will change with CapROS [sourceforge.net].
See my other posts in this thread:
Performance [slashdot.org]
Examples of microkernel systems [slashdot.org]
Recollections (Score:5, Interesting)
Cruising the newsgroups was pretty much the done thing at the time, and comp.os.minix was pretty high on my list for obvious reasons. I saw this stuff happening at the time and, knowing that AST was always pretty direct, was entertained by the whole flame war thing. Anyway, my point is that AST saw MINIX as an OS-theory educational tool and Linus saw it as too defective to be even that, and as such Linux was better. Funny, I agree with them both, kinda. I could never have kernel-hacked Linux like I did MINIX at the time, and MINIX could never have become my primary desktop at home like Linux is now. I guess they were just talking at cross purposes even then. Pretty much standard flamewar
Tanenbaum still ahead! (Score:5, Funny)
It's all BSD licensed (Score:5, Informative)
This makes it, as far as I know, the only completely BSD licensed Unix-like operating system in the world. Even the big BSDs can't claim that, as they all rely on gcc.
I was in on the Minix beta testing. It's actually extremely impressive. It's quite minimalist; most of the shell commands are pared down to their bare minimum --- for example, tar doesn't support the j or z flags --- and it tends towards SysV rather than BSD with things like options to ps. It runs happily on a 4MB 486 with 1GB of hard drive, with no virtual memory, and will contentedly churn through a complete rebuild without any trouble whatsoever. Slackware users will probably like it.
Driver support isn't particularly great; apart from the usual communications port drivers, there's a small selection of supported network cards, an FDD driver, an IDE-ATA driver that supports CDROMs, and a BIOS hard disk driver for when you're using SCSI or USB or some other exotic storage. The VFS only supports a single filesystem, MinixFS (surprise, surprise!), but allows you multiple mountpoints. In order to read CDs or DOS floppies you need external commands.
There's no GUI, of course.
As a test, as part of the beta program, I did manage to get ipkg working on it. This required a fair bit of hacking, mostly due to ipkg assuming it was running on a gcc/Linux system, but it did work, and I found myself able to construct and install .ipk packages --- rather impressive. Now that the real thing's been released, I need to revisit it.
Oh, yeah, it has one of the nicest boot loaders I've ever seen --- it's programmable!
Re:It's all BSD licensed (Score:3, Informative)
Must be a lot of added bloat in there. Minix 1.5 used to run very happily on a PC XT w/ 640K RAM and a 40 MB disk. It would run on a minimal machine w/ as little as 256K RAM and 2 360K floppies. I haven't booted it in a century or so, but I still have an XT with Minix installed on it and a box of 20 or so 360K floppies with binaries and sour
Software? (Score:3, Insightful)
What about software for Minix?
On the website there is info about packages - gcc, vim/emacs, an old Python, no ncurses, no X... What can I install (by compiling) on Minix, what is not possible, and why?
Device drivers: Andrew is living in denial. (Score:5, Interesting)
This is all well and good until the crashing device driver locks the system bus or triggers an NMI, etc. And what if the device driver in question is the one accessing the disk? How does the microkernel recover from that one when it can't access the drive the device driver is sitting upon?
I can see where his thought processes are coming from, but I still think he lives in Computer Science Heaven, I'm afraid, where all hardware is mathematically perfect and I/O never happens (as it's not mathematically provable).
In the real world device drivers hardly ever crash the system 'cos they're kernel mode; they crash it because they hard-hang the system or deny the kernel the resources to dig itself out of the hole. Neither of these changes by moving the code into user space.
Re:Device drivers: Andrew is living on denial. (Score:3, Interesting)
If by "him" you mean Andy Tanenbaum you probably ought to give him the benefit of the doubt, as his position is being represented by some random slashdot person. Maybe just email him.
user mode device drivers (Score:4, Interesting)
With common user systems as cheap and fast as they are now, do user mode device drivers make sense? Is the performance worth giving up for the stability? Check out Microsoft's User-mode Driver Framework [microsoft.com] approach. Here is an old linux journal article [linuxjournal.com] on the subject. Does anyone know of other interesting examples of user mode device drivers on any operating systems?
No source? (Score:3, Interesting)
Re:No source? (Score:3, Informative)
Yes, it is --- it's on a Minix filesystem tucked away at the top of the ISO filesystem. If you boot the CD, you'll get a complete Minix LiveCD based system, with all the source on it.
If you want to access it from Linux you'll need to persuade Linux to parse the partition table on the CD, which it normally won't do --- the easiest way to
Microkernels... (Score:3, Insightful)
Of course he does. Everyone does. The old argument between Linus and Andy was never about reliability. It was about *practicality* and *efficiency*. Microkernels usually incur a lot of overhead. Andy thought the overhead was worth it; Linus didn't.
System Requirements (Score:4, Interesting)
This is supposed to be a simple OS, much simpler than the first version of Linux.
uClinux can run in 1 MB. Older versions can be trimmed enough to run in 200 KB even, but that's pushing it. Minix now requires 16 MB!!! That's more than ANY BSD out there.
I was interested in running it on MCUs with small RAM and flash. Trimming down uClinux to the extreme gets it to 200 KB of RAM for the kernel and one shell. eCos requires under 64 KB for simple compilations. eCos is POSIX for the most part, but there are hardly any schedulers in there, and no real filesystem drivers or calls.
Minix is a full OS, but being that simple, I expected the kernel to fit in 64 KB of RAM. I guess I'll use NetBSD as a simpler OS to study before graduating on to Minix 3.
Re:System Requirements (Score:5, Informative)
Globule? (Score:5, Funny)
Translation: "Please load-test my network."
Re:This guy told linus (Score:5, Insightful)
Linus would have deserved that "F" in operating system design, but he wasn't writing his kernel to get grades on a computer course. If he had been then he probably wouldn't have written a crude, monolithic kernel that was totally unportable. Apart from the crudity of it, those were his explicit goals - to write a monolithic kernel that would run optimally on his 80386. (Bear in mind that the Linux kernel we know today is pretty far removed from that early version in design and implementation).
As for AT, he's a very smart guy. He writes books on operating system design and networking that clearly describe quite complex topics. Even if you don't like the idea of microkernels, the "Operating Systems ..." [prenhall.com] book that describes the Minix kernel is an excellent read.
Educational standards on Slashdot... (Score:5, Interesting)
Minix and his work are key references for writing pretty much any OS, and his work in computer networking and distribution in particular is top notch. His stuff is very much NOT Ivory Tower (I speak as someone who has had to do bespoke OS work) and a very practical way to build operating systems and overcome networking challenges. Heard of the OSI model for networking? Most of the rest of us have heard of it thanks to Andy's work, because we couldn't afford the official reference from ANSI/ISO.
Out of interest, what is it you have done?
Re:Mirrors, Mirrors on the wall (Score:2, Informative)
Re:Would be nice if... (Score:3, Informative)
Not sure how current it is, and it might be for 68000-based CPUs if I'm not confusing it with another project.