Torvalds on the Microkernel Debate 607
diegocgteleline.es writes "Linus Torvalds has chimed in on the recently flamed-up (again) micro vs. monolithic kernel debate, but this time with an interesting and unexpected point of view. From the article: 'The real issue, and it's really fundamental, is the issue of sharing address spaces. Nothing else really matters. Everything else ends up flowing from that fundamental question: do you share the address space with the caller, or, put in slightly different terms: can the callee look at and change the caller's state as if it were its own (and the other way around)?'"
Linus Quote (Score:5, Informative)
"In the UNIX world, we're very used to the notion of having
many small programs that do one thing, and do it well. And
then connecting those programs with pipes, and solving
often quite complicated problems with simple and independent
building blocks. And this is considered good programming.
That's the microkernel approach. It's undeniably a really
good approach, and it makes it easy to do some complex
things using a few basic building blocks. I'm not arguing
against it at all."
Re:Linus Quote - "not arguing against it at all" (Score:5, Informative)
"The fundamental result of access space separation is that you can't share data structures. That means that you can't share locking, it means that you must copy any shared data, and that in turn means that you have a much harder time handling coherency. All your algorithms basically end up being distributed algorithms.
And anybody who tells you that distributed algorithms are "simpler" is just so full of sh*t that it's not even funny.
Microkernels are much harder to write and maintain exactly because of this issue. You can do simple things easily - and in particular, you can do things where the information only passes in one direction quite easily, but anything else is much much harder, because there is no "shared state" (by design). And in the absence of shared state, you have a hell of a lot of problems trying to make any decision that spans more than one entity in the system.
And I'm not just saying that. This is a fact. It's a fact that has been shown in practice over and over again, not just in kernels. But it's been shown in operating systems too - and not just once. The whole "microkernels are simpler" argument is just bull, and it is clearly shown to be bull by the fact that whenever you compare the speed of development of a microkernel and a traditional kernel, the traditional kernel wins. By a huge amount, too.
The whole argument that microkernels are somehow "more secure" or "more stable" is also total crap. The fact that each individual piece is simple and secure does not make the aggregate either simple or secure."
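Linus's "you must copy any shared data" point is easy to see in miniature. Here's a hedged Python sketch (nothing below is real kernel code; fork() and a pipe merely stand in for a microkernel's separate servers and its IPC):

```python
import os

def demo():
    state = {"count": 0}
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child: a separate address space. This mutation only
        # touches the child's private copy of `state`.
        state["count"] = 42
        # The only way to share the result is an explicit message.
        os.write(w, str(state["count"]).encode())
        os._exit(0)
    os.waitpid(pid, 0)
    child_view = int(os.read(r, 16))
    os.close(r)
    os.close(w)
    # The parent's copy is untouched: there is no shared state
    # across the address-space boundary, hence copying + messaging.
    return state["count"], child_view
```

The parent still sees 0 while the child computed 42 - any coherency between the two has to be built by hand out of messages, which is exactly the "distributed algorithms" problem being described.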
Re:Linus Quote - "not arguing against it at all" (Score:5, Interesting)
Individual pieces aren't really any simpler either. In fact, if you want your kernel to scale, to work well with lots of processes, you are going to run into a simple problem: multitasking.
Consider a filesystem driver in a monolithic kernel. If a dozen or so processes are all doing filesystem calls, then, assuming proper locking and in-kernel pre-emption, there's no problem - each process that executes the call enters kernel mode and starts executing the relevant kernel code immediately. If you have a multiprocessor machine, they could even be executing the calls simultaneously. If the processes have different priorities, those priorities will affect the CPU time they get when processing the call too, just as they should.
Now consider a microkernel. The filesystem driver is a separate server process. Executing a system call means sending a message to that server and waiting for an answer. Now, what happens if the server is already executing another call ? The calling process blocks, possibly for a long time if there's lots of other requests queued up. This is an especially fun situation if the calling process has a higher priority than some CPU-consuming process, which in turn has a higher priority than the filesystem server. But, even if there are no other queued requests, and the server is ready and waiting, there's no guarantee that it will be scheduled for execution next, so latencies will be higher on average than on a monolithic kernel even in the best case.
Sure, there are ways around this. The server could be multi-threaded, for example. But how many threads should it spawn ? And how much system resources are they going to waste ? A monolithic kernel has none of these problems.
I don't know if a microkernel is better than a monolithic kernel, but it sure isn't simpler - not if you want performance or scalability from it. If you don't, then a monolithic kernel can be made pretty simple too...
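The queueing scenario above can be modeled with a toy single-threaded server - the names and numbers are made up, it's only illustrating the FIFO arithmetic:

```python
def serve_fifo(queued, cost):
    """Completion time of each request at a single-threaded
    server that works through its queue strictly in arrival
    order, each call costing `cost` time units."""
    clock, done = 0, {}
    for name in queued:
        clock += cost
        done[name] = clock
    return done
```

With three requests already queued, a fourth caller waits 4 * cost regardless of its scheduling priority: serve_fifo(["a", "b", "c", "hi"], cost=5)["hi"] comes out to 20, while under a preemptible monolithic kernel the high-priority caller could have entered the filesystem code immediately.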
Re:Linus Quote - "not arguing against it at all" (Score:5, Interesting)
That's a pretty big assumption. Or rather, you have basically taken all the hard parts of doing shared code and said "Let's hope someone else already solved this for us".
Sooooo, it's easy to have someone else handle the multi-process bits in a monolithic design. But when it comes to writing services for microkernels suddenly everyone is an idiot?
Besides, as Linus pointed out, when data is going one way microkernels are easy. And in the case of file systems that really is the case. Sure, multiple processes can access it at once, but the time scale for handling the incoming signals is extremely fast compared to waiting for data from disk. Only a really, *really* incompetent idiot would write a server that blocked until the read was finished.
Re:Linus Quote - "not arguing against it at all" (Score:4, Funny)
This sounds like a veiled reference to something; would you care to name it?
Re:Linus Quote - "not arguing against it at all" (Score:3, Funny)
1) [root@compy1
2) Unplug cable (power or network) to COMPY2
3) [root@compy1
It will be done right when Duke Nukem: Forever finishes installing.
Re:Linus Quote - "not arguing against it at all" (Score:3, Informative)
Try
mount -o soft compy2:/share
or
mount -o intr compy2:/share
instead
man 8 mount
Multiple-processes: micro vs monolithic (Score:3, Insightful)
Re:Multiple-processes: micro vs monolithic (Score:5, Funny)
This micro-post shows a division into separable units.
Using message passing, I can efficiently communicate this to you.
Note that other readers may be reading different sections of my post while you read this one.
This section of my post never has to access internal structures of the other sections. In fact, I could have written each section in any order. Feel free to reorder them yourself.
Re:Linus Quote - "not arguing against it at all" (Score:2, Interesting)
Both approaches have their merits, and in the real world you will never see a purely central organisation or a pure
Re:Linus Quote - "not arguing against it at all" (Score:5, Insightful)
The analogy of centralisation vs. local autonomy is not totally accurate either. Both the monolithic and the microkernel are centralized, except that in the first case there is a large bureaucratic structure and in the second case it is just a dictator and a couple of "advisors". If the dictator or the king is chosen well, the system will be more predictable and will work much better. In the case of the large bureaucratic system, if some of its members get corrupted [and they will, because there are so many of them] the whole system will fail. It is like saying that a small bug in the mouse driver will freeze and crash the system with a monolithic kernel. Good thing if the system was only running Doom at the time and not controlling a reactor, or administering a drug. If the same happens in the microkernel system, the kernel will reload the driver, raise an alarm, or in general be able to take the system to a predictable predetermined state. Going back to the analogy, it is like having the dictator execute a corrupted staff member and replace him immediately.
Proving correctness & why it doesn't work (Score:3, Insightful)
The problem is that it doesn't match the way most people work right now.
Check out this brilliant paper by Alistair Cockburn (spoken as Co-burn) - Characterizing People as Non-Linear, First-Order Components in Software Development [cockburn.us]. Over and over in this paper he says:
Microkernels can still be optimized (Score:3, Insightful)
Agreed, but these are the same people who don't really care if they're at the bleeding edge of technology and have to deal with super-l33t video driver 0.8.2b crashing every so often.
It is just a false premise, just because some monolithic (or hybrid) kernels are unreliable does not mean
Re:Microkernels can still be optimized (Score:3, Insightful)
It is not proven at all, and the top kernel programmers in the world agree. If they didn't, this discussion wouldn't be occurring. If you mean that the code is simpler to audit because there is less of it, you are mistaken. A microkernel architecture only involves less code in the kernel, if you consider all the code needed to provide the functionality
Re:Linus Quote - "not arguing against it at all" (Score:4, Insightful)
This is very true.
Consider a filesystem driver in a monolithic kernel. If a dozen or so processes are all doing filesystem calls, then, assuming proper locking and in-kernel pre-emption, there's no problem - each process that executes the call enters kernel mode and starts executing the relevant kernel code immediately.
OK, here's where things start getting a little tricky. The whole locking setup in a monolithic kernel is pretty tricky. Early multi-processor kernels often took the course of "one big lock" at the top of the call stack - essentially only one process could be executing in the kernel. Why? Because all that "proper locking" is tricky. Took years to get this working right. Of course it's done now in Linux so you can take advantage of it, but it wasn't easy.
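For the curious, the "one big lock" vs. fine-grained distinction looks roughly like this in miniature (Python threads standing in for CPUs; Inode and nlink are illustrative toys, not real Linux structures):

```python
import threading

# "One big lock": every path into the toy kernel serializes here,
# like the early SMP kernels described above.
big_lock = threading.Lock()

class Inode:
    def __init__(self):
        self.lock = threading.Lock()  # fine-grained: one lock per structure
        self.nlink = 1

def link_big(inode):
    # Trivially correct: only one thread is ever "in the kernel".
    with big_lock:
        inode.nlink += 1

def link_fine(inode):
    # Distinct inodes can now be updated in parallel - at the
    # price of reasoning about lock ordering kernel-wide.
    with inode.lock:
        inode.nlink += 1
```

Both versions produce the same answer; the hard, multi-year part was proving that the fine-grained version deadlocks nowhere while unrelated paths run concurrently.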
Now consider a microkernel. The filesystem driver is a separate server process. Executing a system call means sending a message to that server and waiting for an answer.
OK, now here, you're kind of running off the rails. What is a "message"? There is no magical processor construct called a "message" - it's something that the OS provides. How messages are implemented can vary quite a bit. What you're thinking of is a messaging system a la sockets - that is, the message would be placed onto a queue, a process switch would happen at some point, and the server on the other end would read messages out of the queue and do something. That's how microkernels are usually presented conceptually, so it tends to get stuck in people's heads.
However, messages can be implemented in other ways. For example, you could make a message be more like a procedure call - you create a new stack, swap your address table around, and then jump into the function in the "server". No need to instantiate threads in the "server" anymore than there is a need to instantiate threads within a monolithic kernel. The server would essentially share the thread of the caller. I've worked on microkernel architectures that were implemented just this way.
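A rough sketch of the two message styles (Python, purely illustrative - the real migrating-thread trick also swaps address tables and creates a fresh stack, which isn't modeled here):

```python
import queue

# (a) Queued messaging: the request parks in a queue and a
# separate server thread picks it up later - the classic
# microkernel picture, with a scheduler hop in the middle.
requests = queue.Queue()

def queued_send(msg):
    reply = queue.Queue(maxsize=1)
    requests.put((msg, reply))
    return reply.get()  # caller blocks until the server runs

def server_loop():
    while True:
        item = requests.get()
        if item is None:
            return  # shutdown sentinel
        msg, reply = item
        reply.put(("ok", msg))

# (b) Migrating-thread style: a "message" is just a protected
# procedure call through a dispatch table. The caller's own
# thread executes the server code - no server thread, no queue,
# no scheduler in between.
handlers = {"fs": lambda msg: ("ok", msg)}

def call_send(port, msg):
    return handlers[port](msg)
```

Same request, same reply - but (b) needs no thread instantiated in the "server" at all, which is the point being made.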
If the number of data structures that you can directly access is smaller, the amount of locking that you have to take into account is smaller. Modularity and protection makes most people's tasks easier.
Many of the arguments made for monolithic kernels are similar to the arguments you used to hear from Mac programmers who didn't want to admit that protected memory and multi-tasking were good things. Mac programmers liked to (as I used to say) "look in each other's underwear". Programs rummaged about through system data structures and sometimes other apps' data structures, changing things where they felt like it. This can be pretty fun sometimes and you can do some really spiffy things. However, set one byte the wrong way and the whole system comes crashing down.
Re:Linus Quote - "not arguing against it at all" (Score:4, Insightful)
Well maybe that's how *you* would design *your* Microkernel. And yes, it would suck.
The way I would design the filesystem driver, would be to accept a request, add it to a queue of pending requests to serve. If there are no initiated requests, find the request that can most efficiently be served based upon your preferred policy (closest seek time, for example, or first come first serve, your choice), and initiate that request. Add some smarts for multiple devices, so multiple requests can be initiated at the same time to different devices. When data comes back, answer the requesting process with their data. Rather than sitting around blocking on a request, go grab more requests from other processes and queue them up. No need to block. When an initiated request comes back, send back the data to the requesting process, and everyone's happy. Just because things are separated out into different processes, doesn't mean that they can't do some asynchronous juggling to be efficient. Add multi-threading, and the coding becomes a bit easier; but multi-threading isn't necessary to rely upon to have this work well.
I'm pretty sure the monolithic kernels do things somewhat similarly; build a request queue, service that queue. They could also block until they're done other requests, but that would be bad design. Don't assume a Microkernel Filesystem server has to suffer from similarly bad design.
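A toy sketch of such a non-blocking server (Python; shortest-seek-first is picked arbitrarily as the policy, and every name here is made up):

```python
class FsServer:
    """Toy single-threaded filesystem server. It never blocks on
    a request: submit() just queues it, and initiate_next() picks
    the next request by a policy (shortest seek first here)."""

    def __init__(self):
        self.pending = []  # queued requests: (block, reply_to)
        self.head = 0      # current disk head position

    def submit(self, block, reply_to):
        # Accept and queue; the caller gets its answer later,
        # when the device completes - no blocking in between.
        self.pending.append((block, reply_to))

    def initiate_next(self):
        if not self.pending:
            return None
        # Policy: closest seek. First-come-first-served would
        # just pop index 0 instead.
        req = min(self.pending, key=lambda r: abs(r[0] - self.head))
        self.pending.remove(req)
        self.head = req[0]
        return req
```

With the head at block 0 and requests for blocks 90, 10, and 95 queued, the server initiates 10 first, then 90, then 95 - juggling requests asynchronously instead of stalling on any single one.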
Continuation Passing Style (Score:3, Insightful)
There are some ways [haskell.org] to convert normal function calls to CPS.
And there is something called monads [wikipedia.org] used to convert imperative algorithms to functional style.
And yes, continuations [mit.edu] can be a very powerful technique.
However, CPS functional code is still coding an algorithm. Any way to compute something is an algorithm. Maybe you should rename your critique "I dislike imperative algorithms, and I like CPS functional algorithms."
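For anyone unfamiliar with CPS, here's a minimal sketch of the conversion (Python rather than Haskell, but it's the same transformation):

```python
# Direct style:
def add(a, b):
    return a + b

# Continuation-passing style: instead of returning, a function
# takes an extra argument `k` - the continuation, i.e. "what to
# do with the result" reified as an ordinary value.
def add_cps(a, b, k):
    return k(a + b)

def square_cps(x, k):
    return k(x * x)

# (a + b)^2 in CPS: the nesting of continuations spells out the
# control flow explicitly instead of leaving it implicit.
def add_then_square(a, b, k):
    return add_cps(a, b, lambda s: square_cps(s, k))
```

Calling add_then_square(2, 3, identity) yields 25 - the same algorithm, just with control flow made into data, which is why it's still "an algorithm" either way.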
Distributed not that hard. (Score:4, Interesting)
I think you're looking at this the wrong way around.
There has been a lot of research into this over the past 40 years, ever since Dijkstra first talked about coordination on a really big scale in the THE operating system. Any decent CS program has a class on distributed programming. Any decent SW architect can break down these different parts of the OS into weakly-connected pieces that communicate via a message passing interface (check out this comment [slashdot.org] by a guy talking about how Dragonfly BSD does this).
It's obvious that breaking something like your process dispatcher into a set of processes or threads is silly, but that can be easily separated from the core context switcher. Most device driver bottom halves live fine as a userland process (each with a message-passing interface to their top-halves).
If you're compiling for an embedded system, I'm sure you could even entirely remove the interface via some #define magic; only debug designs could actually have things in separate address spaces.
The point I'm trying to make is: yes, you can access these fancy data structures inside the same address space, but you still have to serialize the access, otherwise your kernel could get into a strange state. If you mapped out the state diagram of your kernel, you'd want the transitions to be explicit and synchronized.
Once you introduce the abstraction that does this, how much harder is it to make that work between processes as well as between threads in the kernel? How much of a benefit do you gain by not having random poorly-written chunks pissing over memory?
How about security benefits from state-machine breakdowns being controlled and sectioned off from the rest of the machine? A buffer overflow is just a clever way of breaking a state diagram and adding your own state where you have control over the IP; by being in a separate address space, that poorly written module can't interact with the rest of the system to give elevated privileges for the attacker (unless, of course, they find flaws in more of the state machines and can chain them all together, which is highly unlikely!).
Clearly there is a security benefit as much as there is a consistency benefit. Provably correct systems will always be better.
Re:Distributed not that hard. (Score:5, Interesting)
Well, I could certainly argue THAT one.
Years ago, I was a lead analyst on an IV&V for the shutdown system for a nuclear reactor - specifically, Darlington II in Ontario, Canada.
This was the first time Ontario Hydro wanted to use a computer system for shutdown, instead of the old sensor-relay thingie. This made AECB (Atomic Energy Control Board) rather nervous, as you can understand, so they mandated the IV&V.
I forget his first name - but Parnas from Queen's University in Kingston had developed a calculus to prove the correctness of a programme. It was succinct, it was precise, it was elegant, and it worked wonderfully.
ummmmm
OK, so we prove that the programme is correct, and it'll do what it's supposed to do
You see, everybody had kinda/sorta forgot that this particular programme not only had to be correct, but it had to tell you that the reactor was gonna melt down BEFORE it did, not a week afterwards.
The point is that there is often much more involved in whether or not a programme (or operating system) is useful than its "correctness".
Re:Distributed not that hard. (Score:3, Insightful)
Sort of. In the scenario you describe, the program was, in fact, not proved to be correct, because the people who did the proof failed to take into account the real requirements for the system.
If you don't even know your requirements, no methodology to implement those requirements is going to work reliably.
Re:Distributed not that hard. (Score:4, Informative)
If you don't even know your requirements, no methodology to implement those requirements is going to work reliably.
I agree absolutely.
The point I was trying to make - and this is where I see the parallel - is that Tanenbaum seems to be trying to say "microkernel good, monolithic bad" based only on elegant design and theoretical simplicity. No doubt it appeals to the academic in him (go figure).
But he is ignoring the "time domain" of an operating system, if you will - its practicality, its ability to do useful work in a reasonable period, and its usability in the real world - just as Parnas' notation did.
I make no claim as to whether or not Parnas *intended* for his notation to be used for a hard real-time system - I know he was retained as a consultant on the project, but I personally neither saw nor heard of/from him during the entire time. And let me be perfectly clear - his notation was absolutely *gorgeous* and extremely useful. I, on the other hand, having been the idiot who raised the point in the first place, wound up having to do the timing analysis based on best/worst case sensor timings, and instruction-by-instruction counting of the clock cycles required for each ISR, etc. I plugged the numbers into a programme I wrote for the purpose, and basically did nothing more than an exhaustive analysis of all possible combinations of timings. Not as elegant by far, but what the hell. Who cares if it takes two days to run, if it means you don't have to worry about glowing in the dark?
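That brute-force approach can be sketched in a few lines (Python, with hypothetical millisecond figures - the real analysis counted actual clock cycles per ISR):

```python
from itertools import product

def worst_case_response(stage_timings):
    """Exhaustively enumerate every combination of per-stage
    (best, worst) timings and return the worst total response
    time - brute force, but complete."""
    return max(sum(combo) for combo in product(*stage_timings))

# Hypothetical milliseconds for sensor read, ISR, decision logic:
stages = [(1, 4), (2, 10), (5, 20)]
# worst_case_response(stages) == 34; compare that to the deadline.
```

Exponential in the number of stages, so it can indeed take days - but it answers the one question the correctness proof never asked: does the shutdown signal arrive before the deadline?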
Re:Distributed not that hard. (Score:4, Interesting)
Case in point - again, from the same IV&V.
The boys at Hydro had no idea wtf they were doing. They weren't incompetent by a long shot - they were very good, very bright, and very conscientious. But their background was basically analog design (all electrical engineers), and they weren't overly familiar with software - and it showed.
When I was going over the spec and their timing measurements (the requirements were stated in the form of "maximum time from A to B shall be XXX milliseconds max" etc) on my initial perusal, I came across the statement that one particular sensor was required to react "in a reasonable period of time".
I nearly shit my pants. We're talking about a reactor shutdown system here
Re:Distributed not that hard. (Score:3, Interesting)
So correct systems will always be better, because you know it is correct and you know the limits (want it to run faster -- just buy faster h
Re:Distributed not that hard. (Score:4, Insightful)
You're right, it *doesn't* say anything about the code efficiency, or the runtime per se
The microkernel may be more elegant, more pristine in the lab
I'm sorry, I'm with Linus on this one.
Provably correct doesn't mean "good"
Also, there is nothing about a microkernel that makes it more inherently provably correct than a monolithic kernel. Even going back to Parnas' notation that we used years ago, and thinking about the structure of the Linux kernel, it would be pretty easy to go through the exercise and prove it correct/incorrect
Re:Distributed not that hard. (Score:5, Interesting)
Today most of the software that is used to fly planes (both fighter jets and passenger aircraft) is based on a microkernel architecture. So microkernels are not just lab toys; real and mission-critical systems are run by microkernel architectures.
The speed problem can often be solved just by getting faster hardware. The main reason Linus rejected microkernels back in the day was that the cost of context switches was prohibitive. Today hardware is a lot faster (roughly Moore's law), so context switches are fine on a 3GHz Pentium IV machine while they would not have been doable on a 33MHz machine.
Also, there is nothing about a microkernel that makes it more inherently provably correct than a monolithic kernel.
Theoretically you are right. But in practice Linux 2.6 is 6 million lines of code and a typical microkernel is less than 10k. It can already take up to a year to check the correctness of an 8k-line microkernel, and there will be an exponential demand for resources as the code size increases. So in reality it will not be possible to check the Linux kernel for correctness.
Re:Distributed not that hard. (Score:5, Insightful)
Umm, doesn't that mean that while you've proved the 10k microkernel lines correct, you'd still have ~6 million lines of code sitting outside the microkernel waiting to be proved? I can't see how a microkernel can magically do with 10k lines everything Linux is doing with 6 million (especially as, by the definition of a microkernel, there's no way it could).
Re:Distributed not that hard. (Score:3, Informative)
And where did I say that microkernels were unusable? I've personally used QNX and VRTX myself. For small, simple (for the OS) systems, they're beautiful. As a general-purpose computing platform, they tend not to be.
The speed problem can often be solved just by gettin
Re:Linus Quote - "not arguing against it at all" (Score:3, Insightful)
Here, it seems, the means justify the ends. Linus basically says "I won't take any challenges".
Linus tells me that we can never write a proper scalable OS for a NUMA machine, or a modular system that can serve well to parallel I/O systems and the like. I highly disagree.
Because these things are not pipe dreams, they have been done. IBM guys have made amazingly abstract and modular OS stuff and they've been using the
Re:Linus Quote - "not arguing against it at all" (Score:3, Interesting)
I know that as good and faithful /.-ers we should worship Linus and take all of his words as gospel, but in this case I think he is talking out of his arse. Microkernels are "more secure" and "more stable" because only one component needs to work well - the microkernel; its main job is to enforce sec
Code talks (Score:5, Insightful)
Linus is a pragmatist. He didn't write Linux for academic purposes. He wanted it to work.
But you can always prove him wrong by showing him the code, and I bet he'd be glad to accept he was wrong.
Re:Code talks (Score:3, Informative)
1. Windows has some essential system services running in user space, such as the Win32 environment (csrss.exe). But if it dies, you are pretty much hosed anyway. Running stuff in user space doesn't necessarily make the system more stable in any meaningful way. Windows even had GDI in user space before, and later moved it into the kernel for performance reasons; GDI in user space didn't provide more stability.
2. Linux kernel 2.6 now has support for user space filesystems
Re:Code talks (Score:3, Insightful)
Windows even had GDI in user space before, and later moved into the kernel for performance reasons, and GDI in user space didn't provide more stability
You have to be joking. It was a massive step backwards in stability. NT 3.51 was rock solid, NT 4 was far more flaky.
Not only that, but with GDI in kernel all sorts of resource limits came in that just weren't there before. Writing heavy graphics was much more of a pain - no matter how careful you were with GDI, other programs could consume limited kernel
Re:Code talks (Score:4, Informative)
Get your facts straight.
Every popular Operating System developed in the past 15 years (and then some) apart from Linux has been either a microkernel [wikipedia.org] or a hybrid kernel [wikipedia.org].
Mach, upon which Darwin and OS X are based, is a microkernel. OS X and Darwin borrow some monolithic-esque features, but not quite enough to make them hybrids, it would seem...
Windows NT, NetWare, ReactOS and BeOS are all Hybrid kernels. This model seems to be the most popular right now, and seems to be a reasonable compromise...
The only ones left are the old big-iron Unices, Solaris, MS-DOS, and Linux. In other words, Linux is the only major player left using a monolithic kernel. I don't know enough about computer science to properly make an argument one way or another, but it would seem that monolithic kernels have heavily fallen out of favor in the past 15 years.
That said, perhaps a monolithic kernel is better suited to the open-source development process, which would seem counterintuitive at first because it discourages modularization, but who knows.... it could very well be true. I don't know enough to comment.
Re:Code talks (Score:5, Interesting)
Windows NT is monolithic. So is OS X. Anyone who claims either to be a microkernel, please show me proof other than "it is based on Mach".
Re:Code talks (Score:5, Insightful)
One of the biggest problems I continually have with technical people (whether that's computer techs or engineers) is that they tend to overemphasize the syntax and semantics of what people say. They tend to latch on to a specific phrase and then rip it apart rather than taking the meaning of the whole (which is the important part) and finding problems in the whole. Most particularly, they tend to find it incomprehensible that a single phrase might have multiple meanings.
Part of this is doubtlessly due to exposure to highly precise technical jargon, but it is inappropriate to apply the strictness of meaning inherent to, say, Python, to everyday language. Even in a technical debate.
A hybrid kernel, in simplest terms, is a kernel that combines two other discrete kernel types. Plain English tells you that. It makes no sense to try to wrestle with whether WinNT is a monolithic or microkernel. It's a semantic debate that serves only to label the object, and it doesn't describe it or aid in understanding it. If you say WinNT is a microkernel, you then have to ignore the non-essential code obviously running in kernel mode, and that doesn't help understanding. If you say WinNT is a monolithic kernel, you have to ignore the userland processes that are really system services. Again, that's no aid to understanding.
Stop complaining about the language and forcing labels on things. Labeling is not understanding.
Re:Code talks (Score:2)
Linux has a userspace filesystem. Does that make it a Hybrid too?
hehe
What you may be experiencing right now is popularly known as Cognitive Dissonance [wikipedia.org], or the difference between marketing and reality.
Hybrid kernels??? (Score:4, Informative)
MacOS X is no microkernel system. It does have Mach, sure. Mach is arguably not a microkernel by today's standards, and in any case MacOS X has a full BSD kernel bolted onto the Mach kernel. Mach and BSD share address space. In other words, it's not a microkernel.
NT is the same way.
I don't know all that much about NetWare, but I'd never before heard anyone claim it to be a microkernel. It's not terribly popular anyway. (It was, but back then I'm sure it wasn't a microkernel system.) ReactOS isn't much yet. BeOS died for unrelated reasons, so we really can't judge.
Monolithic kernels can be very modular. Microkernels can get really convoluted as the developers struggle with the stupid restrictions.
Re:Hybrid kernels??? (Score:5, Interesting)
True - the kernel of MacOS X (Darwin, aka XNU), for performance reasons, runs both the Mach and BSD layers in kernel space to minimize latency.
Maybe this is what you call a hybrid kernel: http://en.wikipedia.org/wiki/Hybrid_kernel [wikipedia.org]
You may call XNU whatever you wish but the fact remains:
- it's not a monolithic kernel by design
- it has Mach in it, and Mach is some sort of microkernel. Maybe it does not meet "today's" standards for being called a microkernel, but it was a very popular microkernel before.
So maybe the things running on top of Mach ( http://developer.apple.com/documentation/Darwin/Conceptual/KernelProgramming/index.html [apple.com] ) are conceptually "different" from what the services of a microkernel should be, and they do indeed share the address space, but this is very, very different from the architecture of a traditional monolithic kernel such as Linux
This guy ( http://sekhon.berkeley.edu/macosx/intel.html [berkeley.edu] ) recently tested some stats software on his Mac running OS X and Linux, and found out that indeed MacOS X had performance issues, very likely due to the architecture of the kernel.
There's even a rumor that says that since Avie Tevanian left Apple ( http://www.neoseeker.com/news/story/5553/ [neoseeker.com] ), some guys are now working for removing the Mach microkernel and migrate to a full BSD kernel in the next release of the operating system.
And now my personal touch. I agree with Linus when he says that having small components doing simple parts on their own, and putting them together with pipes and so on, is somehow the UNIX way and is attractive (too lazy to find the quote). However, as he demonstrates later, distributed computing is not easy, and there's also the boundary-crossing issue. I guess he has a point when he says this is a problem for performance and for the difficulty of designing the system... So if performance is what you expect from a kernel, then you must stop dreaming of a clean, centralized, good software architecture like those we have for our highly object-oriented software.
But the truth is that, although developing a monolithic kernel is an easier task from scratch than a microkernel, I guess the entry ticket (learning curve) for a monolithic kernel developer is more expensive. The main reason being, "things ARE NOT separated". Anyone, anywhere in the kernel could be modifying the state of that thing, for non-obvious reasons, even if there's a comment that says "please don't do that" or it should not be the case, etc.... Microkernels can obviously provide some kind of protection and introspection for these things, but have always hurt performance to do so.
Now it all comes down to what you expect. Linux has many, many developers and obviously can afford a monolithic design that changes every now and then, and you may prefer a kernel that goes fast over one whose code is clean, well organized, and easy to read. But the corollary of that observation is that, for the same reasons, grep, cat, cut, find, sort, or whatever UNIX tools you use with pipes and redirection are similarly a cleaner but YET INEFFICIENT design. However, it's been proven (with time) to be a good idea.
I think things that are "low level" are bound to have a somewhat spaghetti software architecture, because performance matters and the code is smaller... but the higher level you go, the less performance matters, and the more code maintenance and evolvability matter... Everything is a tradeoff: good design practice depends on the type of problems your software tackles.
That said, it does not mean no progress can be made in kernel developments. Linux already uses a somewhat different C lang
Re:Hybrid kernels??? (Score:4, Informative)
"xnu is not a traditional microkernel as its Mach heritage might imply. Over the years various people have tried methods of speeding up microkernels, including collocation (MkLinux), and optimized messaging mechanisms (L4)[microperf]. Since Mac OS X was not intended to work as a multi-server, and a crash of a BSD server was equivalent to a system crash from a user perspective the advantages of protecting Mach from BSD were negligible. Rather than simple collocation, message passing was short circuited by having BSD directly call Mach functions. While the abstractions are maintained within the kernel at source level, the kernel is in fact monolithic. xnu exports both Mach 3.0 and BSD interfaces for userland applications to use. Use of the Mach interface is discouraged except for IPC, and if it is necessary to use a Mach API it should most likely be used indirectly through a system provided wrapper API."
I was a microkernel developer (Score:4, Informative)
The learning curve was very steep. New developers took at least half a year to be productive. A number of people never became productive and had to be fired.
Linux is really clean and tidy compared to that. Even BSD is clean and tidy compared to that microkernel OS.
Separated components tend to get complex interactions. Sharing data can be very awkward, even if you are co-located.
Comment removed (Score:5, Interesting)
Re:Code talks (Score:2)
Get your facts straight.
Every popular Operating System developed in the past 15 years (and then some) apart from Linux has been either a microkernel or a hybrid kernel.
Exactly, Linus doesn't know what he's talking about. He should know better than to be posing as some kind of credible source. Someone should write him an e-mail and let him know that he should have deferred to someone more knowledgeable.
it could very well be true. I don't know enough to comment.
Sure enough.
Re:Code talks (Score:5, Insightful)
Small, fast, real-time. http://en.wikipedia.org/wiki/QNX [wikipedia.org]
Re:Code talks (Score:4, Insightful)
The whole discussion of Windows vs anything else is totally pointless. All popular OSes are Windows.
Linus is a pragmatist. He didn't write Linux for academic purpose. He wanted it to work.
That's true, and that's a good point. However, it's much easier to start a project if you already have some good people, even if the code is entirely from scratch. Therefore, making the point in a place like the kernel development lists is a good idea, because that's a good place to recruit people.
Certainly in the case of OSes, there really isn't much of an opportunity for something like Linux to emerge from one person's efforts. As far as I can tell, Linux originally worked because enough people were interested in helping him early on in a hobby, doing things like sending him an actual copy of the POSIX specs, and he was mostly able to get it to where it actually duplicated Minix's functionality, and exceeded it in some cases.
In fact, there was such a shortage of good OSes for this machine that really, Linux succeeded because it wasn't Minix and wasn't DOS. In fact, one has to wonder -- could an open Minix have done what Linux did? It's possible that, given all the programmers who eventually decided to work on Linux, the problems with microkernels could've been solved. Similarly, if all the programmers working on Linux suddenly decided to do a microkernel, it would succeed and it would replace Linux.
But, that isn't going to happen. Not all at once, and probably not ever, unless it can be done incrementally.
Re:Code talks -- but some code flies! (Score:2)
Most of the critical RT operating systems will have some kind of a ukernel architecture. velOSity is one such ukernel; it is used by the INTEGRITY OS made by Green Hills. The next time you fly on a plane, it will probably be a ukernel that makes sure you land safely, not a monolithic bl
Windows is monolithic (Score:3, Informative)
Windows and Linux are not THAT different as far as kernel architecture is concerned.
Re:Windows is monolithic (Score:3, Funny)
Re: (Score:2)
Re:Windows is monolithic (Score:5, Funny)
Re:Windows is monolithic (Score:2)
That's funny. Microsoft has one of its layers labeled "Microkernel." I guess it's a hybrid OS after all.
Re:Windows is monolithic (Score:2)
Re:Windows is monolithic (Score:3, Informative)
http://www.microsoft.com/resources/documentation/windowsnt/4/workstation/reskit/en-us/archi.mspx?mfr=true [microsoft.com]
Also, Microsoft admits that microkernels are more dependable and show their research here:
http://research.microsoft.com/research/pubs/view.aspx?type=technical+report&id=989 [microsoft.com]
If you decide to look at the PDF, you can go straight to page 7 for the kernel architecture.
Re:Windows is monolithic (Score:5, Informative)
1. From AST (I'd assume you know who he is since you are interested in Linus/microkernel debate): http://www.cs.vu.nl/~ast/brown/followup/ [cs.vu.nl] Read the section "Microkernels Revisited":
I can't resist saying a few words about microkernels. A microkernel is a very small kernel. If the file system runs inside the kernel, it is NOT a microkernel. The microkernel should handle low-level process management, scheduling, interprocess communication, interrupt handling, and the basics of memory management and little else. ... Microsoft claimed that Windows NT 3.51 was a microkernel. It wasn't. It wasn't even close. Even they dropped the claim with NT 4.0.
2. From Windows Internals, the 4th edition, published by Microsoft Press. Page 36: Windows is similar to most Unix systems in that it's a monolithic operating system in the sense that the bulk of the operating system and device driver code shares the same kernel-mode protected memory space. Can we stop claiming Windows has a microkernel now?
Re:Say what?! (Score:2)
It's not really a microkernel. (Score:4, Insightful)
MacOS has that sharing address space with the monolithic BSD kernel. So a semi-microkernel and a monolithic kernel are firmly bolted together. That's only a microkernel if your degree is in marketing.
Mirror (Score:4, Informative)
Now that would be nice (Score:3, Funny)
Any chance we could do this with my long distance phone service?
Re:Now that would be nice (Score:2)
slashdotted, pastebin copy of interview (Score:5, Informative)
comments i liked (Score:3, Insightful)
A lot of work (Score:2)
PS, besides porting all parts of the kernel you first need to redesign the kernel so it can cope with the micro kernel idea (and structural limitations).
As soon as GNU Hurd is mature we'll have a drop-in replacement (right?).
already been done (Score:3, Informative)
Microkernels and the future of hardware (Score:4, Insightful)
By 2010 I suspect at least desktops will be 4-CPU systems, and as the number of cores increases, one of the large drawbacks of microkernels rears its ugly head: microkernels turn simple locking algorithms into distributed-computing-style algorithms.
Every game developer tells us how difficult it is to write multi-threaded code even for our monolithic operating systems (Windows, Linux, OS X). In a microkernel you constantly have to worry about how to share data with other threads, as you can't trust them to even give you correct pointers! If you were to explicitly trust them, then a single failure in any driver or module would bring down the whole system - just like in monolithic kernels, but with a performance penalty that scales nicely with the number of cores. What's even worse is that in a multi-core environment you'll have to be very, very careful when designing and implementing the distribution algorithms, or a simple user-space program could easily crash the system or gain superuser privileges.
Re:Microkernels and the future of hardware (Score:4, Interesting)
Don't take C's poor support for threading and tools to build/debug threaded code to mean that writing threaded code isn't possible. Other platforms and languages have taken threads to great extremes for many years, and I'm not necessarily referring to anything Unix (or from Sun).
This reminds me of the story (but I don't know how true it is) that in the early days of Fortran, the quicksort algorithm was widely understood but considered to be too complicated to implement. Now 2nd year computer science students implement it as a homework project. Threads could be considered similar. Anyone who has written a servlet is implicitly writing multithreaded code and you can very easily/quickly write reliable and safe threaded code in a number of modern languages without having to get into the details C forces you into. It's the mix of pass-by-reference and pass-by-value with only a bit of syntactical sugar that creates the problems, not the concepts of parallelism.
On the other hand, I agree with you that we'll see increased parallelism driving increases in computing capabilities in the coming years. It was mathematically proven some time ago, but Amdahl's law is now officially giving way to Gustafson's law [wikipedia.org] (more on John Gustafson here [sun.com]). Since software systems are sufficiently complex these days (even the simplest of modern programs can make use of parallelism - just think of anything that touches a network), it's those platforms that exploit this feature which stand to deliver the best benefits to their users.
Re:Microkernels and the future of hardware (Score:3, Insightful)
The early days of Fortran were before the '70s. Given the extremely tight RAM constraints, you'd probably have to implement a non-recursive iterative form, which is FAR more complex. And this is Fortran we're talking about, not known for being the cutest language out there - and if we're referring to pre-Fortran 66, then your only branching construct is the three-way arithmetic IF statement. Now given that, and considering your only method of debugging is taking a heap dump and looki
Linus not that far for true... (Score:3, Interesting)
I worked for two years at a company that was developing its own microkernel system for embedded targets. I was involved in system programming and the adaptation of the whole compiler toolchain, based on the GCC chain.
Linus is right: the basic problem is address space sharing, and if you want to implement memory protection, you rapidly fall into the address space fragmentation problem.
The main advantage of the system I worked on wasn't really its microkernel architecture, but the fact that its design allowed most of the glue code needed between a C++ program and a more classic system to be eliminated.
In my opinion, microkernel architecture has the same advantages and drawbacks as the so-called "object-oriented" programming scheme: it is somewhat intellectually seductive in presentations, but it is just a tool.
It would certainly be interesting for Linux to provide the dynamic link management features of a microkernel system, for instance to allow someone to quickly modify the IP stack for their own purposes, but should the whole system be designed that way? I am not sure.
If you want to have an idea of the problems encountered with programming for these systems, one can look at the history of AmigaOS, which has a design very close to a microkernel one.
This really wouldn't be an argument (Score:2, Funny)
If the Linux kernel had been coded in Forth.
Just saying.
Re:This really wouldn't be an argument (Score:2)
in other news (Score:4, Insightful)
Re:in other news (Score:3, Interesting)
We are currently doing some work on the sys_open function in the Linux kernel. When we screw it up, it takes so long for a machine to boot that it looks like it has frozen. This is because the loader often tries to find the correct location for a library via brute force: "OK, I'll try
Entire comment (Score:5, Insightful)
Name: Linus Torvalds (torvalds AT osdl.org) 5/9/06
___________________
_Arthur (Arthur_ AT sympatico.ca) on 5/9/06 wrote:
I found that distinction between microkernels and "monolithic" kernels useful: With microkernels, when you call a system service, a "message" is generated to be handled by the kernel *task*, to be dispatched to the proper handler (task). There is likely to be at least 2 levels of task-switching (and ring-level switching) in a microkernel call.
___________________
I don't think you should focus on implementation details.
For example, the task-switching could be basically hidden by hardware, and a "ukernel task switch" is not necessarily the same as a traditional task switch, because you may have things - hardware or software conventions - that basically might turn it into something that acts more like a normal subroutine call.
To make a stupid analogy: a function call is certainly "more expensive" than a straight jump (because the function call implies the setup for returning, and the return itself). But you can optimize certain function calls into plain jumps - and it's such a common optimization that it has a name of its own ("tailcall conversion").
In a similar manner, those task switches for the system call have very specific semantics, so it's possible to do them as less than "real" task-switches.
So I wouldn't focus on them, since they aren't necessarily even the biggest performance problem of an ukernel.
The real issue, and it's really fundamental, is the issue of sharing address spaces. Nothing else really matters. Everything else ends up flowing from that fundamental question: do you share the address space with the caller, or put in slightly different terms: can the callee look at and change the callers state as if it were its own (and the other way around)?
Even for a monolithic kernel, the answer is a very emphatic no when you cross from user space into kernel space. Obviously the user space program cannot change kernel state, but it is equally true that the kernel cannot just consider user space to be equivalent to its own data structures (it might use the exact same physical instructions, but it cannot trust the user pointers, which means that in practice, they are totally different things from kernel pointers).
That's another example of where "implementation" doesn't much matter, this time in the reverse sense. When a kernel accesses user space, the actual implementation of that - depending on hw concepts and implementation - may be exactly the same as when it accesses its own data structures: a normal "load" or "store". But despite that identical low-level implementation, there are high-level issues that radically differ.
And that separation of "access space" is a really big deal. I say "access space", because it really is something conceptually different from "address space". The two parts may even "share" the address space (in a monolithic kernel they normally do), and that has huge advantages (no TLB issues etc), but there are issues that means that you end up having protection differences or simply semantic differences between the accesses.
(Where one common example of "semantic" difference might be that one "access space" might take a page fault, while another one is guaranteed to be pinned down - this has some really huge issues for locking around the access, and for dead-lock avoidance etc etc).
So in a traditional kernel, you usually would share the address space, but you'd have protection issues and some semantic differences that mean that the kernel and user space can't access each other freely. And that makes for some really big issues, but a traditional kernel very much tries to minimize them. And most importantly, a traditional kernel shares the access space across all the basic system calls, so that user/kernel difference is the only access space boundary.
Now, the real problem with split acce
Re:Entire comment (Score:5, Interesting)
Linux, despite being monolithic, has nice layers inside the kernel and clean interfaces too.
I think you missed Linus' point which I agreed with as well. The real thing you want out of a micro-kernel is memory protection between components of the kernel. The rest is just window dressing.
Linux does *not* have that.
Don't confuse run-time separation with interface separation. The latter is a language feature, not a system feature - you could still have a wild pointer and modify private members of any classes directly.
Let's take a look at OOP and *what* your address spaces are doing for you. Now, in a language like C++, the internal structures of an object are only partially protected. As you say, you can go ahead and cast a pointer to an object to a char * and do anything you feel like to it. The memory protection between objects is not enforced fully.
Now, if you look at Java or C#, the runtime is a virtual processor and it keeps you from violating the rules that an object defines on its data structure. The memory protection is *very* fine grained as it is on the field level rather than on the page level. You cannot (repeat cannot) go modifying the internal structures of objects if they are not marked as being accessible to you.
Having spent 10 years as a kernel developer on a day-in, day-out basis, my frame of mind when I stopped doing OS development for a living was very C based. Since then I've spent a lot of time doing OO development and I think that I've broadened my horizons a bit.
When you look at the way micro-kernels are usually conceptually designed, it's from a C/Unix mind set. Separation is done on a "server" basis and the servers export API's. As you try to add more functionality to the server its API starts getting bigger and bigger and uglier and uglier. For example, you might have a file system server. Locking a file would mean adding a call to the API to lock a file. If you try to make something like a "buffer cache server" which all of the file systems could share it's going to have a nasty API and be slow to boot or it won't be able to enforce memory protection well because the conceptual memory protection is being done on a process level.
When you look at this from an OO perspective, what you see is that the objects being dealt with are "servers" and they are too large. They need to be decomposed into their functional pieces and additional objects exposed. A "buffer cache server" would hand out "buffer objects" which had a memory protection level, locks, etc. built in.
Building a kernel that runs inside of some protected runtime environment similar to the JVM would enable you to do this. If it were popular enough, the features needed to make it really fast would get moved down into the hardware. As it is, I think that the speed of the kernel is kind of a red herring. In general the kernel needs to do fast I/O and fast switching between user tasks, and for any other functions the speed probably doesn't matter much. When I was doing kernel development on supercomputers, most supercomputer kernels were single-threaded, even though the machines were multi-processor, and things still ran pretty quickly. That's because most supercomputer apps spent very little time in the kernel. I believe this is true for most desktop apps as well. Business and "server" apps tend to spend more time in the kernel, but mostly because they are doing lots of small I/Os.
Unfortunately there's not a lot of room for innovation in the OS arena so we may never see what could be done. That's one of the reasons why I got out of OS development.
Tanenbaum on microkernels (Score:5, Informative)
This does not mean you have to agree with the guy.
http://www.computer.org/portal/site/computer/menu
http://vig.prenhall.com/catalog/academic/product/
I would love to hear about an example (Score:2)
I'd love to hear some examples. It would be nice to see an example where something that's easy in a monolithic kernel is difficult in a microkernel. See, I imagine this bunch of distinct services all happily calling each other for what they need; I'd like to know more about how the need for complex distributed algorithms arises.
Linus vs. Tanenbaum - "Linux is obsolete", Jan1992 (Score:3, Informative)
Linus vs. Tanenbaum - "Linux is obsolete" Jan,1992 [fluidsignal.com]
(Save your mod points for someone who really needs them. Thanks!)
The real point: keep OS designers honest (Score:5, Insightful)
Andy likes microkernels because they force you to do that. Time spent on design leads to insight, which may well point to better and cleaner ways to do the task you originally set out to accomplish.
Linus hates microkernels because they force you to do that. Time spent on design is time lost getting working code out the door, and working code will give you experience that will point to better and cleaner ways to do the task you originally set out to accomplish.
Re:The real point: keep OS designers honest (Score:3, Insightful)
And working code will, in most cases, prevent you from ever revisiting the design.
The problem of sharing must be solved at CPU level (Score:5, Interesting)
One solution to the problem is to use memory maps. Right now each process has its own address space, and that creates lots of problems. It would be much better if each module had its own memory map, à la virtual memory, so that the degree of sharing was defined by the O/S. Two modules could then see each other as if they belonged to the same address space, but other modules would be inaccessible. In other words, each module should have its own unique view of memory.
Of course the above is hard to implement, so there is another solution: the ring protection scheme of 80x86 should move down to paging level. Each page shall have its own ring number for read, write, and execute access. Code in page A could access code/data in page B only if the ring number of A is less than or equal to the ring number of B. That's a very easy to implement solution that would greatly enhance modularity of operating systems.
A third solution is to provide implicit segmentation. Right now 80x86 has an explicit segmentation model that forces inter-segment addresses to be 48 bits wide on 32-bit machines (32 bits for the target address and 16 bits for the segment id). The implicit segmentation model is to use a 32-bit flat addressing mode but load the segment from a table indexed by the destination address, as it is done with virtual memory. Each segment shall have a base address and a limit, as it is right now. If a 32-bit address falls within the current segment, then the instruction is executed, otherwise a new segment is loaded from the address and a security check is performed. This is also a very easy to implement solution that would provide better modularization of code without the problems associated with monolithic kernels.
There are various technical solutions that can be supported at CPU level that are not very complex and do not impose a big performance hit. These solutions must be adopted by CPU manufacturers if software is to be improved.
Monoliths are eminently marketable (Score:3, Funny)
it's just not that complicated (Score:3, Interesting)
C-based monolithic kernels like Linux and UNIX run into software engineering problems--it gets harder and harder to ensure stability and robustness as the code mushrooms because there is no fault isolation.
The solution? Simple: get the best of both worlds through language-supported fault isolation (this can even be a pure compile-time mechanism, with no runtime overhead). It's not rocket science, it's been done many times before. You get all the fault isolation of microkernels and still everything can access anything else when it needs to, as long as the programmer just states what he is doing clearly.
C-based monolithic kernels were a detour caused by the UNIX operating system, an accident of history. UNIX has contributed enormously to information technology, but its choice of C as the programming language has been more a curse than a blessing.
The Thing Is (Score:3, Informative)
Monolithic, Linux/Netware-style modular and so-called hybrid kernels get around this limitation by moving things to the other side of the bottleneck. It makes sense on this basis to put a hardware driver in kernel space. You usually only pass "idealised" data to a driver; the driver generally has to pass a lot more to the device because it isn't ideal. For example, when talking to a filesystem driver, you generally only want to send it the data to stick into some file. The filesystem driver has to do all the donkey work of shunting the heads back and forth and waiting for the right spot of disc to pass under them.
It might be "beautiful" to have as little code as possible situated on one side of the division, but it's most practical to have as little data as possible having to travel through the division.
Loaded terms in this debate. (Score:3, Interesting)
Whenever this issue comes up, I swear to myself that proponents of microkernel architectures created the term which they use to address their opponent. The terms used to discuss this are heavily loaded. “Microkernel” sounds lean, quick, and simple, while by subjective contrast “monolithic” sounds bulky, old, and unwieldy. I think that when engaging in this debate, it is best that we prefer to at least use “unified kernel” in place of “monolithic”, being it is more accurate and contrasts with “microkernel” objectively. The term most people use for kernels like Linux and NT seems to imply that there is no logical separation of components and that all pieces are somehow a gigantic (dare I say monolithic) glob and that is nonsense.
My take (Score:3, Insightful)
IBM is shipping it. Novell, RedHat, WindRiver, LinuxWorks, Motorola, Sharp, Sony, Hell even I'm shipping it in embedded products. It is easy to "prove it works" as alluded to in another post.
Microkernels are also shipping from QNX and, uh and, oh I'm sure there are a few more. (Not knocking QNX; considered it but tossed it, due to cost and licensing.)
Is one more secure or stable than another, is really the wrong question.
The question is really is the "System" designed with microkernel more or less stable or secure or functional then the alternative.
I think it has, to my satisfaction, been settled. From RevolutionOS the movie (Buy it!), Stallman is asked why HURD is so far behind Linux. His answer (paraphrased, sorry RMS): it turns out a microkernel is very difficult to pull off because of the constant stream of messages required for the simplest of tasks. This forced overhead only makes the kernel more secure, not the system; if the "drivers" keep crashing out and restarting, you could go months without noticing critical flaws. "But the kernel is rock solid" doesn't really help if I can't ship the "System", does it? The only evidence you need is the development pace of Hurd, or even QNX, to show this.
I respect the professor and his work, but it was the inspiration for a much more scalable design that is clearly superior for the rapid development a modern OS is expected to have.
As an engineer I see the beauty, but as a Production Engineer I can also see the added complexity a microkernel brings.
Of course, you could argue theoretically that I'm wrong, or prove it by making a GNU/Minix distribution to compete in the real world with Linux. Almost 15 years and a flood of students haven't helped Professor T produce it yet. Admittedly not his goal, but come on - I know students, and CompSci students have a knack for carrying their favorite teachers/classes with them throughout their careers, and it shows up in their projects.
Re:Obvious (Score:2, Offtopic)
http://en.wikipedia.org/wiki/Linux/ [wikipedia.org]
Re:Obvious (Score:2)
Re:Obvious (Score:5, Insightful)
You are forgiven for being wrong, but not for spouting off nonsense despite knowing that you don't know what you're talking about, apparently applying the principle "if my argument involves M$ doing the wrong thing, it must be right".
While neither NT nor Mac OS X are true microkernels, the architecture of both is strongly inspired by microkernel ideas. Like Linus, the developers of these kernels recognized the practical difficulties involved in making full-on microkernels work, but unlike Linus, instead of throwing in the towel completely and doing full-on monolithic kernels, they created cleanly separated layers interacting via well-defined interfaces whenever they practically could.
If you talk to kernel programmers, most will express a high degree of respect for the NT kernel, which is based on the DEC VMS kernel. It is mostly the poor design of the systems that sit on top of the kernel that has earned Windows its reputation.
Re:Obvious (Score:4, Funny)
What exactly does "inspired" mean in this case? I am "inspired" by John Holmes but that doesn't mean I have a 12" cock does it?
If you talk to kernel programmers, most will express a high degree of respect for the NT kernel, which is based on the DEC VMS kernel. It mostly the poor design of systems that sit on top of the kernel that has earned Windows its reputation.
So, did VMS have a graphics subsystem in the kernel as well? Also can you provide some examples of kernel experts praising the NT kernel for its microkernel properties? Thanks in advance.
Re:Obvious (Score:3, Informative)
Also can you provide some examples of kernel experts praising the NT kernel for its microkernel properties?
W2K does not have a pure microkernel architecture but what Microsoft refers to as a modified microkernel architecture. As with a pure microkernel architecture, W2K is highly modular. Each system function is managed by just one component of the operating system. The rest of the operating system and all applications access that function through the responsible component using a standard interface. Ke
Re:Obvious (Score:3, Informative)
Re:Obvious (Score:3, Insightful)
So, did VMS have a graphics subsystem in the kernel as well?
No. In NT 3.1, the graphics subsystem ran in user-space. In NT 4.0, it was moved into kernel-mode to avoid the performance hit of the context switch. As this history suggests, the actual architectures of the executive and the graphics subsystem are not tightly coupled. They share an address space for performance reasons, not in order to share state. (To be clear, I don't think this is a great thing, and it is a violation of microkernel principa
Java and microkernels (Score:4, Insightful)
That's interesting because those are exactly my thoughts every time I hear the arguments people use to defend microkernels: Java is to microkernels as C/C++ is to monolithic kernels.
Linus Torvalds summed it up well when he mentioned that microkernels are simpler only when data flows in one direction. It's very hard to get a function to fill a complicated data structure for you if you cannot work with pointers. Passing a reference will do only for simple structures; it will not work if there are structures within structures, and it is very hard to do if the called function must itself pass some subset of that structure to another function. And for operating systems, where one must contend with multiple access and locking, it's almost impossible to do without a performance penalty.
Let's face it: pointer manipulation is necessary because there are real life problems that are more complex than textbook examples. If there weren't, inventing the C language wouldn't have been necessary, we could have stuck with Fortran all along.
Re:Just do it! (Score:2)