Anatomy of the Linux Kernel

LinucksGirl writes "The Linux kernel is the core of a large and complex operating system, and while it's huge, it is well organized in terms of subsystems and layers. In this article, the reader explores the general structure of the Linux kernel and gets to know its major subsystems and core interfaces. 'When discussing architecture of a large and complex system, you can view the system from many perspectives. One goal of an architectural decomposition is to provide a way to better understand the source, and that's what we'll do here. The Linux kernel implements a number of important architectural attributes. At a high level, and at lower levels, the kernel is layered into a number of distinct subsystems. Linux can also be considered monolithic because it lumps all of the basic services into the kernel. This differs from a microkernel architecture where the kernel provides basic services such as communication, I/O, and memory and process management, and more specific services are plugged in to the microkernel layer.'"
  • Re:2.6.x.x (Score:5, Interesting)

    by 4e617474 ( 945414 ) on Saturday June 09, 2007 @06:04AM (#19449387)
    The major version number in 2.6.x.x is 2. Six is the minor version number, for which the term "series" is frequently used. The third number is called the "release" number, and the fourth is called "trivial" (although sometimes the difference is "the X server doesn't crash every ten seconds any more," and I don't personally consider that trivial). They stopped using the minor version/series number to denote stability vs. development in 2.6, and did away with having a stable vs. development branch altogether, but Adrian Bunk has been maintaining 2.6.16.x since 2006 to serve the old role - a stable feature set with the relevant bugfixes from newer releases applied.
  • Re:wrong! (Score:3, Interesting)

    by spitzak ( 4019 ) on Saturday June 09, 2007 @07:38AM (#19449653) Homepage
    Unless you're so l33t that you invoke all your system calls with inline assembly?

    Actually, there was a time when read() and write() were usually turned by the compiler into assembler instructions that trapped straight into the kernel. "libc" was considered the place where stdio (fread/fwrite, printf, etc.) lived, not where read() and write() did.
  • by daxomatic ( 772926 ) on Saturday June 09, 2007 @07:48AM (#19449713) Journal
    Here is a great tool to visualize your kernel and then print a poster of it []
    you do need a big printer for it if you want a readable poster though ;-).( and yes you also can stick 16 A4's to a wooden plate but that's just plain silly)
    I does however show you in perfect details all the arch's and sub systems of the kernel.
  • Re:wrong! (Score:1, Interesting)

    by Anonymous Coward on Saturday June 09, 2007 @06:06PM (#19453359)
    And even more actually: many Linux system calls don't look anything like the POSIX-style API exported by glibc. This was a conscious decision. Calls like fork() are now implemented in terms of the Linux system call clone(), rather than using an actual fork() system call (which is still available for ABI compatibility, but that's all it's there for). This is all possible because the implementation of the POSIX API is in glibc, not the kernel.
  • by SanityInAnarchy ( 655584 ) on Saturday June 09, 2007 @06:46PM (#19453651) Journal
    I don't disagree, in theory. In practice, there are at least a few things that microkernels can do that monolithic kernels can't (yet).

    For example, a microkernel can run filesystems and probably even certain kinds of drivers on a per-user basis. Give them access to the chunk of hardware they need, but don't require them to be able to pwn the rest of the system. This would be really nice for binary drivers, too -- it not only kills the licensing issues, but it allows us to, for example, run nvidia drivers without trusting nvidia with any hardware beyond the video card.

    Microkernels can also allow drivers to be restarted and upgraded easily, without having to upgrade the entire system. It would be theoretically possible to upgrade the nvidia drivers in-place, or even recover from an nvidia driver crash without having to reboot the whole system. If it was designed intelligently, you might not even have to restart X to reload a driver.

    Monolithic kernels can do this in theory. In practice, "kernel modules" are ugly and hackish, often requiring you to compile a kernel module for a specific kernel version, and too often being held "in use" by something. Also, they just sit there eating up RAM (though a small amount, I admit) when not being used, yet often you want them loaded anyway -- for example, loop is completely useless except for the 1% of the time I want to mount a disk image, but if loop isn't loaded, mount won't load it.

    One of the major selling points of a Unix system in the first place, at least to me, was that a single rogue program is unlikely to bring the entire system down with it. Sure, forkbombs still work, and eating tons of memory still sucks, although you can prevent both of them -- but you don't often have the situation you see on Windows (especially 95/98), where a single badly-written program can make the whole system unstable.

    This is true on microkernels, only more so. In order to replace them entirely with a monolithic kernel, you need a bit more than type-safety -- you need a language that is restrictive enough that you can actually run multiple, untrusted programs in the same address space. If you can do that, you don't even need userspace processes to have a private address space, except for some backwards-compatible POSIX layer.

    But then you're stuck with the tricky problem of making that kind of language useful for the low-level stuff a kernel has to do. Like it or not, I don't think we're at a point where you can make a viable kernel in LISP or Erlang, though Java might be close (when extended with C, which kind of kills the point).

    I would love to develop such a system, but I am about to be gone for two weeks, and when I get back, I still won't have a ton of time.

    And I think you're probably wrong about microkernels using shared memory "destroying the separation advantage" -- I'm guessing it's probably done in a fairly safe/quick way. (Or if it's not, it can be.) Consider a pipe between two processes -- what you write to stdout would, in a naive implementation, be copied to the stdin of the other process. A smarter implementation might actually allow you to simply pass ownership of that particular buffer to the other process -- functionally about the same, but almost as fast as shared memory.

    But if I have no clue what I'm talking about, please tell me. I don't want to sound like a moron.

Thus spake the master programmer: "Time for you to leave." -- Geoffrey James, "The Tao of Programming"