Anatomy of the Linux Kernel 104

LinucksGirl writes "The Linux kernel is the core of a large and complex operating system, and while it's huge, it is well organized in terms of subsystems and layers. In this article, the reader explores the general structure of the Linux kernel and gets to know its major subsystems and core interfaces. 'When discussing architecture of a large and complex system, you can view the system from many perspectives. One goal of an architectural decomposition is to provide a way to better understand the source, and that's what we'll do here. The Linux kernel implements a number of important architectural attributes. At a high level, and at lower levels, the kernel is layered into a number of distinct subsystems. Linux can also be considered monolithic because it lumps all of the basic services into the kernel. This differs from a microkernel architecture where the kernel provides basic services such as communication, I/O, and memory and process management, and more specific services are plugged in to the microkernel layer.'"
  • I call BS (Score:4, Funny)

    by Timesprout ( 579035 ) on Saturday June 09, 2007 @02:40AM (#19448923)
    I posted the question "What is the Linux kernel" to Ask a Ninja on YouTube. He told me it was a secret project undertaken by tree squirrels to create a time machine from the kernels of nuts so they could fast forward through winter cos they are fed up being stuck indoors for the winter months.
    • Re: (Score:3, Funny)

      I posted the question "What is the Linux kernel" to Ask a Ninja on YouTube. He told me it was a secret project undertaken by tree squirrels to create a time machine from the kernels of nuts so they could fast forward through winter cos they are fed up being stuck indoors for the winter months.

      As opposed to... say... rock squirrels.
    • I *so* want a link!
  • The graph hints that 2.6.0 is the last major release, but isn't the scheme 2.6.x.y where x is major and y is minor nowadays?
    • Re:2.6.x.x (Score:5, Interesting)

      by 4e617474 ( 945414 ) on Saturday June 09, 2007 @05:04AM (#19449387)
      The major version number in 2.6.x.x is 2. Six is the minor version number, for which the term "series" is frequently used. The third number is called the "release" number, and the fourth is called "trivial" (although sometimes the difference is "the X server doesn't crash every ten seconds any more" and I don't personally consider it trivial). They stopped using the minor version/series number to denote stability vs. development in 2.6, and did away with having a stable vs. development branch altogether, but Adrian Bunk has been maintaining 2.6.16.x since December of 2005 to serve the old role - a stable feature-set with the relevant bugfixes from newer releases applied.
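      As a worked example (version number chosen for illustration): in 2.6.16.27, the major number is 2, the series is 6, the release is 16, and the trivial/stable number is 27.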
  • by smartbei ( 1112351 ) <smartbeiNO@SPAMgmail.com> on Saturday June 09, 2007 @03:05AM (#19449003) Homepage
    From TFA (emphasis mine): "Linux quickly evolved from a single-person project to a world-wide development project involving thousands of developers. One of the most important decisions for Linux was its adoption of the GNU General Public License (GPL). Under the GPL, the Linux kernel was protected from commercial exploitation, and it also benefited from the user-space development of the GNU project (of Richard Stallman, whose source dwarfs that of the Linux kernel). This allowed useful applications such as the GNU Compiler Collection (GCC) and various shell support." Tivo?
  • by cerberusss ( 660701 ) on Saturday June 09, 2007 @03:13AM (#19449029) Journal

    Linux can also be considered monolithic because it lumps all of the basic services into the kernel. This differs from a microkernel
    Whenever I hear the word microkernel, I reach for my revolver.
    • Re: (Score:2, Funny)

      by Anonymous Coward
      Don't Hurd anybody.
    • Whenever I hear the word microkernel, I reach for my revolver.

      Too late, my friend, too late. I already have mine (a very modular one, of course) pointed at your neck.

    • Microkernels should die.

      With type-safe languages [wikipedia.org], they really should be compiling all the components into the same memory space; then they would be able to use the programming language to create the API between components instead of IPC.

      Which would basically be Linux written in a type safe language. I can't find it, but IIRC, Linus commented once that Linux is like a microkernel with how the components of the system are separated, but they're using C function calls to initiate inter-communication and

      • Ah, but type-safe languages suck for writing operating-system kernels. You want to be able to perform type-unsafe pointer operations when you have to create page tables and the like.
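        A minimal C sketch of the kind of operation meant here (the flag bits and address are made up for illustration, and a local array stands in for a real page table):

        #include <stdint.h>
        #include <stdio.h>

        #define PTE_PRESENT  0x1u   /* illustrative x86-style flag bits */
        #define PTE_WRITABLE 0x2u

        typedef uint32_t pte_t;

        /* Treat a raw address as an array of page-table entries: the
           integer-to-pointer cast below is exactly the kind of operation
           a strictly type-safe language forbids. */
        static void map_page(uintptr_t table_base, unsigned index, uintptr_t phys)
        {
            volatile pte_t *table = (volatile pte_t *)table_base;
            table[index] = (pte_t)(phys | PTE_PRESENT | PTE_WRITABLE);
        }

        int main(void)
        {
            static pte_t fake_table[1024];  /* stand-in for a real page table */
            map_page((uintptr_t)fake_table, 7, 0x12345000u);
            printf("pte[7] = 0x%x\n", (unsigned)fake_table[7]);
            return 0;
        }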
      • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Saturday June 09, 2007 @05:46PM (#19453651) Journal
        I don't disagree, in theory. In practice, there are at least a few things that microkernels can do that monolithic kernels can't (yet).

        For example, a microkernel can run filesystems and probably even certain kinds of drivers on a per-user basis. Give them access to the chunk of hardware they need, but don't require them to be able to pwn the rest of the system. This would be really nice for binary drivers, too -- it not only kills the licensing issues, but it allows us to, for example, run nvidia drivers without trusting nvidia with any hardware beyond the video card.

        Microkernels can also allow drivers to be restarted and upgraded easily, without having to upgrade the entire system. It would be theoretically possible to upgrade the nvidia drivers in-place, or even recover from an nvidia driver crash without having to reboot the whole system. If it was designed intelligently, you might not even have to restart X to reload a driver.

        Monolithic kernels can do this in theory. In practice, "kernel modules" are ugly and hackish, often requiring you to compile a kernel module for a specific kernel version, and too often being held "in use" by something. Also, they just sit there eating up RAM (though a small amount, I admit) when not being used, yet often you want them loaded anyway -- for example, loop is completely useless except for the 1% of the time I want to mount a disk image, but if loop isn't loaded, mount won't load it.

        One of the major selling points of a Unix system in the first place, at least to me, was that a single rogue program is unlikely to bring the entire system down with it. Sure, forkbombs still work, and eating tons of memory still sucks, although you can prevent both of them -- but you don't often have the situation you see on Windows (especially 95/98), where a single badly-written program can make the whole system unstable.

        This is true on microkernels, only more so. In order to replace them entirely with a monolithic kernel, you need a bit more than type-safety -- you need a language that is restrictive enough that you can actually run multiple, untrusted programs in the same address space. If you can do that, you don't even need userspace processes to have a private address space, except for some backwards-compatible POSIX layer.

        But then you're stuck with the tricky problem of making that kind of language useful for the low-level stuff a kernel has to do. Like it or not, I don't think we're at a point where you can make a viable kernel in LISP or Erlang, though Java might be close (when extended with C, which kind of kills the point).

        I would love to develop such a system, but I am about to be gone for two weeks, and when I get back, I still won't have a ton of time.

        And I think you're probably wrong about microkernels using shared memory "destroying the separation advantage" -- I'm guessing it's probably done in a fairly safe/quick way. (Or if it's not, it can be.) Consider a pipe between two processes -- what you write to stdout would, in a naive implementation, be copied to the stdin of the other process. A smarter implementation might actually allow you to simply pass ownership of that particular buffer to the other process -- functionally about the same, but almost as fast as shared memory.

        But if I have no clue what I'm talking about, please tell me. I don't want to sound like a moron.
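        A concrete instance of that buffer-handoff idea already exists in Linux as the splice(2) call (added in 2.6.17), which moves pipe pages to another descriptor instead of copying them through user space. A minimal sketch, with error handling pared down:

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            int p[2];
            if (pipe(p) < 0) { perror("pipe"); return 1; }

            /* put data into the pipe the ordinary, copying way */
            write(p[1], "zero-copy hello\n", 16);

            /* move the pipe's pages to a file without a userspace copy */
            int out = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (out < 0) { perror("open"); return 1; }
            if (splice(p[0], NULL, out, NULL, 16, 0) < 0)
                perror("splice");

            close(out); close(p[0]); close(p[1]);
            return 0;
        }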
        • The only thing you've done wrong is not mention capability security [wikipedia.org]. It's an interesting technology which parallels microkernels in a lot of ways--both were ruined by inefficient initial implementations.
          • If I understand that, I actually have an idea for a much simpler model -- at least in its primitives. You could use this model to build POSIX, or capability security, or UAC, or whatever you want, all while sharing one address space through your entire program.

            Unfortunately, to perform at all well, it demands a compiler/VM with optimizations I haven't seen tried anywhere before.
        • you need a language that is restrictive enough that you can actually run multiple, untrusted programs in the same address space

          Trust is an action. By executing a program at all, you are trusting that program. I don't mean to be a stickler here, but this is a HUGE distinction that most everyone I know or have met confuses.

          Trustworthy is an adjective, meaning that the trust placed on an object is valid.

          Perhaps a better way of saying what you said:

          ... you need a language that is restrictive enough

          • Trust is an action. By executing a program at all, you are trusting that program.

            Fine. But that's not the point.

            The point is to be able to run multiple, untrustworthy and (relatively) untrusted programs in the same address space.

            Or, in other words, being able to sandbox programs that are not trustworthy so that they are also not entrusted with any ability that we don't want them to have.

            At a very basic level, think JavaScript, or Java applets. Those are generally run in the same process as the web browser.

        • > If it was designed intelligently

          FOSS isn't intelligently designed. It evolves useful characteristics, albeit through a selection process which is asexual.

          (it's carried out by us, remember?)
    • Re: (Score:1, Funny)

      by Anonymous Coward
      How can you be channeling ESR like that? The man's not dead, only his popularity is!
  • by belmolis ( 702863 ) <billposer@@@alum...mit...edu> on Saturday June 09, 2007 @03:23AM (#19449063) Homepage

    The diagrams are nice and for the most part the text is okay, but there is one glaring error that should have been edited out before this was published:

    There is also the GNU C Library (glibc). This provides the system call interface that connects to the kernel and provides the mechanism to transition between the user-space application and the kernel.

    This is false and could be very confusing for readers who don't already know about the structure of Linux. The diagram gets it right.

    • by ettlz ( 639203 )

      This is false and could be very confusing for readers who don't already know about the structure of Linux.
      Yes, it's as if a billion pure assembler programmers cried out in terror... aren't glibc system calls wrappers for SYSENTER instructions on modern Intel processors?
      • Where do you get the idea that glibc contains system calls? glibc contains things like trig functions and regular expression matching, which are not system calls.

    • wrong! (Score:5, Informative)

      by Anonymous Coward on Saturday June 09, 2007 @05:35AM (#19449453)

      The diagrams are nice and for the most part the text is okay, but there is one glaring error that should have been edited out before this was published:

      There is also the GNU C Library (glibc). This provides the system call interface that connects to the kernel and provides the mechanism to transition between the user-space application and the kernel.

      This is false and could be very confusing for readers who don't already know about the structure of Linux. The diagram gets it right.

      Yes, the diagram gets it right, but the text is essentially correct (unless you insist on being overly pedantic). glibc provides a portable system call interface via functions such as read(), write(), and open(). These functions provide the mechanism to transition between the user-space application and the kernel, i.e., they invoke sysenter and sysexit (or int 0x80) on x86 CPUs.

      Unless you're so l33t that you invoke all your system calls with inline assembly?
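      For the curious, a minimal sketch of what that boils down to, side by side. This is i386-specific (build with gcc -m32), and 4 is __NR_write in the 32-bit syscall table:

      #include <unistd.h>

      int main(void)
      {
          const char msg[] = "hello via glibc\n";
          write(STDOUT_FILENO, msg, sizeof msg - 1);   /* the glibc wrapper */

          /* the same system call by hand: eax = syscall number,
             ebx/ecx/edx = arguments, int 0x80 traps into the kernel */
          const char raw[] = "hello via int 0x80\n";
          long ret;
          __asm__ volatile ("int $0x80"
                            : "=a" (ret)
                            : "a" (4L),                  /* __NR_write */
                              "b" ((long)STDOUT_FILENO),
                              "c" (raw),
                              "d" ((long)(sizeof raw - 1))
                            : "memory");
          return ret < 0;
      }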
      • Re: (Score:3, Interesting)

        by spitzak ( 4019 )
        Unless you're so l33t that you invoke all your system calls with inline assembly?

        Actually there was a time when read() and write() were usually turned by the compiler into assembler instructions to trap straight to the kernel. "libc" was considered where stdio (fread/fwrite and printf, etc) lived, not where read() and write() did.
        • Re: (Score:1, Interesting)

          by Anonymous Coward
          And even more actually, many Linux system calls don't look anything like the POSIX-style API exported by glibc. This was a conscious decision. Calls like fork() are now implemented in terms of the Linux system call clone(), rather than using an actual fork() system call (which is still available for ABI compatibility, but that's all it's there for). This is all possible because the implementation of the POSIX API is in glibc, not the kernel.
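          A minimal sketch of that relationship (Linux-specific; the raw clone() argument order here matches x86-64, and error handling is pared down):

          #define _GNU_SOURCE
          #include <signal.h>
          #include <stdio.h>
          #include <sys/syscall.h>
          #include <sys/wait.h>
          #include <unistd.h>

          int main(void)
          {
              /* clone() with no sharing flags, just "send SIGCHLD on exit":
                 these are precisely fork() semantics. A NULL stack means the
                 child keeps using the (copy-on-write) parent stack. */
              long pid = syscall(SYS_clone, (long)SIGCHLD, NULL, NULL, NULL, NULL);
              if (pid < 0) { perror("clone"); return 1; }
              if (pid == 0) {
                  printf("child: pid %d\n", (int)getpid());
                  _exit(0);
              }
              waitpid((pid_t)pid, NULL, 0);
              printf("parent: reaped child %ld\n", pid);
              return 0;
          }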
      • Re:wrong! (Score:5, Funny)

        by cyber-vandal ( 148830 ) on Saturday June 09, 2007 @06:51AM (#19449723) Homepage
        Someone being overly pedantic on Slashdot? Next they'll be criticising a great company like Microsoft.
      • No, I'm not being overly pedantic. First, the great majority of functions in glibc are not by any stretch of the imagination system calls. Functions dealing with character handling, strings and arrays, character encodings, locales, searching and sorting, pattern matching, trigonometry, random numbers, and cryptography, for example, are not system calls. Second, yes, glibc has been expanded, in its capacity as a portability library, to include functions like low-level i/o that are normally system calls. The

  • Intriguing (Score:2, Insightful)

    by peterjb31 ( 1108781 )
    Interesting article but really quite vague.
    • Indeed.

      I was hoping for some insight into the interesting interfaces. What I saw was a description that seems to apply equally well to anything UNIXy post-BSD 4.3 (except for the fawning over the GNU stuff, of course). Disappointing.

  • by Z00L00K ( 682162 ) on Saturday June 09, 2007 @03:34AM (#19449115) Homepage Journal
    A wealth of interesting information about the inner workings of an operating system can be found at the Multicians [multicians.org] web site.

    Interesting to read about events from a bygone era.

    • by jd ( 1658 )
      There's an open-source, free PL/I compiler that works with GCC. (I'm adding that because I've found at least one commercial PL/I compiler for Linux, which costs $15,000/seat.) If we had enough of the Multics OS code, it should be possible to replace the hardware-specific calls with the Linux equivalents and then compile it as a wrapper.
  • wow (Score:3, Funny)

    by kitsunewarlock ( 971818 ) on Saturday June 09, 2007 @03:35AM (#19449119) Journal
    5 million lines of code? Are they allowed to show all that?

    That's hot. /insert another "anatomy" joke here.
    • by dbIII ( 701233 )

      5 million lines of code? Are they allowed to show all that?

      What's more - when you print it out in legible 12 point print on ordinary office paper those five million lines of code can fit in a single briefcase carried in one hand by an SCO employee. I really would not want to arm wrestle that guy.

    • Where are the naughty bits?
  • It's awesome that the article is being hosted at IBM. Perhaps more people will start to shift away from Windows and more titles will be available. I think Dell is also shipping with Linux, yes?
    • IBM hosts *many* technical articles on just about everything.

      Most Linux users do not evangelize Linux and would prefer that the unwashed masses stick with Windows.

      What do you mean by titles? More distros? More OSS programs in general?

      The fact that Dell is now shipping a couple of machines with Ubuntu (if the customer chooses) is completely irrelevant.
    • It's awesome that the article is being hosted at IBM.

      This is definitely not the IBM that I loved (not) in the 80's...

  • by daxomatic ( 772926 ) on Saturday June 09, 2007 @06:48AM (#19449713) Journal
    Here is a great tool to visualize your kernel and then print a poster of it:
    http://fcgp.sourceforge.net/ [sourceforge.net]
    You do need a big printer for it if you want a readable poster though ;-). (And yes, you also can stick 16 A4's to a wooden plate, but that's just plain silly.)
    It does however show you in perfect detail all the arches and subsystems of the kernel.
    • by creinig ( 18837 )

      Here is a great tool to visualize your kernel and then print a poster of it: http://fcgp.sourceforge.net/ [sourceforge.net]

      I'm one of the maintainers of this thing -- and I have to say that it's effectively not maintained these days. So don't be too surprised if it doesn't completely work ;)

      You do need a big printer for it if you want a readable poster though ;-). (And yes, you also can stick 16 A4's to a wooden plate, but that's just plain silly.)

      Well, 16xA4 will be a bit small (~0.8x1.1m). I have a printout of a 2.4 kernel

  • by Toffins ( 1069136 ) on Saturday June 09, 2007 @07:15AM (#19449819)
    Here is a brief history of the average size of each series of the Linux kernel (core kernel only, not including modules) that I have configured with very approximately the same set of features over the 14 years from 1993 to 2007:

    In 1993 a Linux 0.99 kernel zImage weighed in at 210 kBytes
    In 1994 a Linux 1.0.x kernel zImage weighed in at 230 kBytes
    In 1995 a Linux 1.2.x kernel zImage weighed in at 310 kBytes
    In 1996 a Linux 2.0.x kernel zImage weighed in at 380 kBytes
    In 1998 a Linux 2.2.x kernel bzImage weighed in at 650 kBytes
    In 2000 a Linux 2.4.x kernel bzImage weighed in at 1000 kBytes
    In 2003 a Linux 2.6.x kernel bzImage weighed in at 1300 kBytes
    In 2007 a Linux 2.6.x kernel bzImage weighed in at 1500 kBytes
    Remember the days when a kernel and root filesystem would comfortably fit on a 1.4MB floppy?
    Hush little kernel, nobody said you are getting a little f-a-t!
    • Re: (Score:2, Informative)

      by g2devi ( 898503 )
      Keep in mind that the bulk of that size is related to support for every device driver under the Sun (and x86, and PowerPC, and IBM mainframe, and iPod, ...). Most people don't compile in all the options, so it is significantly smaller. And although it's possible to get a bare-bones kernel that supports exactly the devices your computer supports (i.e. embedded Linux systems do this), most people are willing to trade a little bulk for the convenience of having any computer they install on or device they plug in
      • Re-read the post you're responding to. They're not compiling every device driver under the sun into their kernel.
    • Hush little kernel, nobody said you are getting a little f-a-t!

      It's not that bad. Much of the size of a Linux kernel is all of the support for the plethora of devices it supports. Sure, much of that code is built as modules, but some isn't, and even the modules have to have hooks built into the kernel binary.

      As an experiment, I just built a stripped-to-the-bone version of 2.6.21 (from Debian's kernel sources), and it's 799 KB. Still rather large for those of us who started on machines that had only a fraction of that much RAM, but certainly not a problem in the

    • Re: (Score:3, Insightful)

      Over that same period, the RAM, processor cycles, and HD space sitting on my desk all increased by a factor of about 2000. So I'd say a seven-fold increase in kernel size isn't too bad -- and a hell of a lot better than most software has done.
    • Remember the days when a kernel and root filesystem HAD to comfortably fit on a 1.4 MB floppy?
    • Re: (Score:3, Informative)

      Funny - I can easily get a 2.6 kernel under 1000 KB. All I have to do is disable the subsystems that I don't need on my circa 1996 PC. Things like USB, SCSI, AGP, and MD add quite a bit to the kernel. In fact, with all hotplug systems disabled (and no modern systems like sysfs) it isn't hard to get a kernel down to 620 KB.

      I think you've been adding a lot of features without knowing it.
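      For illustration, a stripped-down configuration along those lines shows up in a 2.6-era .config as fragments like these (real option names, but the exact set varies by kernel version; disabled options appear as comments):

      # CONFIG_USB is not set
      # CONFIG_SCSI is not set
      # CONFIG_AGP is not set
      # CONFIG_MD is not set
      # CONFIG_HOTPLUG is not set
      # CONFIG_SYSFS is not set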
    • Really the kernel hasn't been getting fat. The modules provided by the kernel team aren't all compiled in; that's what make menuconfig is for. However, I've noticed poor optimization on the part of distro-released kernels. An example I've found in Ubuntu is the timer tick rate. For a desktop either 300 or 1000 is recommended by the kernel; for servers either 100 or 250 would be better. They default this at 250, and this is where Ubuntu leaves the setting despite the kernel recommendation
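      (For reference, the tick rate described above is the kernel's HZ option; a hypothetical desktop-oriented .config fragment would read as follows. CONFIG_HZ_300 only exists in later 2.6 kernels, so the exact choices depend on the version.)

      # CONFIG_HZ_100 is not set
      # CONFIG_HZ_250 is not set
      # CONFIG_HZ_300 is not set
      CONFIG_HZ_1000=y
      CONFIG_HZ=1000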
    • Your post would be pretty much irrelevant in these days of multi-gigabyte RAM and terabyte-sized hard disks. However, kernel size still matters for embedded developers. Many projects in the embedded market still use 2.4 (if they use Linux at all), because even with everything stripped down, 2.6 kernels tend to be considerably larger than 2.4 kernels.
    • "The historical year-over-year, price-per-bit decline from 1978-2002 is 36 percent," according to Desi Rhoden, president and CEO of industry consortium Advanced Memory International and chairman of the JEDEC JC-42 memory committee. ( http://www.techiwarehouse.com/cms/engine.php?page_id=bb2a94e7 [techiwarehouse.com] )

      (1/(1-0.36))^14 > 500: a 36% annual price decline means each dollar buys 1/0.64 = 1.5625 times as many bits every year, which compounds to more than 500x over 14 years.

      The size of the kernel has grown more than 7 times over 14 years, but memory is more than 500 times cheaper.

      When measured in dollars, that 1500K 2007 kernel will fit in 70 times less memory than the 210K kernel did in 1993.
    • Remember the days when a kernel and root filesystem would comfortably fit on a 1.4MB floppy?
      It's probably been a similar amount of time since I've even seen a 1.4MB floppy
  • 'The Linux kernel is the core of a large and complex operating system, and while it's huge, it is well organized in terms of subsystems and layers.'

    Should read:

    'The Linux kernel is a large and complex operating system, and while it's huge, it is well organized in terms of subsystems and layers.'

    Although, based upon the comments from those who claim to have read the article, it doesn't look like the article covers anything but compiling the Linux operating system with make menuconfig. It does apparently include
    • Nitpicks (Score:1, Flamebait)

      by jd ( 1658 )
      Kernels are not Operating Systems and it is syntactically incorrect to place a "," before an "and", although it is usually acceptable where one list contains a second list. In this case, where the "and" applies to the whole of the remainder of the sentence and not just the insert, the comma should follow the and.
      • Kernels are by definition operating systems in the technical sense. The terms are synonymous. Incorrect usage has caused the term 'operating system' to take on a second meaning that is more or less synonymous with distribution; this is easily clarified by using the complete term 'operating system distribution' to distinguish between the two. For instance, Windows XP is an operating system distribution that includes an operating system as the core functional piece that all other components of the system depend on.
