Jens Axboe On Kernel Development

BlockHead writes "Kerneltrap.org is running an interview with Jens Axboe, 15-year Linux veteran and the maintainer of the Linux kernel block layer, 'the piece of software that sits between the block device drivers (managing your hard drives, cdroms, etc) and the file systems.' The interview examines what's involved in maintaining this complex portion of the Linux kernel, and offers an accessible explanation of how IO schedulers work. Jens details his own CFQ, or Completely Fair Queuing, scheduler, which is the default Linux IO scheduler. Finally, the article examines the current state of Linux kernel development, how it's changed over the years, and what's in store for the future."
  • by mi ( 197448 ) <slashdot-2017q4@virtual-estates.net> on Wednesday January 31, 2007 @01:04PM (#17830042) Homepage Journal

    FreeBSD dispensed with them altogether years ago...

    Character devices only, thank you very much.

    *Duck*

    • by Kadin2048 ( 468275 ) <slashdot@kadin.xoxy@net> on Wednesday January 31, 2007 @02:34PM (#17831220) Homepage Journal
      So how does that work?

      At risk of starting a holy war, is there any reason why one approach would be superior? And do they lend themselves to different methods of scheduling? In TFA, Axboe talks about [1] the scheduling mechanism used in later versions of the 2.6 kernel series, which alleviates a problem that I (and most other people, probably) have run into before.

      I'm curious because, although I don't use any of the 'real' BSDs very often, I spend most of my time (at home, anyway) using either Mac OS X, which uses the Mach/XNU kernel (derived from 4.3BSD, though I don't know whether the I/O scheduler has been rewritten since then), or Linux with the 2.6 kernel, and it seems to me that OS X's disk I/O leaves something to be desired compared to Linux's.

      Does BSD handle I/O differently in some fundamental fashion than Linux? It sounds like, by eliminating block devices, they basically remove the kernel from doing any re-ordering or caching of data, which makes things "safer" (in the event of a crash) but seems like it would have big performance penalties when using drives that aren't very smart, and don't do a lot of caching and optimization on their own. It seems like getting rid of I/O scheduling altogether is a stiff price to pay for "safety."

      [1] (quoting because there don't seem to be anchors in TFA)

      Classic work conserving IO schedulers tend to perform really poorly for shared workloads. A good example of that is trying to edit a file while some other process(es) are doing write back of dirty data. ... Even with a fairly small latency of a few seconds between each read, getting at the file you wish to edit can take tens of seconds. On an unloaded system, the same operation would take perhaps 100 milliseconds at most. By allowing a process priority access to the disk for small slices of time, that same operation will often complete in a few hundred milliseconds instead. A different example is having two or more processes reading file data. A work conserving scheduler will seek back and forth between the processes continually, reducing a sequential workload to a completely seek bound workload. ...
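
      A toy model helps make the quoted point concrete. The C sketch below is not code from the kernel or the interview; the block numbers, request counts, and slice length are invented purely for illustration. It compares total seek distance when two sequential readers are serviced strictly alternately (work conserving) versus in per-process slices (CFQ-style):

      /* Toy model: two processes each read 100 consecutive blocks, one starting
       * at block 0 and one at block 1,000,000.  Alternating per request forces a
       * long seek for almost every IO; serving each process for a slice of
       * requests keeps most IO sequential.  All numbers are made up. */
      #include <stdio.h>
      #include <stdlib.h>

      #define REQS_PER_PROC 100
      #define SLICE 25                      /* requests served before switching */

      /* batch = 1 models a work-conserving scheduler that alternates per request;
       * batch = SLICE models giving each process a slice of disk time. */
      static long long simulate(int batch)
      {
          long long next[2] = { 0, 1000000 };   /* next block each process wants */
          int left[2] = { REQS_PER_PROC, REQS_PER_PROC };
          long long head = 0, total = 0;
          int p = 0;

          while (left[0] > 0 || left[1] > 0) {
              for (int i = 0; i < batch && left[p] > 0; i++) {
                  total += llabs(next[p] - head);   /* seek distance to the request */
                  head = next[p];
                  next[p]++;                        /* sequential reader */
                  left[p]--;
              }
              p ^= 1;                               /* switch to the other process */
          }
          return total;
      }

      int main(void)
      {
          printf("alternating per request: total seek distance %lld\n", simulate(1));
          printf("per-process slices:      total seek distance %lld\n", simulate(SLICE));
          return 0;
      }

      In the alternating case nearly every request pays a near-full-stroke seek, while the sliced case pays one long seek per slice switch, which is the effect described above.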
      • by stsp ( 979375 )

        Does BSD handle I/O differently in some fundamental fashion than Linux? It sounds like, by eliminating block devices, they basically remove the kernel from doing any re-ordering or caching of data, which makes things "safer" (in the event of a crash) but seems like it would have big performance penalties

        Good question.

        The FreeBSD people claim that no one is using block devices anyway (source [freebsd.org]):

        no serious applications rely on block devices, and in fact, almost all applications which access disks direc

      • Err, no (Score:3, Informative)

        by Fweeky ( 41046 )
        "It sounds like, by eliminating block devices, that they basically remove the kernel from doing any re-ordering or caching of data, which makes things "safer""

        No; FreeBSD's shifted the buffer cache away from individual devices and into the filesystem/VM, where it caches vnodes rather than raw data blocks. The IO queue (below all this block/character/GEOM stuff) is scheduled using a standard elevator algorithm [wikipedia.org] called C-LOOK. It's showing its age in places, and there's been some effort towards replacing/im
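
        For reference, C-LOOK is a one-directional elevator: it sweeps upward through the pending requests from the current head position and, after the highest one, jumps straight back to the lowest outstanding request and sweeps upward again. A minimal C sketch of that ordering follows; the block numbers and head position are made up, and this is only a model of the policy, not FreeBSD's actual code:

        /* Minimal illustration of C-LOOK ordering.  Requests at or beyond the
         * current head position are served in ascending order, then the head
         * jumps back to the lowest remaining request and sweeps upward again. */
        #include <stdio.h>
        #include <stdlib.h>

        static int cmp(const void *a, const void *b)
        {
            long x = *(const long *)a, y = *(const long *)b;
            return (x > y) - (x < y);
        }

        static void clook(long head, long *reqs, size_t n)
        {
            qsort(reqs, n, sizeof(*reqs), cmp);

            for (size_t i = 0; i < n; i++)        /* first pass: head -> end */
                if (reqs[i] >= head)
                    printf("service block %ld\n", reqs[i]);

            for (size_t i = 0; i < n; i++)        /* wrap: lowest -> head */
                if (reqs[i] < head)
                    printf("service block %ld\n", reqs[i]);
        }

        int main(void)
        {
            long pending[] = { 95, 180, 34, 119, 11, 123, 62, 64 };  /* made up */
            clook(150, pending, sizeof(pending) / sizeof(pending[0]));
            return 0;
        }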
      • Re: (Score:3, Interesting)

        by jd ( 1658 )
        Block devices lend themselves nicely to offload engines, as you can RDMA the processed data into a thin driver that basically just offers the data to the userspace application in the expected format but does little or no actual work. You can even do direct data placement into the application and just use the kernel as a notification system. So, the smarter the hardware, the more you can get from being able to handle large chunks of data or large numbers of commands in a single shot. Arguably, you can still
  • "the piece of software that sits between the block device drivers (managing your hard drives, cdroms, etc) and the file systems.'"

    That sounds REALLY hard. I'd be more interested if there's a development strategy he could recommend re: complex development projects.
  • I thought the title was: Ewe Boll On Kernel Development...
  • by isaac ( 2852 ) on Wednesday January 31, 2007 @01:13PM (#17830154)

    JA: In your opinion, with the increased rate of development happening on the 2.6 kernel, has it remained stable and reliable?

    Jens Axboe: I think so. With the new development model, we have essentially pushed a good part of the serious stabilization work to the distros.

    I respectfully disagree that the new development model works well from an end-user's perspective (an "end user" of many thousands of Linux hosts, not a toy desktop environment). Minor point releases now contain major changes in e.g. schedulers. This makes for a lot of work for real Linux users, backporting the useful bugfixes while retaining older algorithms for which workloads are optimized. Result: a severely splintered kernel and a lot more work for us.

    If core changes of such magnitude are no longer sufficient to merit a dev branch or even a major point release, why bother with the "2.6" designation at all? Just pull a Solaris and call the next release "Linux 20" or "Linux XX."

    -Isaac
    • by archen ( 447353 )
      This is one thing I really like about FreeBSD: they aren't afraid of version numbers. You have a development branch and a production branch. Changes are typically moderate until a major revision, e.g. 5.x to 6.x. It's also nice that you typically have stability within a version, and often a backwards compatibility layer. For instance, nVidia drivers work in FreeBSD 5.x, but all that's needed for 6.x is to compile an option into the kernel (it's there by default).

      Many such as myself are getting tired
    • by Kjella ( 173770 ) on Wednesday January 31, 2007 @01:57PM (#17830758) Homepage
      Well, on the other side, distros were backporting *huge* amounts of patches from 2.5 to 2.4, so while plain vanilla 2.4 was stable, almost no one was running it. The 2.6 releases mean the distros are shipping "stabilized unstables" instead of "destabilized stables"; I guess that works out better for some and worse for others. Are RHEL, SLES, and Debian stable kernels not good enough to start out with, if stability is what you need? I find that quite a few things I think are great arrive in a timely fashion, not at the release of 2.8 in a few years. I think most people who use a distro's kernel feel that way.

      If you're the kind of kernel hacker who likes to get yours directly from kernel.org, yes, then it sucks. But IMO the kernel has grown too big for just the core devs; think of it as an "extended" kernel team including the distros, where kernel.org releases are "internal betas". I think if you cut it back and expect just kernel.org to deliver stable kernels with the resources they have (which, admittedly, they used to) then kernel development will slow way down.
      • by isaac ( 2852 )

        But IMO the kernel has grown too big for just the core devs; think of it as an "extended" kernel team including the distros, where kernel.org releases are "internal betas". I think if you cut it back and expect just kernel.org to deliver stable kernels with the resources they have (which, admittedly, they used to) then kernel development will slow way down.

        I live with the fragmentation and vendor lock-in that comes with distro-engineered kernels because I have to, but I don't like it. I'm just saying that

        • Then choose a kernel--a stock kernel would do best for you, most likely--and stick with it. If you really need something from a newer kernel, you can do the work for backporting, pay someone else to do it, or live without (you were before, no?).
        • by gmack ( 197796 )
          This actually reduces fragmentation, since only bug fixes get backported. I don't know why he didn't mention it, but the older branches are still maintained. If you want bug fixes then get 2.6.18.5 or something and only move between versions if you want new features. The distros are sending their fixes upstream.
    • by ComputerSlicer23 ( 516509 ) on Wednesday January 31, 2007 @01:57PM (#17830766)

      Don't take this the wrong way, but your complaint sounds a lot like the story about a patient and a doctor:

      "Doctor, when I do this, it hurts", and the doctor replies, "Well don't do that".

      I mean, if you are following bleeding-edge kernels and complaining that they aren't as stable as you'd like, why not just follow a vendor's kernel? If you use or install "many thousands", you are either maintaining your own de-facto distribution or you are using someone else's distribution. Vendors do exactly the work you want done on your behalf.

      I patiently wait for my vendor kernel, which might be 10 point releases behind, to integrate bug fixes, and then upgrade in a year or two to a much newer point release (I think RedHat has used 2.6.9 and/or 2.9.13 in recent memory)... Incrementing a different number wouldn't really make any difference anyway. At that point it's all semantics; if you know the rules of the game, it's not hard to tell what's dangerous as an upgrade and what's not.

      It's not like 2.4.13 (or whichever release in the 2.4 series introduced serious disk corruption) was safe merely because it was a point release... Releases are safe because somebody took the kernel out back and beat on it for a while and it didn't cause any problems. If you upgrade without proper testing and it breaks, you get to keep the pieces.

      Kirby

    • Re: (Score:3, Insightful)

      The kernel development model is optimized to make distros happy, not end users. Just like Gnome/KDE, BTW. This is because, well, in the Real World most desktops/servers use (or should use) the kernel shipped by their distro. And because distros are the ones who employ most kernel hackers.

      In other words, the previous development model made, say, 1% of people happy (you) and 99% unhappy (distros, and hence people using distros). The current model makes 99% of people happy (distros) and 1% unhappy.

      IMO it was a good
    • by noz ( 253073 )

      Minor point releases now contain major changes in e.g. schedulers.
      As much as this is now commonplace, I believe the virtual memory management subsystem was entirely replaced half-way through the 2.4 series. This management style has always been a concern for production users.
  • Where are they now? (Score:4, Interesting)

    by LaminatorX ( 410794 ) <(sabotage) (at) (praecantator.com)> on Wednesday January 31, 2007 @01:18PM (#17830216) Homepage
    I did a double take when I saw this, as Jens was an exchange student at my high-school way back when. Small internet.
  • An excellent read!

    There's something exciting about delving into the low-level logic that gives you the feeling that there's always something more to learn!

    I guess always being two steps behind is the motivation that makes it all worthwhile.
  • Wow ... (Score:3, Funny)

    by ravee ( 201020 ) on Wednesday January 31, 2007 @01:23PM (#17830288) Homepage Journal
    15-year Linux veteran and the maintainer of the Linux kernel block layer,...

    In the interview he says he is now 30 years old. Wow, that means he started working on Linux at the age of 15 - a real prodigy. A very interesting interview.

    Btw, it is nice that kerneltrap.org has finally had a makeover. The earlier website design looked rather drab.
  • by mi ( 197448 ) <slashdot-2017q4@virtual-estates.net> on Wednesday January 31, 2007 @01:29PM (#17830372) Homepage Journal

    CFQ now uses a time slice concept for disk sharing, similar to what the process scheduler does. Classic work conserving IO schedulers tend to perform really poorly for shared workloads.

    I wonder if the originating process's priority is taken into account at all... It has always annoyed me that the "nice" (and especially the idle-only) processes are still treated equally when it comes to I/O...

    • The article mentions an "ionice".
      • Re: (Score:3, Insightful)

        by mi ( 197448 )

        The article mentions an "ionice".

        Indeed, it does — but should not the I/O-niceness be automatically derived from the process' niceness?

        • by MartinG ( 52587 )
          Maybe there is a case for a userland tool that sets both at once combining the nice and ionice commands into one, but they certainly should not be tied together in the kernel. The kernel is there to provide mechanisms for setting these things, not for deciding what should be linked to what.
          • by mi ( 197448 )

            nice(1) should be doing that (with the help of the kernel-provided mechanisms) then, in my not so humble opinion. Some kind of ionice can be used for finer tuning, but by default a nicer process should be nicer on everything — IO included.

            • by MartinG ( 52587 )
              I would probably agree with you there. In fact, one command could easily handle all of this, doing what you suggest by default and having additional arguments for selecting different CPU and I/O nice values.

              I suspect nice(1) was not changed for backwards compatibility reasons. There would perhaps be corner cases where a process expected its fair share of I/O time but didn't need much CPU (e.g., tar zcf scripts for backups?) and would suffer too much or not complete if it were suddenly I/O starved.
        • You can make it behave that way if you want, but nobody forces you to.
    • I wonder if the originating process's priority is taken into account at all... It has always annoyed me that the "nice" (and especially the idle-only) processes are still treated equally when it comes to I/O...

      Are you sure they are? See the ionice man page [die.net] here:

      Best effort. This is the default scheduling class for any process that hasn't asked for a specific io priority. Programs inherit the CPU nice setting for io priorities.
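
      The split being debated above is visible from userspace: CPU niceness is set with setpriority(2), while the I/O priority that ionice manipulates goes through the separate ioprio_set(2) syscall. The rough C sketch below is one way the combined launcher suggested above could look; it assumes a Linux system whose <sys/syscall.h> exposes SYS_ioprio_set, hand-copies the IOPRIO_* constants (glibc has no wrapper), and maps nice onto best-effort levels 0..7 with (nice + 20) / 5, one plausible mapping consistent with the man page rule quoted above:

      /* Rough sketch: renice the current process, derive a best-effort IO
       * priority from the same nice value, then exec the target command.
       * Error handling is minimal; constants mirror the kernel's ioprio ABI. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>
      #include <sys/resource.h>
      #include <sys/syscall.h>

      #define IOPRIO_CLASS_SHIFT 13
      #define IOPRIO_CLASS_BE    2          /* "best effort" class used by ionice */
      #define IOPRIO_WHO_PROCESS 1

      int main(int argc, char **argv)
      {
          if (argc < 3) {
              fprintf(stderr, "usage: %s <nice> <command> [args...]\n", argv[0]);
              return 1;
          }

          int nice_val = atoi(argv[1]);                 /* -20 .. 19 */
          int be_level = (nice_val + 20) / 5;           /* map onto BE levels 0..7 */
          if (be_level < 0) be_level = 0;
          if (be_level > 7) be_level = 7;

          if (setpriority(PRIO_PROCESS, 0, nice_val) < 0)
              perror("setpriority");

          long ioprio = (IOPRIO_CLASS_BE << IOPRIO_CLASS_SHIFT) | be_level;
          if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, ioprio) < 0)
              perror("ioprio_set");

          execvp(argv[2], &argv[2]);
          perror("execvp");
          return 1;
      }

      Invoked as, say, "./nice-both 10 tar zcf backup.tgz home" (the tool name and usage are invented), it would renice the child and lower its I/O priority in one step.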

  • by rehabdoll ( 221029 ) on Wednesday January 31, 2007 @01:36PM (#17830470) Homepage
    Anticipatory is, according to my menuconfig:

    The anticipatory I/O scheduler is the default disk scheduler. It is
    generally a good choice for most environments, but is quite large and
    complex when compared to the deadline I/O scheduler, it can also be
    slower in some cases especially some database loads.

    Anticipatory is also preselected with a fresh .config
    • Re: (Score:3, Informative)

      by darkwhite ( 139802 )
      CFQ was committed relatively recently and there was discussion for a while as to whether and when to make it default. I think 2.6.19 uses Anticipatory by default, but 2.6.20 will use CFQ by default (not 100% sure though).
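
      Whichever scheduler a given kernel build selects as its default, the active elevator can also be inspected and switched per device at runtime through sysfs, which makes comparing them easier than rebuilding. A small C sketch follows; it assumes the usual /sys/block/<dev>/queue/scheduler layout, uses "sda" as an example device, and needs root to switch:

      /* Print the active IO scheduler for a block device and, if a second
       * argument is given, switch to it by writing the name into sysfs.
       * Assumes the standard /sys/block/<dev>/queue/scheduler interface. */
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          const char *dev  = (argc > 1) ? argv[1] : "sda";   /* example device */
          const char *want = (argc > 2) ? argv[2] : NULL;    /* e.g. "cfq" */
          char path[256], line[256];

          snprintf(path, sizeof(path), "/sys/block/%s/queue/scheduler", dev);

          FILE *f = fopen(path, "r");
          if (!f) { perror(path); return 1; }
          if (fgets(line, sizeof(line), f))
              printf("%s: %s", dev, line);      /* active one shown in [brackets] */
          fclose(f);

          if (want) {                           /* switching requires root */
              f = fopen(path, "w");
              if (!f) { perror(path); return 1; }
              fprintf(f, "%s\n", want);
              fclose(f);
              printf("switched %s to %s\n", dev, want);
          }
          return 0;
      }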
    • by Kadin2048 ( 468275 ) <slashdot@kadin.xoxy@net> on Wednesday January 31, 2007 @03:58PM (#17832658) Homepage Journal
      Are there any hard metrics on what the performance advantages are of various schedulers, under typical load conditions?

      Reading TFA piqued my interest in I/O scheduling and I've been doing some reading on it, and it seems like there are several competing schools of thought, of which Axboe (and potentially the Linux kernel developers generally) represent only one.

      An alternative view, such as this from Justin Walker (a Darwin developer) on the darwin-kernel mailing list [apple.com], holds that it's not worthwhile for the OS kernel to do much disk scheduling, since "the OS does not have a good idea of the actual disk geometry and other performance characteristics, and so we [kernel developers] leave that level of scheduling up to the controllers in the disk drive itself. I think, for example, that recent IBM drives have some variant of OS/2 running in the controller. Since the OS knows nothing about heads, tracks, cylinders for modern commodity disks, it's futile to try to schedule I/O for them." (written Mar 2003)

      Axboe seems to acknowledge that this may sometimes be the case, because they do have the 'non-scheduling scheduler,' which he recommends only for use with very intelligent hardware. However, it seems like some people think that commodity drives are already 'smart enough' to do their own scheduling.

      It seems like determining which approach was superior would be relatively straightforward, and yet I've never seen it done (although maybe I'm just not looking in the right places). Anecdotally, I'm tempted to agree with Axboe, since it seems like, when several processes are all thrashing the disk simultaneously, my Linux machine feels faster than my OS X one, but this is by no means scientific (they don't have the same drives in them, aren't working with the same datasets, etc.).

      On what drives, and under what conditions, is it advantageous to have the OS kernel perform scheduling, and on which ones is it best just to pass stuff to the drive and let the controller do all the thinking?
      • by Sits ( 117492 )
        The sort of "scheduling" you are talking about sound like block reordering. This is where you try and group requests for blocks that you guess are going to be in a similar part of the disk together in the hopes of speeding things up. It's absolutely true that today's disks bear less and less resemblance to the old cylinders, sectors and heads of old disks and most disks have their own cache which can do reordering (not to mention the silent remapping that modern disks do when sectors go bad). Unless the dis
      • Re: (Score:3, Informative)

        by axboe ( 76190 )
        It depends on what you need to schedule. If your drive does queuing and only one process's IO is active, then the OS can do very little to help. The OS usually has a larger depth of IOs to work with, so it's still often beneficial to do some sorting at that level as well.

        IO scheduling is a lot more than that, however. If you have several active processes issuing IO, the IO scheduler can make a large difference to throughput. I actually just did a talk at LCA 2007 with some results on this, you can download th
  • Am I the only one who misread that as "an interview with Jens Axboe, 15 year old Linux veteran"?
  • Is this the part of the kernel that's responsible for making systems really slow during extended disk writes, while the CPU utilization is minimal?
    • I think it would be more correct to say:
      [His] is the part of the kernel that's responsible for making systems slightly less slow during extended disk writes, while the CPU utilization is minimal.

      And even that's not quite true; where the scheduler really comes into play is when you have two or more processes trying to access the disk at the same time. During an extended, sustained read or write, the scheduler probably just needs to stay the hell out of the way and pass data as fast as it can.

      You could al
  • by chuck ( 477 ) on Wednesday January 31, 2007 @02:48PM (#17831396) Homepage
    As a native English speaker, comfortable with Spanish and aware of the basics of French (so I'm not entirely uneducated), I am entirely unequipped to work out the pronunciation of "Jens Axboe." Can someone help me out?
  • by bcmm ( 768152 ) on Wednesday January 31, 2007 @03:39PM (#17832278)
    Thank you very much. Much of this article is informative, technical and really, really nerdy. I for one sit through dupes and rubbish like today's meaningless benchmarking of differing minor kernel versions in the hope of reading articles like this.

    BTW, does anyone have a good set of benchmarks of the performance of different IO schedulers when running one or two or three IO intensive tasks, when running one intensive and many small tasks, etc.? That would actually help me decide whether to rebuild my kernel with CFQ.

    Also, ionice would have made my old machine much more usable when doing backups... Oh well.
    • Also, ionice would have made my old machine much more usable when doing backups... Oh well.

      Is it any different with your new machine? My Athlon X2 (SATA disks, 2GB etc) crawls when I start rsyncing my /home.
  • I was a little disappointed when he said filesystems like Reiser4 and ZFS don't affect the block layer. I'm not sure about ZFS, but I do know that Reiser4 can do stuff above and beyond what the block device layer can do these days.
    How do I know? Why, it's on the Namesys webpage!
    • Re: (Score:2, Interesting)

      by axboe ( 76190 )
      That's largely because they do more than traditional file systems. Some of the ZFS functionality Linux would put in other layers, for instance. Once the IO is issued to the block layer, there's no difference.
