Kernel Hacker Keith Owens On kbuild 2.5, XFS, More

Jeremy Andrews writes: "KernelTrap interviews Keith Owens this week, an experienced kernel hacker who has long contributed to the Linux kernel. His contributions include updating ksymoops and modutils, both of which he maintains. He also works on kbuild 2.5, and earlier he built the original Integrated Kernel Debugging patch. He's also working on kdb and XFS. Check out the interview." Lots of good information in here about things to expect in 2.5.
  • I just compiled 2.4.13 and rebooted. Now I've lost the use of my keyboard. I wonder if 2.5 will do that.
    • Perhaps you chose the wrong kernel configuration; left out USB keyboard support, perhaps? Don't blame the kernel for your inability to configure your system. Besides... 2.5 will be a DEVELOPMENT branch... less stable, not more so. Think before you post.
  • Going from the feel of my butt, not from benchmarks, XFS has been pretty stable on my desktop machine. However, I've been experimenting with JFS, ReiserFS, and currently ext3, and so far (remember, no benchmarks) XFS seems the slowest of the four :-( ReiserFS and JFS seem to be the faster two. Nonetheless, when I get my new Cisco DSL router and 24-port Bay Networks switch in, and I turn my current router into a DNS/webcam/streaming-MP3 server, XFS will be my FS of choice. Thank God for good coders! :-)

  • This is so cool (Score:3, Interesting)

    by Anton Anatopopov ( 529711 ) on Saturday October 27, 2001 @09:36AM (#2487112)
    I have often wanted to have a go at kernel programming. I want to try to write some device drivers, but I am always too scared of this 'black art'. It's good to see someone taking time out to make it a bit more comprehensible for 'the rest of us'.

    I'm wondering, do kernel developers use tools like VMware/plex86 to debug their running kernels? It seems like we've come a long way since debugging with strategically placed printfs.

    • First: they are strategically placed printks, not printfs.
      And no, they are still useful. As Linus has said (and you know Linus is always right, don't you?), they force you to think about what you are doing before trying another run.

      Trust me, nothing beats a bunch of printks and a serial console when debugging a kernel.
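
      For the curious, a minimal sketch of that technique; the device name, port address, and handler here are entirely hypothetical, not from any real driver:

          /* Sketch: printk-based tracing in a 2.4-style interrupt handler.
           * mydev_interrupt and MYDEV_STATUS are made-up names. */
          #include <linux/kernel.h>   /* printk(), KERN_DEBUG */
          #include <asm/io.h>         /* inl() */
          #include <asm/ptrace.h>     /* struct pt_regs */

          #define MYDEV_STATUS 0x300  /* hypothetical I/O port */

          static void mydev_interrupt(int irq, void *dev_id, struct pt_regs *regs)
          {
                  unsigned int status = inl(MYDEV_STATUS);

                  /* Log the state before acting on it; booted with
                   * console=ttyS0, the message lands on the serial console
                   * even if the machine locks up a moment later. */
                  printk(KERN_DEBUG "mydev: irq %d status 0x%08x\n", irq, status);
          }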

      Jeroen
    • Re:This is so cool (Score:3, Informative)

      by rasmus_a ( 30980 )
      > I have often wanted to have a go at kernel programming. I want to try to write some device drivers, but I am always
      > too scared of this 'black art'. It's good to see someone taking time out to make it a bit more comprehensible for 'the rest of us'.

      Try www.kernelnewbies.org. Especially look in the book section for Linux Device Drivers, 2nd edition, and the online version.

      Wrt. debugging, printks are still used alongside everything else. I do not think that debugging is done with VMware or plex86 yet, but there is a port of the kernel to userland (User-Mode Linux) which is used by some.

      Rasmus
    • Almost... :)

      Jeff Dike and User Mode Linux [sourceforge.net] to the rescue.
    • Re:This is so cool (Score:3, Interesting)

      by SurfsUp ( 11523 )
      I'm wondering, do kernel developers use tools like VMware/plex86 to debug their running kernels? It seems like we've come a long way since debugging with strategically placed printfs.

      VMware or plex86 could possibly be of some use, except that they're no good for debugging device drivers, since the real devices are hidden behind a virtualization layer. For non-device-driver work, User Mode Linux [sourceforge.net] is a more lightweight solution, and it tracks the latest development kernels much more closely. An increasing number of the core developers are using User Mode Linux regularly.

      For heavy debugging work on live kernels, kgdb is the preferred solution, with a serial cable link to a test machine. It takes a little more work to set this up, and you need two machines. Kdb is a simpler debugger that can be patched into the kernel, useful for tracking down elusive kernel problems. It's included by default in SGI's XFS patch and pre-patched kernels.

      There are some great tools available, including LTT (the Linux Trace Toolkit) and various lock-monitoring patches. Unfortunately, most driver development is still being done by the printk/reboot method. If this is your preferred method, make sure you install a journalling filesystem, unless you like spending most of your time watching fsck work.

      --
      Daniel
  • interview's Keith Owens

    There's no need for an apostrophe* in "interviews".

    * The superscript sign ( ' ) used to indicate the omission of a letter or letters from a word, the possessive case, or the plurals of numbers, letters, and abbreviations. (from dictionary.com)
  • You need to fix the link to kbuild on sourceforge. =)
    • Hmm, on second thought... MOST of those things are broken. They all have "target=new" at the end... you guys really need to proofread this stuff.
  • I wonder whether the Linux kernel developers will seriously look at incorporating support for the Advanced Configuration and Power Interface (ACPI) in the 2.5 development kernels.

    This could be very significant since ACPI allows for highly-automated system configuration, which is necessary if you want seamless hot-docking of external devices and ease of system upgrades.
    • Yes. Although nothing is set in stone, hibernate and sleep modes will most likely be incorporated. By the time 2.5 is started, ACPI will have already replaced APM in most systems. This should make supported systems the rule and not the rare exception they are today.

      http://content.techweb.com/wire/story/TWB20010404S0002

      Here is a great article from the kernel summit, with more 2.5 info:
      http://lwn.net/2001/features/KernelSummit/

    • See http://lwn.net/2001/features/KernelSummit/ [lwn.net] for a fairly comprehensive list of changes planned for 2.5.

      The third section up from the bottom covers what they plan to do for power management.

      Intel's ACPI implementation is out there now, and is being used by FreeBSD. They are currently waiting for the 2.5 fork to submit it for Linux. As a measure of the complexity of ACPI, consider that this implementation has "5-7 person years" of development work in it already, and does not yet have support for putting systems to sleep.

      So ACPI is important, despite its bulk. 2.4 already has the ACPI interpreter in it, but 2.5 will be where we see a truly working implementation of this standard.

  • by wowbagger ( 69688 ) on Saturday October 27, 2001 @10:24AM (#2487162) Homepage Journal
    While I can certainly understand Linus wanting to encourage would-be kernel developers to learn that "a gram of analysis is worth a kilo of debugging", I do wish he would consider one area in which a kernel debugger is invaluable: hardware integration.

    "In theory, there is no difference between theory and practice. In practice, there is." In hardware development, there is the theory of what the hardware documentation says the chip will do, and then there is the practice of what it actually does. DMA's don't, interrupts stick, registers report old data. Obviously, you START by writing a user space app that pokes at the hardware (and this is one area in which Linux is head and shoulders above WinNT - there is NO way for a user space app to access hardware in NT, while in Linux you simply have to be root), but when you finally need to hook interrupts, allocate DMA buffers, etc., you need a debugger that can look at these events.

    Also, when porting to other CPUs, you sometimes need to see what is going on at the hardware level, and how it affects the drivers in the kernel.

    Yes, allowing debugging without analysis is bad. But throwing us back to the stone knives and bearskins era just to encourage hardier folks is an overreaction. Sure, make a KDB kernel bitch and moan during startup. Make it allow only root access, not normal user access. Force all file systems to run in full sync mode. But please don't make debugging buggy hardware any harder than it needs to be.

    (Now, if only AMD would add a JTAG debugger to the Athlon chip, I'd be a happy man.)
    • (Now, if only AMD would add a JTAG debugger to the Athlon chip, I'd be a happy man.)

      Have you ever used JTAG? It's the buggiest thing I've ever seen. I've used it a lot on PPC, and what they always fail to mention is that the JTAG controller has to know what rev mask you're using on the chip. If anything is amiss, you'll get random operation. Not to mention the way it interacts badly with caches. And also that most board manufacturers fsck their JTAG ports up. Did I also fail to mention that the damn JTAG controllers cost like 5-8 grand? In the end, all it does is restrict development to groups willing to lay down big money for the privilege of developing for your system.

      t.

      • Your experiences are quite a bit different from mine. On all the DSPs that I've worked with, the JTAG debugger was around US$1K or less.

        True, the debugger program has to know what mask device you are working with, but that is usually a simple matter of selecting the correct mask when the debugger is launched.

        Perhaps you are thinking of a full JTAG implementation, with full scan chain support et cetera. I am talking about a simple implementation like Motorola's Background debugging mode support.

        Besides, if you think JTAG is flaky, try using a bond-out pod. I've not seen bond-outs for CPUs faster than about 20MHz, and those were always "stand on one foot, hold your tongue just right, think happy thoughts, and don't breathe while debugging" affairs.
  • Global Makefile! (Score:4, Insightful)

    by swillden ( 191260 ) <shawn-ds@willden.org> on Saturday October 27, 2001 @10:38AM (#2487175) Journal

    From the interview:

    ...kbuild 2.5 builds a global Makefile from Makefile.in fragments in each directory then compiles and links only the sections of code that absolutely need to be recompiled

    This is excellent, and I hope more open source projects start to go this way. It's been known for a while that recursive make is a bad idea [pcug.org.au] because it's inaccurate: naive recursive makefile structures tend to miss stuff that needs to be built or installed, and fixing that problem (usually with ugly hacks like make dep) generally results in building stuff that doesn't need to be built.

    What Keith describes is a nice solution that provides the benefits of recursive make without the problems: use per-directory makefile fragments which can be maintained locally, but automatically generate a complete, tree-wide makefile that is actually used for the build.
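
    A toy illustration of the idea (the file names and layout are hypothetical, not kbuild 2.5's actual syntax): each directory contributes a fragment listing its objects, and the top-level Makefile includes every fragment, so one make invocation sees the complete dependency graph instead of recursing blindly:

        # drivers/net/Makefile.in -- per-directory fragment (hypothetical)
        objs += drivers/net/foo.o drivers/net/bar.o

        # Top-level Makefile -- pulls in every fragment so make sees
        # the whole tree. (Recipe lines start with a tab.)
        include $(shell find . -name Makefile.in)

        vmlinux: $(objs)
        	$(LD) -o $@ $(objs)

        %.o: %.c
        	$(CC) $(CFLAGS) -c -o $@ $<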

    There are tools other than make that provide more elegant solutions, but given that they never seem to catch on, I'm happy to see that someone is applying the tool we have (make) correctly, for once.

    I'm looking forward to this one.

    • Thanks for the link. It was very convincing. My library project will soon have a single global Makefile, once I get that idea working with the autoconf stuff already in place.
  • I'm setting up a computer lab in my school using Red Hat Linux. It's strictly for Web surfing, so I'm using IceWM and Netscape, which starts automatically, and I have an autologin script.
    I was using Red Hat 7.1 with XFS, but now that Red Hat 7.2 has come out with ext3, I'm considering switching and re-ghosting the machines.
    Is ext3 more stable than XFS?
    Is the kernel that came with the SGI distro of Red Hat 7.1 stable? Or should I switch to 7.2 with ext3? Which would I be better off with in the long run? Which will run the longest? And survive the most power outages? (This is going in the 7th-8th grade building.)
    Thanks
    • If this machine is often powered off improperly, I would install ext3 on it rather than XFS. XFS has quite a few advantages over ext3, but "not losing data after hitting reset" is definitely not one of them.

      Basically, in ext3-speak, there are two kinds of metadata journaling. Writeback mode, supported by all the Linux journaling filesystems, will guarantee that your directory and file structure is consistent after a hard crash, but it doesn't make any guarantees about file data. This level of protection is about the same as ext2 plus a really fast fsck on boot, so you might see files containing blocks from other files in ReiserFS, for example. However, XFS is worse than that: with its delayed-allocation feature, you'll see entire files zeroed out after a crash. (See their mailing list archives for details.)

      ext3 supports this mode too, but the default is ordered mode, which forces stricter ordering on data writes. Data always goes to disk before file metadata is updated, so you'll either see the "right" data after a crash or the old data -- but never damaged data.

      AFAIK, ext3 is the only Linux journaling filesystem with working ordered-mode support, though ReiserFS apparently has patches in the works.
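
      For reference, ext3's journaling mode is chosen at mount time with the data= option; the device and mount point below are hypothetical:

          # ordered is the default, shown explicitly for illustration
          mount -t ext3 -o data=ordered /dev/hda2 /home

          # writeback: metadata-only guarantees, like the other journaling FSes
          mount -t ext3 -o data=writeback /dev/hda2 /home

          # journal: file data goes through the journal too; strictest, slowest
          mount -t ext3 -o data=journal /dev/hda2 /home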
      • You are exaggerating some things here.

        The only way to guarantee that things do or don't get out to a drive is to run fully sync'd with the cache on the drive disabled. I do know that I can run in fully synchronous mode on XFS and guarantee that the write got out, but then you are throwing away all of your system cache and your system will be bogged to hell and back. Ext3 ordered mode is faster than XFS, Reiser, etc. because it essentially doesn't do journaling anymore (whereas the others would write to the journal and then write the data out before the commit is done). When you really start to do any mildly heavy I/O, this mode pukes over itself, since it requires all the data to get written to the drive before the transaction is considered committed. When you use either ordered or full data-journaled mode on ext3, you throw out all of your filesystem cache, and you had better turn off the cache on your drive.

        A word of advice: *never* leave your drive cache on with ordered mode. Turn off the power to the drive, and all those supposedly committed writes that were guaranteed to get out to the drive are not there. Now you are completely screwed: you have to do a *full* fsck of the entire fs, since ordered mode isn't journaled. If you are running in data-journalling mode on ext3 you do get the journal, and it still blocks the transaction until it gets written to disk, but having a journal means you take the two-write hit, just like XFS, Reiser, etc. running in synchronous mode.

        So unless you are willing to take a performance hit, ext3 gains you nothing over XFS, Reiser, or JFS. Even then, depending upon what you are doing, it may be faster to run with a full synchronous journal (on any one of them) with the drive cache turned on, making the ordered-mode performance benefit nil. Any FS can guarantee that data will get out to the drive; I doubt any serious server will ever want to run that way. If you can safely assume that your system will stay up, ext3 performance in writeback mode sucks rocks compared to pretty much all the others. So the only benefit I see to ext3 (and admittedly it is a fairly significant one) is the ability to go from ext2 to ext3 without any data migration required.
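
        (That upgrade path, for what it's worth, is a one-liner; the device name here is hypothetical:)

            # Add a journal to an existing ext2 filesystem, in place
            tune2fs -j /dev/hda3
            # ...then mount it as ext3 (or update /etc/fstab accordingly)
            mount -t ext3 /dev/hda3 /data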
        • So the only benefit I see to ext3 (and admittedly it is a fairly significant one) is the ability to go from ext2 to ext3 without any data migration required.

          Is it really that hard to convert from ext2 to ReiserFS or XFS? I've never tried it.

          I just installed WinXP and converted my 10 GB FAT32 partition to NTFS. The conversion took about two reboots and 10 minutes. It was totally automatic, with no input necessary on my part. Is it that much harder to convert in Linux?

        • ...after a series of filesystem corruptions on four different machines, using different versions of ReiserFS with many different kernels from 2.4.2 to 2.4.12, with different SCSI disks as well as several IDE drives, on systems ranging from a Dell Inspiron 8000 notebook over some homegrown single-PIII and dual-PIII boxes on different mobos to a Dell dual-P4 Rambus system. In the last twenty years I have never seen anything like this:

          After power cuts on frozen development systems, it regularly happened that files written minutes ago were completely corrupted; they were there, but with just garbage in them. What you have written explains what probably happened; however, it troubled me that files written minutes ago were affected. What really pushed me to throw out ReiserFS on every machine was when, after a crash, every file I had created within the last two hours was destroyed; I never thought a filesystem might take out many hundreds of files with such precision. Even if I would not blame ReiserFS for this disaster (I do), I consider it completely unacceptable that all this happened without the slightest warning: no entry in the syslog, no boot message, nothing. ReiserFS pretended everything was fine. Do you have any explanation for such behaviour? Are such effects just the downside of using a journaling fs, or is it something ReiserFS-specific? What added to my loss of confidence in ReiserFS was that a few months ago reiserfsck dumped core when I tried to repair a file system that showed strange behaviour, which I regarded as exceptional at the time.

          For now I have switched back to ext2, and I feel pretty good seeing a thorough filesystem check after a crash. I do not remember much trouble using XFS with IRIX, but I have no experience so far with any journaling fs on Linux except those mentioned above. So do you have any recommendation for a filesystem on an unstable development system, where I cannot sacrifice too much performance but need at least some confidence in the integrity of my fs? (I did not lose much data, but it easily takes a few hours to bring back a system from backups, and unnoticed damage to vital files can drive you crazy.) p.

  • by shao ( 70467 )
    I am kind of interested in his words on PRCS.
    What exactly can PRCS do better than CVS in terms of maintaining multiple branches?

    Are there any known big open source projects currently using PRCS?
  • ... you answer questions in an interview with links :)

    JA: Why does Linus refuse to include kdb?

    Keith Owens: http://www.lib.uaa.alaska.edu/linux-kernel/archive/2000-Week-36/0575.html

    JA: Why should it be included?

    Keith Owens: http://marc.theaimsgroup.com/?l=linux-kernel&m=96865229622167&w=2
  • It's not good (Score:2, Insightful)

    by Bruj0 ( 114447 )
    Linus is right, kernel debuggers are not a good thing. You learn to fix the symptoms, not the disease. If you want to code for the kernel, you'd better learn it the hard way, i.e. lots of hours rebooting and thinking about what went wrong.
    But it's a good thing that a kernel debugger exists; it will help you understand how things work inside. But it WON'T help you FIX things.

    bruj0-
  • BTW, anyone know where I can find an ext3 patch for 2.4.13? I'm also looking for the maestro3 module. They come with Red Hat 7.2, but only on 2.4.9...
  • I noticed that kernels 2.4.10 and later now have a kernel debugger option available under "Kernel hacking". So does the kernel now have a kernel debugger?
