Linux Software

Interview With WOLK Creator Marc-Christian Petersen

Jeremy Andrews writes "KernelTrap has spoken with Marc-Christian Petersen, who originated the WOLK project in March of 2002. WOLK is the Working Overloaded Linux Kernel, a large set of nearly 450 useful patches applied against the current stable 2.4 Linux kernel tree. The project has recently expanded to offer a second 'secure' patchset, this one against the older stable 2.2 tree. In this interview, Marc-Christian Petersen tells the history behind WOLK and discusses many of the patches included."
  • These are fun to play with, and it's all bleeding edge, but I personally wouldn't use them for production systems, obviously.

    I'm wondering though if anybody is, and if so what's the function? A lot of these patches have some very juicy features.

    • Production servers (Score:4, Informative)

      by cpeterso ( 19082 ) on Tuesday June 25, 2002 @05:47PM (#3765698) Homepage

      Actually, I disagree. I have found the WOLK kernels to contain a lot of the fixes and features we needed, all in one convenient package. Of course, I stress-tested the WOLK servers before putting them into the production server room. I would highly recommend that anyone who is curious about the WOLK kernels use them in a production environment.
      • by jaoswald ( 63789 )
        I'm not a kernel aficionado, but the one that gives me the willies is the compressed RAM caching. That sounds like a gimmicky fix for "too cheap to buy real RAM."

        That and the quoted emphasis on MP3/audio performance make it seem like this package is aimed not at real production situations but at personal workstations.

        I realize that these features can be managed individually, but then what is the advantage over managing them oneself?
        • Yeah, and the -O flag to gcc is a gimmicky fix for 'too cheap to buy a faster CPU', and disk caching is a nasty kludge for those who won't buy a faster disk, and...
          • Not to make too big a deal about this, but the -O flag is making an explicit trade-off between compilation speed, run-time safety, and run-time performance. Disk caching trades a fixed amount of RAM capacity against latency of disk operations.

            But using RAM to cache RAM is trickier---you gain a bit of RAM capacity at the expense of CPU cycles to compress/expand the cached pages. Well, when are you going to be needing and using this RAM cache? When you are near the limits of your RAM capacity, and possibly about to start swapping. It seems to me that this is the worst time to involve more complexity in the workings of the kernel: How did you get to be near the limits of your RAM capacity? You are probably spawning additional processes, or adding to the computational load of the processes you have. If you have a server, this is probably happening because of an increase in external load.

            Plus, you are having the CPU touch all these pages which are being compressed because they aren't needed, blowing away all the stuff that was in the cache because it was actually needed for computation. Sure, the CPU in modern computers usually has lots of cycles to spare, but that's because the CPU--memory pathway is a bottleneck. Why try to cram more pages back and forth through the bottleneck to try to make use of the extra cycles? It just seems to make things worse!

            It just feels to my gut like this feature is most "useful" when all hell is about to break loose, and this feature would make hell break loose just a little bit sooner. That's what gives me the willies.

            But hey, if someone wants to explain in technical terms why I'm wrong, please enlighten me.
            • You're right - having compressed pages or a compressed swapfile is trading a lot of extra CPU time for a small reduction in disk activity. And on a busy server, that extra CPU usage might be just what you don't want. However, it's always been the case that disks are extremely slow compared to the processor, and as far as I know the gap is still growing, so this kind of tradeoff is often worthwhile. Compare the increase in complexity of filesystems, again adding a lot of code and a lot of complexity just to save a few disk accesses.

              Also the disk caching isn't as clear-cut as you might think: OSes such as Linux will sometimes swap out processes' pages in order to use more memory for filesystem buffer cache, so there are some subtle tradeoffs there too.

              But a CPU slowdown _usually_ isn't as bad a deal as thrashing because you've run out of RAM. If your CPU gets bogged down then everything runs more slowly, but more slowly in a predictable kind of way. Whereas thrashing can reduce the whole system to a crawl, not just three times slower but a thousand times slower. Still, this is all just speculation, and we'd have to see some benchmarks. Most likely, if your server ever needs to swap so much that it's worthwhile to compress pages, you have inadequate hardware anyway. (The exception being login servers for X terminals, perhaps.)
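
              To put rough numbers on that tradeoff (back-of-the-envelope figures for 2002-era hardware, not measurements of WOLK's compressed cache): a page fault serviced from an IDE disk costs a seek plus rotational latency, on the order of 10 ms. If a fast compressor on a ~1 GHz CPU moves data at, say, 50 MB/s, then compressing or decompressing a 4 KB page costs roughly 4096 / 50,000,000 s, i.e. about 80 microseconds. Under those assumptions, a hit in the compressed cache is about two orders of magnitude cheaper than going to disk, so the scheme wins whenever it actually converts would-be disk reads into compressed-cache hits, and loses only the comparatively small CPU cost on pages that would have stayed in uncompressed RAM anyway.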
    • It's a very quick way to test all the patches. Then only apply the ones with features you want to your production kernel.
      • It's a very quick way to test all the patches. Then only apply the ones with features you want to your production kernel.

        That does not necessarily work. You may not see the problems just by running the kernel for a while. Problems may show up after several days of use, under certain specific conditions. That's why testing by several people is valuable.

        The kernel developers will usually tell you that you shouldn't expect even a stable (2.4, for example) kernel tree from kernel.org to be stable -- you should trust the testing done by your Linux distro instead. (And since WOLK is a set of patches to the kernel.org tree...)

        The "stable" in kernel.org means that tree is not neing used to test new ideas, and instead has a focus on "getting usable". It does not mean it's "solid and ready for producion use". There are several cases of broken 2.4 ("stable") kernels.
    • WOLK is not for production use, but the -secure 2.2 kernel is (I've asked Marc and he later included this in his announcement to lkml).

      I'm using it on a server and it works great.
  • For those of us who use Red Hat, maybe an RPM?
    • by Anonymous Coward
      Your Red Hat kernel already has (I believe) quite a few patches applied on top of the standard, vanilla kernel. Of course, Red Hat's purpose is to make the kernel as stable as possible, not to add as many odd features as possible.
    • by CanadaDave ( 544515 ) on Tuesday June 25, 2002 @05:39PM (#3765660) Homepage
      It's so easy to compile the kernel from source. Red Hat has a nice PDF tutorial about this somewhere on their website. Sorry I can't get the link for you right now. It is actually a chapter in their user manual, I think. I learned how to do this when I was a newbie and it wasn't too bad... even for a newbie.
    • Download the linux-2.4.18-WOLK3.x-fullkernel.tar.bz2 file into /usr/src

      type:

      tar jxvf linux...whatever...tar.bz2

      to extract it

      type:

      mv linux-2.4.18-WOLKwhatever linux

      to rename the directory it creates to "linux", so other stuff you might build can use it.

      type:

      cd linux
      make menuconfig dep bzImage modules modules_install

      Play with all the features! Make sure in the first menu item, you enable "experimental features".

      Then if you don't die with an "error 1" or something similar, run Linuxconf, go to

      Boot --> Lilo --> Add a kernel I've just compiled

      and play!

      Whatever you do, MAKE SURE you don't overwrite your current (*working*!) lilo/kernel entry! Use a different name.
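
      For reference, here is the whole thing as one shell session (the file and directory names are examples, so adjust them to whatever WOLK release you actually downloaded; on a 2.4-era tree the separate make steps below are equivalent to the combined command above):

      cd /usr/src
      tar jxvf linux-2.4.18-WOLK3.x-fullkernel.tar.bz2   # extract the WOLK tree
      mv linux-2.4.18-WOLKwhatever linux                 # whatever directory the tarball actually created
      cd linux
      make menuconfig        # pick features; turn on "experimental features" first
      make dep               # build dependency info (needed on 2.4-era kernels)
      make bzImage           # compile the kernel image
      make modules           # compile the modules
      make modules_install   # install them under /lib/modules/<version>
      cp arch/i386/boot/bzImage /boot/vmlinuz-wolk       # example name only; keep your old kernel!

      Then add a *new* lilo entry pointing at the new image (or use the Linuxconf route above) and run /sbin/lilo before rebooting.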

      I've relied on WOLK for a lot of neat drivers and speed/reliability fixes I just can't get if I try to patch the bare kernel myself.

      WOLK is the most valuable project out there to the enterprise... it *REALLY* makes Linux kick butt when it comes to server-room type hardware. Hats off to everyone involved.

      mindslip
  • (First) fork? (Score:3, Interesting)

    by netsharc ( 195805 ) on Tuesday June 25, 2002 @05:49PM (#3765712)
    Is this the first big "unofficial" fork of the Linux kernel? We've had different trees, but those have been maintained by people who are very close to kernel development (I only know of AC, but I believe there are more), whereas this tree seems to have come out of nowhere. I hope this isn't the event that marks the beginning of Linux following the old Unix history.

    I guess that's the idea of a modular and open-source kernel: you can add things you want and remove things you don't want, but somehow adding patches from "outsiders" makes me feel I'm not running a *real* Linux system - The Way Linus Intended(TM).

    OTOH, my LFS [linuxfromscratch.org] system is unique, with changes that make it different from standard Linux systems - including patches to the kernel - and I love the thing, so I guess there's no need to feel "guilt" over it.
    • Re:(First) fork? (Score:2, Informative)

      First? Nope - before WOLK there was FOLK (the Functionally Overloaded Linux Kernel).

      Fork? Nope, just some guy, with the help of a few, bringing to the unwashed masses the kernel that people with the time to hunt down and find patches already have.

      It's a patch set against the main kernel - just with a crapload of really good stuff added on. Stuff that is out there, in the great open wastes of the mailing lists.

      ps.. WOLK is a great topping for your LFS cake.
  • ..had this many features!

    It's an orgy of features!

  • by Bollie ( 152363 ) on Tuesday June 25, 2002 @06:02PM (#3765765)
    This is just my personal wishlist:

    1) A standard hardware acceleration layer for 2D and 3D cards, something we can ask the NVidia people to add to their drivers and code equivalents for other cards.
    2) Wine integration. Routing Win32 messages through the kernel would be kinda nice.
    3) Java acceleration. Hooks for some standard Java functions: this would help a lot in some specific embedded situations.
    4) ACL support for ports and stuff (like the security patches).
    5) A standard "driver package" format containing the kernel module, user-mode tools and installation instructions for binary only (yecc) drivers. (One driver fits all distros!)

    I've been working with Linux-based systems since '97 and I have to say, it's just getting better and better. I'm sure a lot of the above would actually not be good in most kernels, but since one of Linux's strong points is scalability, I'd really like to see Linux take on the desktop, handheld and server markets!
    • by Pemdas ( 33265 ) on Tuesday June 25, 2002 @06:58PM (#3766003) Journal
      All of these are IMHO, of course...

      1) A standard hardware acceleration layer for 2D and 3D cards, something we can ask the NVidia people to add to their drivers and code equivalents for other cards.

      I agree with this, and hope that the future holds running XFree86 (or Berlin or whatever) on top of the standard Framebuffer/DRM interface. It's already possible to run XFree86 on the framebuffer driver, but the system needs to have kinks ironed out; most of that work is happening on non-x86 ports, though.

      2) Wine integration. Routing Win32 messages through the kernel would be kinda nice. 3) Java acceleration. Hooks for some standard Java functions: this would help a lot in some specific embedded situations.

      No. Just no. There's no significant gain to be had from putting any of that in the kernel. Furthermore, this is userspace stuff. The fact that the kernel has good stuff in it doesn't mean everything should go into the kernel...

      4) ACL support for ports and stuff (like the security patches).

      I have mixed feelings about this. The current security model seems fine for the typical user, even though ACLs are really wonderful for larger servers with more nebulous administration structures. Overall, though, I think you're right; this stuff should get in eventually.

      5) A standard "driver package" format containing the kernel module, user-mode tools and installation instructions for binary only (yecc) drivers. (One driver fits all distros!)

      This is a complete don't care for me. If it comes with a binary-only module, I don't buy it. I'm still running a Matrox G400 at home as a result. Generally, I think kernel developer sentiment is turning more and more negative towards binary-only drivers--I wouldn't expect the community to do much, if anything, to make it easier for such developers.

      • > > Wine integration.
        > this is userspace stuff

        Heck, it's "userspace" stuff on Windows NT/2000/XP, as well. CSRSS (Client-Server Runtime Subsystem) is the Win32 operating environment server, and is a native NT application. It talks directly to the NT kernel and exports the Win32 API. "Windows applications" talk to CSRSS. Windows NT/2000/XP is a lot like "Wine running on VMS."
      • 5) A standard "driver package" format containing the kernel module, user-mode tools and installation instructions for binary only (yecc) drivers. (One driver fits all distros!)

        This is a complete don't care for me. If it comes with a binary-only module, I don't buy it. I'm still running a Matrox G400 at home as a result. Generally, I think kernel developer sentiment is turning more and more negative towards binary-only drivers--I wouldn't expect the community to do much, if anything, to make it easier for such developers.

        I agree. A better solution would be to have a mechanism in place that allows a driver module to be compiled from source easily, without needing a previously compiled kernel. Maybe this mechanism exists; I don't know. I've never seen it, though.

        Ideal scenario:
        A new device is being developed. The manufacturer writes a driver for all relevant kernel versions and mails it to Linus. At some point after this, the driver shows up in the relevant kernels. Meanwhile, the new hardware hits the market with a source-only driver and a compile script that figures out which kernel is present on the system, for the people who do not run the newest kernel.

        If a scenario like this is formalized, it will give hardware manufacturers more incentive to write drivers for Linux, since it will be easier to guarantee that the hardware will work with a stable kernel.
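
        A sketch of what such a compile script might look like for a 2.4-era kernel (the driver name, paths and flags here are illustrative, not an existing package format):

        #!/bin/sh
        # build-driver.sh -- hypothetical helper shipped alongside a source-only driver
        KVER=`uname -r`                          # kernel the user is actually running
        KSRC=/lib/modules/$KVER/build            # headers for that kernel, if the symlink exists
        [ -d "$KSRC" ] || KSRC=/usr/src/linux    # otherwise fall back to the traditional location

        # classic 2.4-style out-of-tree module compile: define __KERNEL__ and MODULE
        # and point the include path at the matching kernel headers
        gcc -O2 -Wall -D__KERNEL__ -DMODULE -I$KSRC/include -c mydriver.c -o mydriver.o

        DEST=/lib/modules/$KVER/kernel/drivers/misc   # install location is a packaging choice
        mkdir -p $DEST
        install -m 644 mydriver.o $DEST/
        depmod -a                                # rebuild the module dependency list

        Something along those lines would let a vendor ship one source tarball and still have the module built against whatever stable kernel the user happens to be running.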
    • Read/Write NTFS.

      I would _use_ that. Currently, transferring things from one OS to the other on a WinXP/Linux dual-boot system is a pain (I'm using an smb share on a different PC on the network...).

      Yeah, I know WinXP can theoretically use FAT, but I don't (call me crazy) particularly want to have to reinstall it, and it came preinstalled on NTFS.
    • 1) A standard hardware acceleration layer for 2D and 3D cards, something we can ask the NVidia people to add to their drivers and code equivalents for other cards.

      The problem with this is that then misbehaving 2D and 3D programs can mess with your kernel. There's already a "standard hardware acceleration layer" for 3D called DRI, the Direct Rendering Infrastructure (I'm guessing you know that, but some people may not). I have personally hard-locked my system and even corrupted my hard drive by compiling and testing buggy code against the DRI-accelerated version of Mesa. I'm not even sure I was running as root at the time! (If you allow non-root users to use DRI for 3D graphics by setting "Mode 0666" in the "DRI" section of XF86config-4, can't they do the same kind of damage?) For this reason, I would argue that kernel-level graphics acceleration is kind of dangerous. Perhaps coding DRI differently could prevent these problems, I don't know (possibly at the cost of performance?).

      For 3D, I think that having kernel-level acceleration is inevitable. You just can't get good performance even on the fastest video cards without some sort of kernel-level rendering interface. Even the non-DRI nVidia drivers use a kernel module (and sure enough, I've hard-locked my Linux box using both DRI-based drivers and nVidia's drivers, for different video cards). Even "good" performance is, of course, not really sufficient. Fancier and more detailed 3D rendering is going to push video cards to the limit for years to come. So for now, we're going to need all the performance we can get, even if that means risky kernel-level 3D acceleration.

      For 2D, on the other hand, who needs the extra performance? Everything 2D that I do is already more than fast enough even without kernel-level acceleration. I say just let X be the 2D standard for Linux. It's already becoming the de facto standard, as SVGAlib, GGI and other 2D rendering libraries seem to be getting used less and less. And even if it's not as fast as it could be, X must be safer than using the kernel framebuffer or some other kernel-level 2D interface.

      • For 3D, I think that having kernel-level acceleration is inevitable.

        It's not required. The utah-glx project, which provided 3D support for the XFree86 3.x series, did not need a kernel driver. The DRI kernel driver is needed for properly storing the video card state when switching tasks, to allow multiple programs to make use of the video card. The real 3D work is still in the XFree86 driver.
  • Interesting... Linux was created because minix changes could *only* be distributed as patches (modified minix source code couldn't be redistributed).
    • Linux was created because minix changes could *only* be distributed as patches (modified minix source code couldn't be redistributed).

      It's the same deal with QPL [trolltech.com] software such as much of PHP 4.

  • I use WOLK for XFS + ALSA integration. It's really nice that I don't have to recompile ALSA when I recompile my kernel, because it's integrated into the kernel! And nice XFS integration too. Not to mention the nice Debian boot logo (although I use Slackware :/). And much more! Anyway, if you want to have a look at it, check www.sourforge.net/projects/wolk or http://wolk.sf.net
  • He says "the O(1) scheduler really only helps if you have more than ~200 processes, below that, it slows everything down a lot" (or something similar).

    That doesn't make sense. If the new scheduler is O(1), it should be faster in all cases, and under no circumstances slow things down.
    • Hi there,

      it makes sense. Please read the following URL, because it's too big to fit in here as a comment :(

      https://sourceforge.net/forum/forum.php?thread_id=697422&forum_id=161196

      There are 10 issues explained by William Lee Irwin III (wli). I talked with wli on the internet relay chat.

      ciao, Marc
    • If the new scheduler is O(1), it should be faster in all cases, and under no circumstances slow things down.

      This reflects a fundamental misunderstanding of what asymptotic complexity statements like O(1), O(n), etc. mean.

      An O(1) algorithm is not guaranteed to be faster than an O(n) algorithm for all values of n, only for values of n that are "large". (Yes, it's meant to be vague.)

      Consider two algorithms that accept an input n bytes long. One takes 20 seconds to run, the other takes 2n seconds to run. The first is O(1), the second is O(n), but the first is slower than the second for n < 10.
    • It does make sense. The math behind why has been pointed out by others, so I'll just offer an analogy...

      Is it faster to drive your car, or walk?

      Depends how far you're going. If you're going several miles, it's going to be a lot faster to drive. If you're going across the street, it's faster to walk, because you avoid the overhead of going out of your way to where the car is parked, cleaning the snow off the windshield, unlocking and starting the car, parking the car, and getting out. When I was in high school, I walked. Once a friend who was just arriving offered me a ride from the end of my driveway. We ended up parking further from the school door than where he picked me up.

      The O(1) scheduler is the car. It'll get you to that college in the next state faster, but it's no help for getting across the street.
    • No, this is called Big-O notation in computer science. Up to some crossover value of n, the O(n) algorithm can actually be faster than the O(1) one. Past that value, the O(n) algorithm's running time keeps growing with n, while the O(1) algorithm's running time stays roughly constant.
  • Seems like a great idea to me, but this isn't a new idea, at least I don't think so. Have you seen the distribution kernels (esp. Mandrake)? They're very patch-happy (not that that's a bad thing, because there are a ton of fixes). I wonder if they ship with XFS; I wish XFS would be merged into the kernel, because it's very stable (I run it on many systems), unlike ReiserFS (which, BTW, is in the official tree), and it's fast, so why not? :) But as far as I know it's not even in the 2.5.x tree (correct me if I'm wrong), and those guys are probably having a hard time making patches against both 2.5.x and 2.4.x when new versions come out.
  • by yerricde ( 125198 ) on Tuesday June 25, 2002 @09:24PM (#3766511) Homepage Journal

    Compressed caching is the introduction of a new level into the virtual memory hierarchy. Specifically, RAM is used to store both an uncompressed cache of pages in their 'natural' encoding, and a compressed cache of pages in some compressed format. By using RAM to store some number of compressed pages, the effective size of RAM is increased, and this way the number of page faults that must be serviced by very slow hard disks is decreased.

    This is exactly the technique that Connectix's "RAM Doubler", a replacement for the Macintosh System 7 virtual memory manager, used way back in 1996. I wonder if Connectix has a patent on it.

    SuperMount has the ability to access your cd's/floppies on the fly without need to mount / umount them every time.

    Mac OS has automounted removable media since 1984.

    It's good to see that Linux is progressing as a kernel for a workstation OS. But even its major proponents admit that it has some catching up to do. WOLK is a step in the right direction.

  • I seem to recall seeing freshmeat posts about the FOLK kernel with similar aims (ie linux+every possible patch, look n' see what breaks). Anybody know if this is the same project?

    Either way, it seems like a good idea, so He-Who-Doesn't-Scale has a good idea of which bleeding-edge patches sorta work, which ones work great but are too specialised, and which ones barf & die spectacularly :-)
