Interview With WOLK Creator Marc-Christian Petersen
Jeremy Andrews writes "KernelTrap has spoken with Marc-Christian Petersen, who originated the WOLK project in March of 2002. WOLK is the Working Overloaded Linux Kernel, a large set of nearly 450 useful patches applied against the current stable 2.4 Linux kernel tree. The project has recently expanded to offer a second 'secure' patchset, this one against the older stable 2.2 tree.
In this interview, Marc-Christian Petersen tells the history behind WOLK and discusses many of the patches included."
Not for production? (Score:1)
I'm wondering though if anybody is, and if so what's the function? A lot of these patches have some very juicy features.
Production servers (Score:4, Informative)
Actually, I disagree. I have found the WOLK kernels to contain a lot of the fixes and features we needed, all in one convenient package. Of course, I stress-tested the WOLK servers before putting them into the production server room. I would highly recommend that anyone curious about the WOLK kernels use them in a production environment.
Re:Production servers (Score:2, Insightful)
That and the quoted emphasis on MP3/audio performance make it seem like this package is aimed not at real production situations but at personal workstations.
I realize that these features can be managed individually, but then what is the advantage over managing these by oneself?
Re:Production servers (Score:1)
Re:Production servers (Score:1)
But using RAM to cache RAM is more tricky---you gain a bit of RAM capacity at the expense of CPU cycles to compress/expand the cached pages.

Well, when are you going to need this RAM cache? When you are near the limits of your RAM capacity, and possibly about to start swapping. It seems to me that this is the worst time to involve more complexity in the workings of the kernel: how did you get to be near the limits of your RAM capacity? You are probably spawning additional processes, or adding to the computational load of the processes you have. If you have a server, this is probably happening because of an increase in external load.

Plus, you are having the CPU touch all these pages which are being compressed because they aren't needed, blowing away all the stuff that was in the cache because it was actually needed for computation. Sure, the CPU in modern computers usually has lots of cycles to spare, but that's because the CPU--memory pathway is a bottleneck. Why try to cram more pages back and forth through the bottleneck to try to make use of the extra cycles? It seems just to make things worse!
It just feels to my gut like this feature is most "useful" when all hell is about to break loose, and this feature would make hell break loose just a little bit sooner. That's what gives me the willies.
But hey, if someone wants to explain in technical terms why I'm wrong, please enlighten me.
Re:Production servers (Score:1)
Also the disk caching isn't as clear-cut as you might think: OSes such as Linux will sometimes swap out processes' pages in order to use more memory for filesystem buffer cache, so there are some subtle tradeoffs there too.
But a slowdown in CPU _usually_ isn't as bad a deal as thrashing by running out of RAM. If your CPU gets bogged down then everything runs more slowly, but more slowly in a predictable kind of way. Whereas thrashing can reduce the whole system to a crawl, not just three times slower but a thousand times slower. Still this is all just speculation and we'd have to see some benchmarks. Most likely if your server ever needs to swap so much that it's worthwhile to compress pages, you have inadequate hardware anyway. (The exception being login servers for X terminals, perhaps.)
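The tradeoff being argued here can be sketched with rough numbers. A toy model in Python (all latency figures are illustrative assumptions, not measurements; only the zlib compression is actually exercised):

```python
import zlib

PAGE_SIZE = 4096  # bytes, a typical x86 page

# A text-like page compresses well; a page of random data would not.
page = (b"GET /index.html HTTP/1.0\r\nHost: example\r\n" * 200)[:PAGE_SIZE]

compressed = zlib.compress(page, 1)  # level 1: fast, modest ratio
ratio = len(page) / len(compressed)

# Assumed costs (illustrative guesses for 2002-era hardware):
COMPRESS_US = 50      # CPU time to compress + decompress one page
DISK_FAULT_US = 8000  # one hard page fault serviced by disk

# Each time a compressed-cache hit replaces a disk fault, you save:
saving_per_avoided_fault = DISK_FAULT_US - COMPRESS_US  # microseconds
```

Under these assumed numbers a compressed-cache hit is two orders of magnitude cheaper than a disk fault, which is the case for the feature; the poster's point is that the compression work only pays off if those pages are actually hit again, and it competes for CPU and memory bandwidth at exactly the wrong moment.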
you can test all the patches. (Score:2)
Re:you can test all the patches. (Score:1)
That does not necessarily work. You may not see the problems just by running the kernel for a while. Problems may show up after several days of use, under certain specific conditions. That's why testing by several people is valuable.
The kernel developers will usually tell you that you shouldn't expect even a stable (2.4, for example) kernel tree from kernel.org to be stable -- you should trust the testing done by your Linux distro. (And since WOLK is a set of patches to the kernel.org tree...)
The "stable" in kernel.org means that tree is not being used to test new ideas, and instead is focused on getting usable. It does not mean it's "solid and ready for production use". There are several cases of broken 2.4 ("stable") kernels.
Re:Not for production? (Score:2, Informative)
I'm using it on a server and it works great.
What about the newbies (Score:1)
Re:What about the newbies (Score:1, Insightful)
Re:What about the newbies (Score:4, Interesting)
Re:What about the newbies (Score:2)
type:
tar jxvf linux...whatever...tar.bz2
to extract it
type:
mv linux-2.4.18-WOLKwhatever linux
to rename the directory it creates to "linux", so other stuff you might build can use it.
type:
cd linux
make menuconfig dep bzImage modules modules_install
Play with all the features! Make sure in the first menu item, you enable "experimental features".
Then if you don't die with an "error 1" or something similar, run Linuxconf, go to
Boot --> Lilo --> Add a kernel I've just compiled
and play!
Whatever you do, MAKE SURE you don't overwrite your current (*working*!) lilo/kernel entry! Use a different name.
I've relied on WOLK for a lot of neat drivers and speed/reliability fixes I just can't get if I try and patch the bare kernel myself.
WOLK is the most valuable project out there to the enterprise... it *REALLY* makes Linux kick butt when it comes to server-room type hardware. Hats off to everyone involved.
mindslip
Re:sweet (Score:2, Interesting)
> Does anyone know why this is not in the main kernel.
Didn't know it wasn't... (Guess you know which distro I use.)
> This is a must have feature for linux on the desktop.
Agreed. _Especially_ for expansion into the non-geek
end-user segment of the desktop market (the largest
segment).
> It has been included in distros like mandrake for a long
> time, so it should be pretty stable.
It's been stable in my experience.
Re:sweet (Score:1)
(First) fork? (Score:3, Interesting)
I guess that's the idea of a modular and open-source kernel: you can add things you want and remove things you don't want, but somehow adding patches from "outsiders" makes me feel I'm not running a *real* Linux system - The Way Linus Intended(TM).
OTOH, my LFS [linuxfromscratch.org] system is unique, with changes that make it different from standard Linux systems - including patches to the kernel - and I love the thing, so I guess there's no need to feel "guilty" over it.
Re:(First) fork? (Score:2, Informative)
Fork? Nope, just some guy, with the help of a few, bringing to the unwashed masses the kernel that the people with the time to hunt down and find patches already have.
It's a patch set to the main kernel - Just with a crap load of really good stuff added on. Stuff that is out there, in the great open wastes of mailing lists.
ps.. WOLK is a great topping for your LFS cake.
I wish some of my games... (Score:1)
..had this many features!
It's an orgy of features!
Things I'd like to see in the kernel... (Score:5, Interesting)
1) A standard hardware acceleration layer for 2D and 3D cards, something we can ask the NVidia people to add to their drivers and code equivalents for other cards.
2) Wine integration. Routing Win32 messages through the kernel would be kinda nice.
3) Java acceleration. Hooks for some standard Java functions: this would help a lot in some specific embedded situations.
4) ACL support for ports and stuff (like the security patches).
5) A standard "driver package" format containing the kernel module, user-mode tools and installation instructions for binary only (yecc) drivers. (One driver fits all distros!)
I've been working with Linux-based systems since '97 and I have to say, it's just getting better and better. I'm sure a lot of the above would actually not be good in most kernels, but since one of Linux's strong points is scalability, I'd really like to see Linux take on the desktop, handheld and server market!
Re:Things I'd like to see in the kernel... (Score:4, Interesting)
1) A standard hardware acceleration layer for 2D and 3D cards, something we can ask the NVidia people to add to their drivers and code equivalents for other cards.
I agree with this, and hope that the future holds running XFree86 (or Berlin or whatever) on top of the standard Framebuffer/DRM interface. It's already possible to run XFree86 on the framebuffer driver, but the system needs to have kinks ironed out; most of that work is happening on non-x86 ports, though.
2) Wine integration. Routing Win32 messages through the kernel would be kinda nice. 3) Java acceleration. Hooks for some standard Java functions: this would help a lot in some specific embedded situations.
No. Just no. There's no significant gain to be had from putting any of that in the kernel. Furthermore, this is userspace stuff. The fact that the kernel has good stuff in it doesn't mean everything should go into the kernel...
4) ACL support for ports and stuff (like the security patches).
I've mixed feelings about this. It seems that the current security model is fine for the typical user, even though ACLs are really wonderful for larger servers with more nebulous administration structures. Overall, though, I think you're right, this stuff should get in eventually.
5) A standard "driver package" format containing the kernel module, user-mode tools and installation instructions for binary only (yecc) drivers. (One driver fits all distros!)
This is a complete don't care for me. If it comes with a binary-only module, I don't buy it. I'm still running a Matrox G400 at home as a result. Generally, I think kernel developer sentiment is turning more and more negative towards binary-only drivers--I wouldn't expect the community to do much, if anything, to make it easier for such developers.
Re:Things I'd like to see in the kernel... (Score:2)
> this is userspace stuff
Heck, it's "userspace" stuff on Windows NT/2000/XP, as well. CSRSS (Client-Server Runtime Subsystem) is the Win32 operating environment server, and is a native NT application. It talks directly to the NT kernel and exports the Win32 API. "Windows applications" talk to CSRSS. Windows NT/2000/XP is a lot like "Wine running on VMS."
Re:Things I'd like to see in the kernel... (Score:2, Insightful)
This is a complete don't care for me. If it comes with a binary-only module, I don't buy it. I'm still running a Matrox G400 at home as a result. Generally, I think kernel developer sentiment is turning more and more negative towards binary-only drivers--I wouldn't expect the community to do much, if anything, to make it easier for such developers.
I agree. A better solution would be to have a mechanism in place that allows a driver module to be compiled from source easily, without needing a previously compiled kernel. Maybe this mechanism exists; I don't know. I've never seen it, though.
Ideal scenario:
A new device is being developed. The manufacturer writes a driver for all relevant kernel versions and mails it to Linus. At some point after this, the driver shows up in the relevant kernels. Now the new hardware hits the market with a source-only driver and a compile script that figures out which kernel is present on the system, for the people who do not run the newest kernel.
If a scenario like this is formalized, it will give hardware manufacturers more incentive to write drivers for Linux, since it will be easier to guarantee that the hardware will work with a stable kernel.
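The kernel-detection step of such a compile script could be sketched like this (a hypothetical helper, not any real package's tooling; the directory names are made up for illustration):

```python
import re

def parse_kernel_release(release):
    """Parse a release string like '2.4.18-WOLK3.4' into (2, 4, 18).

    Works on the output of `uname -r` or the contents of
    /proc/sys/kernel/osrelease.
    """
    m = re.match(r"(\d+)\.(\d+)\.(\d+)", release)
    if m is None:
        raise ValueError("unrecognized kernel release: %r" % release)
    return tuple(int(x) for x in m.groups())

def pick_driver_source(release, available):
    """Pick the driver source tree matching the running kernel series.

    `available` maps a (major, minor) series to a source directory,
    e.g. {(2, 2): "src-2.2/", (2, 4): "src-2.4/"} (made-up names).
    """
    major, minor, _patch = parse_kernel_release(release)
    try:
        return available[(major, minor)]
    except KeyError:
        raise RuntimeError("no driver source for %d.%d kernels" % (major, minor))
```

For example, `pick_driver_source("2.4.18-WOLK3.4", {(2, 2): "src-2.2/", (2, 4): "src-2.4/"})` would select the 2.4 source tree; a real script would then build against the matching kernel headers.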
One thing I'd really like to see in the kernel... (Score:1)
I would _use_ that. Currently to transfer things from
one OS to the other on a WinXP/Linux dual-boot system
is a pain (I'm using an smb share on a different PC on
the network...)
Yeah, I know WinXP can theoretically use FAT, but I
don't (call me crazy) particularly want to have to
reinstall it, and it came preinstalled on NTFS.
Re:Things I'd like to see in the kernel... (Score:2)
The problem with this is that then misbehaving 2D and 3D programs can mess with your kernel. There's already a "standard hardware acceleration layer" for 3D called DRI, the Direct Rendering Infrastructure (I'm guessing you know that, but some people may not). I personally have hard locked up my system and even corrupted my hard drive by compiling and testing buggy code against the DRI-accelerated version of Mesa. I'm not even sure I was running as root at the time! (If you allow non-root users to use DRI for 3D graphics by setting "Mode 0666" in the "DRI" section of XF86Config-4, can't they do the same kind of damage?) For this reason, I would argue that kernel-level graphics acceleration is kind of dangerous. Perhaps coding DRI differently could prevent these problems, I don't know (possibly at the cost of performance?)
For 3D, I think that having kernel-level acceleration is inevitable. You just can't get good performance even on the fastest video cards without some sort of kernel-level rendering interface. Even the non-DRI nVidia drivers use a kernel module (And sure enough, I've hard locked-up my Linux box using both DRI-based drivers and nVidia's drivers, for different video cards.) Even "good" performance is, of course, not really sufficient. Fancier and more detailed 3D rendering is going to push video cards to the limit for years to come. So for now, we're going to need all the performance we can get, even if that means risky kernel-level 3D-acceleration.
For 2D, on the other hand, who needs the extra performance? Everything 2D that I do is already more than fast enough even without kernel-level acceleration. I say just let X be the 2D standard for Linux. It's already becoming the de-facto standard, as SVGAlib, GGI and other 2D rendering libraries seem to be getting used less and less. And even if it's not as fast as it could be, X must be safer than using the kernel framebuffer or some other kernel-level 2D.
Re:Things I'd like to see in the kernel... (Score:2, Informative)
It's not required. The utah-glx project which provided 3d support for the XFree86 3.x series did not need a kernel driver. The DRI kernel driver is needed for properly storing the video card state when switching tasks to allow multiple programs to make use of the video card. The real 3d stuff is still in the XFree driver.
patches (Score:1)
QPL (Score:1)
Linux was created because Minix changes could *only* be distributed as patches (modified Minix source code couldn't be redistributed).
It's the same deal with QPL [trolltech.com] software such as much of PHP 4.
Wolk Xfs Alsa 0.9 Yummie Yummie (Score:1)
Uhrm. (Score:1)
That doesn't make sense. If the new scheduler is O(1), it should be faster in all cases, and under no circumstances slow things down.
Re:Uhrm. (Score:1)
It makes sense. Please read the following URL, because the explanation is too big to fit here as a comment:
https://sourceforge.net/forum/forum.php?thread_
There are 10 issues explained by William Lee Irwin III (wli). I talked with wli on the internet relay chat.
ciao, Marc
Re:Uhrm. (Score:1)
This reflects a fundamental misunderstanding of what asymptotic complexity statements like O(1), O(n), etc. mean.
An O(1) algorithm is not guaranteed to be faster than an O(n) algorithm for all values of n, only for values of n that are "large". (Yes, it's meant to be vague.)
Consider two algorithms that accept an input n bytes long. One takes 20 seconds to run, the other takes 2n seconds to run. The first is O(1), the second is O(n), but the first is slower than the second for n < 10.
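The crossover in that example, in code (the 20-second and 2n-second costs are the illustrative figures above, not real scheduler timings):

```python
def constant_algo_cost(n):
    """An O(1) algorithm: 20 seconds regardless of input size."""
    return 20

def linear_algo_cost(n):
    """An O(n) algorithm: 2 seconds per input byte."""
    return 2 * n

# The O(n) algorithm is faster for small inputs...
assert linear_algo_cost(5) < constant_algo_cost(5)      # 10 < 20
# ...the two break even at n = 10...
assert linear_algo_cost(10) == constant_algo_cost(10)   # 20 == 20
# ...and the O(1) algorithm only wins for larger n.
assert linear_algo_cost(100) > constant_algo_cost(100)  # 200 > 20
```

This is why an O(1) scheduler can lose to the old scheduler on a desktop box with a handful of runnable processes, even though it wins on a heavily loaded server.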
Re:Uhrm. (Score:1)
out by others, so I'll just offer an analogy...
Is it faster to drive your car, or walk?
Depends how far you're going. If you're going several
miles, it's going to be a lot faster to drive. If you're
going across the street, it's faster to walk, because you
avoid the overhead of going out of your way to where the
car is parked, cleaning the snow off the windshield,
unlocking and starting the car, parking the car, and
getting out. When I was in high school, I walked. Once
a friend who was just arriving offered me a ride from the
end of my driveway. We ended up parking further from the
school door than where he picked me up.
The O(1) scheduler is the car. It'll get you to that
college in the next state faster, but it's no help for
getting across the street.
Re:Uhrm. (Score:1)
Distro kernels, XFS (Score:1)
Catching up with Macintosh of 1996 (Score:3, Informative)
Compressed caching is the introduction of a new level into the virtual memory hierarchy. Specifically, RAM is used to store both an uncompressed cache of pages in their 'natural' encoding, and a compressed cache of pages in some compressed format. By using RAM to store some number of compressed pages, the effective size of RAM is increased, and this way the number of page faults that must be serviced by very slow hard disks is decreased.
This is exactly the technique that Connectix's "RAM Doubler", a replacement for the Macintosh System 7 virtual memory manager, used way back in 1996. I wonder if Connectix has a patent on it.
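The "effective size of RAM is increased" arithmetic from the quote above can be written down as a toy model (a sketch only; the dedicated fraction and compression ratio are assumed parameters, not RAM Doubler's or WOLK's actual numbers):

```python
def effective_ram(total_mb, compressed_fraction, ratio):
    """Effective memory when part of RAM holds compressed pages.

    total_mb            -- physical RAM in MB
    compressed_fraction -- fraction of RAM given to the compressed cache
    ratio               -- average compression ratio achieved (2.0 = 2:1)
    """
    uncompressed = total_mb * (1 - compressed_fraction)
    compressed = total_mb * compressed_fraction * ratio
    return uncompressed + compressed

# e.g. 128 MB of RAM with a quarter of it compressed 2:1:
# 96 MB of normal pages + 64 MB worth of compressed pages = 160 MB effective.
```

Note the catch the thread discusses: the gain only materializes if the pages landing in the compressed cache compress well and would otherwise have gone to disk.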
SuperMount gives you the ability to access your CDs/floppies on the fly, without needing to mount/umount them every time.
Mac OS has automounted removable media since 1984.
It's good to see that Linux is progressing as a kernel for a workstation OS. But even its major proponents admit that it has some catching up to do. WOLK is a step in the right direction.
WOLK vs FOLK (Score:1)
Either way, it seems like a good idea, so He-Who-Doesn't-Scale can get a good idea of which bleeding-edge patches sorta work, which ones work great but are too specialised, and which ones barf & die spectacularly.