
Andrew Morton And The Low-Latency Kernel Patch

An Anonymous Coward writes: "KernelTrap has interviewed Linux kernel hacker Andrew Morton, author of the low-latency patch. Though his patch has received less attention than Robert Love's preemptible kernel patch (recently merged into the 2.5 kernel), it results in significantly lower latencies. The interview is quite interesting, delving into the low-latency patch, explaining how it works and how it differs from the preempt patch. He also talks about his ext3 work, porting that journaling filesystem from the older stable 2.2 kernel to the current stable 2.4 kernel."
  • So... just how realtime can the Linux kernel be? Has anybody compared these latencies with Windows NT, which, according to my former employer, has very low latency?

    Is there a formal difference between low latency and a realtime OS?

    What about the Timesys kernel patches? How do things match up to QNX, another realtime OS?

    • Re:realtime? (Score:5, Informative)

      by Zenki ( 31868 ) on Saturday February 16, 2002 @09:04PM (#3020203)
      A realtime OS, which usually has low latency, is not about the duration of the latency, but rather about a guarantee of latency.

      For example, suppose you send a packet off onto the internet; a realtime OS would guarantee that the packet was sent within x number of nanoseconds. A realtime OS would maintain this guarantee regardless of the load on the system, the size of the packet, etc.
      • A realtime OS, which usually has low latency, is not about the duration of the latency, but rather about a guarantee of latency.

        Exactly. There's also a corollary, which many people miss: realtime does not necessarily equate to high performance. Sometimes, you do things to enforce a bound on the worst case that actually make the average case worse. Anybody who has read Hennessy and Patterson should remember the formula for the value of an optimization (paraphrased because I don't have my copy handy):

        Value = (Frequency * Benefit) - ((1 - Frequency) * Penalty)

        Now consider a CPU cache. What a lot of people forget is that there is such a thing as a cache miss penalty, because in most systems hit rates are so high that the second half of the equation above remains negligible. However, a realtime system designer has to be pessimistic and assume very low hit rates. Only accesses that can be absolutely proven to be hits - e.g. repeated access not separated by too many other accesses including those from higher-priority tasks - can be counted, and all others must be considered misses. In practice, that sort of proof is usually too much of a pain in the ass so every access is assumed to be a miss. Since cache misses are actually more expensive than uncached accesses (the miss penalty), it's not uncommon to find that a critical code path has some possibility of missing its deadline if accesses are through the cache, but it can be guaranteed to complete in time with uncached accesses. So the cache gets turned off. Obviously, performance will suck, but at least it will suck predictably and that's the more important concern in realtime. For similar reasons, realtime systems often preallocate resources that then sit idle, because they can't afford to contend for them later.

        The above examples should demonstrate why realtime systems might actually perform worse than general-purpose systems. Trying to make system behavior more predictable and responsive is great, and to that end we should all welcome the low-latency and preemption patches, but treating "realtime" as some kind of mantra for "better performance" is an illusion.
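        To make the arithmetic concrete, here is a tiny C sketch (the cycle counts are invented for illustration, not taken from H&P or any real CPU) that plugs numbers into the formula above and shows how a pessimistic hit-rate assumption can flip the value of a cache negative:

        /* Illustrative only: made-up cycle counts plugged into the
         * optimization-value formula quoted above, showing why a realtime
         * designer might turn off a cache that helps the average case. */
        #include <stdio.h>

        static double value(double hit_rate, double benefit, double penalty)
        {
                /* Value = (Frequency * Benefit) - ((1 - Frequency) * Penalty) */
                return hit_rate * benefit - (1.0 - hit_rate) * penalty;
        }

        int main(void)
        {
                double benefit = 50.0;  /* cycles saved on a cache hit (assumed) */
                double penalty = 20.0;  /* extra cycles a miss costs vs. uncached (assumed) */

                printf("typical case (95%% hits): %+.1f cycles/access\n",
                       value(0.95, benefit, penalty));
                printf("worst case   (10%% hits): %+.1f cycles/access\n",
                       value(0.10, benefit, penalty));
                return 0;
        }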

    • Re:realtime? (Score:5, Insightful)

      by Error27 ( 100234 ) <error27 AT gmail DOT com> on Saturday February 16, 2002 @09:36PM (#3020284) Homepage Journal
      The difference is that hard real time doesn't mean low latency; it just means that there is a _guaranteed_ maximum latency.

      Soft real time means that you can almost guarantee the latency. Generally, of course, you want these latencies to be pretty small. Soft real time is for when you check the "use real time where available" option on xmms and run it under sudo.

      I hear that Linux (probably with patches) is a little better than Windows and a little worse than OS X for latency.

    • Re:realtime? (Score:5, Informative)

      by s390 ( 33540 ) on Saturday February 16, 2002 @09:47PM (#3020316) Homepage
      Is there a formal difference between low latency and a realtime OS?

      Yes. A realtime OS _guarantees_ that certain events trigger defined responses within specified times. A realtime OS is almost by definition an embedded OS, i.e., its hardware is rigorously specific and very tightly bound. A realtime OS also typically provides a very limited set of functions, as opposed to a general purpose OS. A low-latency OS, on the other hand, provides generalized structures for 1st-level/2nd-level interrupt handlers, real/virtual memory management, and facilities for locking, preemptive-priority dispatching, etc., but offers low latency on a merely best-effort basis depending upon what all happens to be in flight at the moment. See the difference?

      Examples of realtime systems: automotive control systems including engine power/emissions management, suspension and braking management, even airbag controls; aircraft fly-by-wire systems that control aerodynamically unstable airframes.

      Examples of low-latency systems: mainframes - if you're a high-priority system task, you get _very_ low latencies - but exact timings aren't guaranteed in all situations.

      • You are incorrect in one thing. No aircraft fly-by-wire or autopilot system uses an OS. It is native code running on the machine... the APP is the only thing running on that hardware. No OS used, no OS needed. Do a study on the web starting with the Apollo flight computer, as this was used up until a few years ago (yes, code and designs from 1960) in military and commercial aircraft.

        You don't waste time and stability on an OS when one is not needed, and for 95% of the true embedded systems out there, no OS is used or needed.
        • Well, except that someone as part of the app will have written a task scheduler and some input and output processing functions. Which means that there _is_ an OS in there effectively, it's just that it doesn't come in separately-installed parts but instead is all compiled together.

          Realtime stuff is starting to go more for using OSes though. There's a great resource hit in everyone developing a new task scheduler for each new platform they work on, so it makes more sense for an OS producer (e.g. WindRiver) to produce an OS once and everyone else to license it. This is becoming more the case as processors get more complex and time-to-market gets shorter - it's cheaper to license an OS and only use a small fraction of it than it is to get some of your guys to write a new one from scratch.

          Grab.
      • Um....need I mention QNX or LynxOS?

        They are not exactly embedded. Both use a microkernel architecture that allows the rest of the OS to run as separate processes on top of the microkernel. But they do have a form of X Windows and can be run on the desktop. Of course you can also strip them down, flash them, and run them in an embedded environment. That said, the task-switching latency is supposed to be higher than that of other truly embedded RTOSes like VxWorks and pSOS.

    • Re:realtime? (Score:2, Informative)

      by thesupraman ( 179040 )
      From my experience, NT doesn't so much have a guaranteed latency time as a probable crash time.
      This is NOT a troll, NT makes it VERY hard to meet any true real-time requirements without writing at the driver level, which is a massive pain, and exposes you to BIG risks in destabilising the machine.
      Linux currently (without these patches) has very good average latency; with these patches it has fantastic worst-case latency, which Windows CE (which is supposed to be real-time) cannot match.

      Windows hides behind its guaranteed-latency multimedia capabilities: fine if you want to do multimedia, useless if you need machine control or other real-time requirements.
    • Low-latency is just that--the kernel tries to keep the latencies lower than whatever you're comparing it to. The same kernel A may be low-latency when compared to kernel B, but A may be high-latency when compared to kernel C.

      Realtime is a boolean attribute: either you're realtime, or you're not. A realtime kernel specifies the maximum and maybe the minimum latency for various requests.

      These two attributes are orthogonal: a kernel may be low-latency but not real-time, vice-versa, both, or neither.

      For example, a real-time kernel may guarantee that you'll get to do one disk read per second (as long as the disk hasn't completely failed, the kernel hasn't crashed, etc). It might make the guarantee stick by ALLOWING only one disk read per second, even if the disk is idle 99% of the time, and your disk is doing absolutely nothing for 999.9ms between consecutive read requests. Not low-latency, but certainly real-time.

      A low-latency kernel may allow read requests to complete as quickly as possible, but it may not guarantee a maximum time for read requests to complete. So your application will be able to start executing 100ns after a read operation returns data, but if there are a lot of read operations queued up then it might take 5000ms for the read operations to finish. This is definitely not real-time, but it is low-latency.
    • Windows NT has low latency? Not with Outlook running, it doesn't. Probably something has low latency under some conditions, and so they claim "low latency".
      The real definition of realtime is fast enough response under all worst-case scenarios. One person's realtime is not necessarily another's.
      A realtime OS can work from a clock and polling, in which case there is no concept of latency.
    • It's been stated that the realtime patch lowers the throughput of Linux, while making the responsiveness quite good. Meaning good for desktop use, bad for server use.



      Now my question. What does the low-latency patch do to the throughput? Increase? Decrease? Stay the same, but everything is just 'snappier'?


      • I don't know about the low-latency patch, but the preempt patch actually increases throughput on most servers, due to I/O operations being issued sooner.

        There are some cases where the preempt patch lowers throughput, but in the majority of cases it only helps.
  • by Henry V .009 ( 518000 ) on Saturday February 16, 2002 @09:02PM (#3020198) Journal
    This part was funny: One hot tip: if you spot a bug which is being ignored, send a completely botched fix to the mailing list. This causes thousands of kernel developers to rally to the cause. Nobody knows why this happens. (I really have deliberately done this several times. It works).

    A day in the life of a kernel hacker.

    • unlink("/dev/");
      /*Hey, Linus, it's a good thing you have bitkeeper now. It's really neat and much more open minded. I hope this works! :-D signed, Bill Gates :::backspace::: :::backspace::: :::backspace::: :::backspace::: :::backspace::: :::backspace::: :::backspace::: :::backspace::: :::backspace::: :::backspace::: */
    • Re:Botched Fixes (Score:4, Insightful)

      by Tony.Tang ( 164961 ) <slashdot&sleek,hn,org> on Sunday February 17, 2002 @06:33AM (#3021207) Homepage Journal
      This is quite funny from a social psych perspective. Geeks have a superiority complex, as is often seen here on /. Sometimes you'll see a thread that goes down 60 deep, and it's just two guys arguing back and forth. We geeks have a tendency to rail on and on about obscure things, showing off, telling each other we're wrong, etc. We do that because it makes us feel smarter and such. It's not very funny when you're in the midst of it, but when you step back, it's kind of amusing, really.
      • This is quite funny from a social psyc perspective.

        Actually, although it sounds like a way to 'trick' developers into fixing your bug, I find that broken patches are quite nice from the other end too (ie, as the one doing the fixing).

        It seems easier to fix a broken patch (even one so broken that you end up rewriting the whole thing) than to get round to doing it yourself from scratch.

        I'd also suggest people try submitting broken documentation for various projects. Even if you don't understand something, still document it. The developers are more likely to correct your text than they are to spontaneously write it themselves...

        • This makes sense... In the same way, it's easier to write a paper given a template (of "what's expected", for instance)... Part of the thinking is already done -- even if it's wrong or only tangentially related.
    • This is a little bit less funny if you happen to botch a subsystem whose maintainer has gone into "low activity" mode. (This is common for legacy hardware drivers which don't really evolve that much any more, and the maintainer has taken up other projects).

      In such a case, nobody might notice that the patch is really botched for several months. It might be more productive, and better for Linux's stability/reputation if you contacted the maintainer directly about the problem, rather than deliberately botching his code.

  • From the article (he's talking about leaving management and returning to development):
    ...time to cease being a PHP...


    Mistranscription? Or is there YAAIDK (Yet Another Abbreviation I Don't Know) being thrown about?
  • Process scheduling (Score:5, Interesting)

    by lupetto ( 16876 ) on Saturday February 16, 2002 @09:07PM (#3020212)
    I've been waiting for years for Linux to have finer control of process scheduling.

    I hope someday that Linux will use a method similar to Irix, where you can specify a priority from 0 to 255, modify its timeslice, and make it realtime or timeshared. This was one of the best things about Irix, and something I could really use in Linux.
    • While I don't know a ton about kernel development, I would think that this would be hell to write/implement, and might not be so useful anyways.

      Correct me if I'm wrong (and I probably am in some respects), but the priority comparisons and the code to continuously re-shift CPU time would slow the kernel, and unless people actually used it often, it would slow the system (very slightly) overall instead of paying that cost to re-allocate time where it's needed and speed up critical processes.

      The overhead time would also increase significantly more than linearly (quadratic, exponential..?) with the number of processes and CPUs, which would make it very difficult to scale well.

      I hope I'm not completely wrong here, any responses?
      • by rtaylor ( 70602 )
        Yes, and no...

        It'll waste CPU cycles all right. But if it makes network, disk and interface responsiveness faster, odds are the CPU will have more information to do processing with.

        There are very, very few CPU-constrained jobs a computer does anymore. The ones that are (graphics rendering, key cracking) either have the budget to add an extra machine per 100 to get back the 1%, or are already working with a timeframe where the time lost doesn't really matter.

        If you wait 3 months for something, whats an extra 12 hours?

        That said, I don't know how much this actually slows a congested machine down. But one of the large benefits of Solaris on Sun hardware is that you can get it up to a load of about 1000 before it starts to choke (become choppy). Sure, no task is moving quickly -- but they're all moving.

        FreeBSD I find gets slammed around 150, and Linux (last I tried was 2.0.x) was around 60.

        It's the type of stuff that makes big iron worth the money.

        DISCLAIMER: Load numbers are by my own independent testing on varying hardware. It was a large Sun box, but not an order of magnitude above the Linux / BSD one. Test consisted of FTP connections downloading varying sized files at varying speeds.
        • The point is that you slow down a machine (minutely) for almost no benefit. If you want 5 levels of processes, I could see an argument for it, but 255 is not just overkill, but pointless. You want your hard drive data, NIC, and disk drive to have priority 1, and everything else to have priority 3, with a couple of task-dependent things in the middle (web server, network stuff, whatever...)

          And doesn't this already exist (a couple priority levels) somewhere? (you can tell I'm not a power user, much less involved with kernel design, which is a Good Thing [tm])
        • Heh. I had my load on 2.4.17 with the preempt kernel up around 150. The machine seemed very unhappy at the time.

          It all came about when I discovered in the man pages for make that -j without any arguments would set no limit on the number of processes when compiling.

          cd /usr/src/linux; make clean; make -j

          And boom. The system becomes pretty unresponsive. (500 MHz PIII with 320 MB of RAM.) All good fun though.
        • If you wait 3 months for something, whats an extra 12 hours?

          Well, it's more an issue of 30% being meaningless if the task takes a second, and being quite meaningful if it takes 3 months, because if it takes 3 months and the difference is 30%, that's another month; but what's another 1/3rd of a second?

          And this is exactly why you see the HPC folks caring about fortran-versus-C and all that, but to anyone else -- who cares?

          If you think Linux does bad under load, try loading down Windows. My machine crawls to an unusable halt under the most basic of loads.

          C//
        • I'm now running 2.4.18pre9mjc2 with preempt & O(1) patches. Now I'm running a crazy prime-factoring program that forks a new process to do one division. It is now niced to 19. The system is running quite smoothly. (X is niced to -10)

          `uptime`:
          4:06pm up 1:44, 6 users, load average: 337.62, 241.84, 115.30

          My box is a plain-old PII/233.

          The only problem is that now any unniced process that does real CPU-intensive work (as opposed to interactive ones) can get only about 20% of the CPU. It is just blatantly unfair to let one unniced process compete with 500+ others, even though they are niced to 19.

          Of course, the programs I'm running do not take too much memory. When one runs out of memory (like make -j), the system will swap like crazy, and then it IS unresponsive.
        • It'll waste CPU cycles all right. But if it makes the network, disk and interface responsiveness faster odds are the CPU will have more information to do processing with.

          Ok. Let's say a processor does an instruction in 500 picoseconds on average for a little burst, reading from L1. At that rate, you tell the processor, "I'm doubling your workload, and hurry the hell up." So the introduced CPU latency adds up to--what?--something on the order of a hundred nanoseconds. Of course that depends on a bazillion things; I'm not sure, but I understand the context.

          At 100 MHz, a wire or trace carrying current rings easily and resonantly if it is about 10 inches long. At 1 GHz, 1 inch is a very long distance. If there is some sort of ground plane, it is its own tank circuit--guaranteed messy--making things that much worse. Put your finger nearby, and watch the form shift on the oscilloscope. Not good. Now try to speed that up, and what do you get? Bottlenecks.

          We hear about Moore's Law this and Moore's Law that. Inside the chip, that's fine. Outside the chip--while trying to approach significant fractions of 1 GHz--we have already reached diminishing returns. So people come along and start to reverse the trend of CPU-work offloading. (Consider the old "Advanced Technology" bus using direct memory access and bus mastering of peripherals while processors were running at 12 MHz.) It doesn't make sense to do that anymore, and anyone who knows how to build kernels knows this. Because of the bus/crossbar/backplane/fabric delays, CPUs will just slack off anyway, waiting for data.

          If you look at the proposed specifications of PCI bus replacement technology, it's basically a local area network inside the beige box of the future. Everything is based on protocols. For all you know, within only a few years, data will be compressed and decompressed between a processor-L1-etc amalgam and a hard disk drive. It will be essentially like a modem connection. Fibre Channel disks are almost there already. By the time this stuff becomes generic, the customer's internal questions will be about the tradeoffs between bottlenecking or peripheral interfacing at all. Upgrades will be of a different form altogether. They will have to be.

    • Linux may already have something similar to this--it appears you can set priorities from 0-99. There are three types available: FIFO, Round Robin, and the old-style priority. I don't know much about real-time scheduling, so I'm not sure if this is what you wanted or not.

      For more info, try man 2 sched_setscheduler, and if you check the kernel syscalls (look in the kernel include files--probably at /usr/include/asm/unistd.h), you'll find that it is an actual Linux system call.

      Someone made a little utility called setpriority - check it out at Freshmeat.net [freshmeat.net]. It appears to only be able to set the scheduling policy after the process is started (like renice), but I imagine it would be trivial to make a utility that will run a program with a specific priority set (like nice does).
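      As a rough sketch of the kind of wrapper imagined above (not the Freshmeat utility itself; the usage and error handling are just assumptions), something like this launches a command under SCHED_FIFO at a given priority, in the spirit of nice(1). SCHED_FIFO normally needs root, and valid priorities run from sched_get_priority_min() to sched_get_priority_max():

      /* Hypothetical "run with a realtime priority" wrapper, in the spirit
       * of nice(1), built on the sched_setscheduler(2) call mentioned above. */
      #include <sched.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>

      int main(int argc, char *argv[])
      {
              struct sched_param sp;

              if (argc < 3) {
                      fprintf(stderr, "usage: %s <priority> <command> [args...]\n", argv[0]);
                      return 1;
              }

              sp.sched_priority = atoi(argv[1]);
              if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0) {
                      perror("sched_setscheduler");   /* typically needs root */
                      return 1;
              }

              execvp(argv[2], &argv[2]);      /* scheduling policy is inherited across exec */
              perror("execvp");
              return 1;
      }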

    • by captaineo ( 87164 ) on Sunday February 17, 2002 @05:14AM (#3021127)
      Linux has been able to do what you describe (many priority levels, selectable real-time policies) for a long time. What Irix does have over Linux currently is scheduling of resources other than the CPU - disk I/O being the most important one.

      On Linux, a low-priority process won't take much CPU away from a high-priority process... But if the low-priority process does a lot of disk I/O, it can cause significant delays in the high-priority process's own disk I/O. i.e. the notion of priority does not carry over to disk I/O. Whereas on Irix, you can set up a process to get a guaranteed level of disk bandwidth...

      Look for this feature to appear in Linux soon though. The newly-introduced I/O elevator should make it easier to implement prioritization for disk I/O.
      • Ahh, yes, this has actually bothered me since I first tried Linux 1.1.59. Running heavy loads was ok, but heavy loads with lots of disk access would grind the machine to a halt. Modern machines with large amounts of RAM make the problem less visible though.
      • Re: (Score:3, Insightful)

        Comment removed based on user account deletion
        • by captaineo ( 87164 )
          Yep, sounds familiar =).

          Thankfully Andre Hedrick's IDE patch seems to find the optimal hdparm settings for a drive automatically - once I started using the patch, I got uniformly high transfer rates (20-30 MB/sec) without running hdparm manually.
        • It's funny that you say that. Here's a snippet from the man page for hdparm regarding the use of the -d parameter (the man page may look different now...this version is 2 years old).

          " Using DMA does not necessarily provide any improvement in throughput or system performance, but many folks swear by it. Your mileage may vary."

    • Personally, I just wish they'd implement ALL of POSIX.4. Every time I check back on how far they've gotten, I see that only a few of the interfaces are implemented. It'd be really nice to be able to port things to/from Linux and various RTOSes to make testing easier.
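      For context, this is the flavor of POSIX.1b ("POSIX.4") code such a port tends to lean on: lock memory to avoid page faults, then run a periodic loop with nanosleep(). This is only an illustrative sketch (the 4 ms period and loop count are arbitrary, and mlockall() may need privileges); whether every such call is actually implemented on a given Linux/libc combination is exactly the complaint above.

      /* Sketch of typical POSIX.1b calls: memory locking plus a periodic
       * loop.  Illustrative only; period and loop count are arbitrary. */
      #include <stdio.h>
      #include <sys/mman.h>
      #include <time.h>

      int main(void)
      {
              struct timespec period = { 0, 4000000 };    /* 4 ms, i.e. 250 Hz */
              int i;

              if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0)
                      perror("mlockall");                 /* may need privileges */

              for (i = 0; i < 10; i++) {
                      /* ... sample the device or do the periodic work here ... */
                      nanosleep(&period, NULL);           /* no drift correction in this sketch */
              }
              return 0;
      }
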
  • by Anonymous Coward on Saturday February 16, 2002 @09:09PM (#3020220)
    I really like reading things like this.

    That's why Linux is so great -- even if you're not good enough to work on the kernel, you can read about some of the issues that pop up. If you use Linux for awhile, and if you get to the point where you roll your own kernels and apply patches, you end up learning a lot about how the system works.

    The MS guys are smart, and they're making some good systems now, but you can spend your whole life with them and not have much of a clue about what's going on under the hood.

    If MS would open up their internal developer discussions to the public, it would take MS system administration to a whole new level. I understand why they can't do that, but it is a great example of what's nice about Linux.
    • by Anonymous Coward
      Did you see that guy from Sun talking about why Sun has chosen to go with Linux? He said that part of it had to do with Linux's fabulous 30% growth rate per year--the fastest in the history of computing. With no end in sight, Linux keeps getting bigger and better. Linus might have been kidding, but world domination is a pretty realistic objective right now, especially since Linux now accounts for almost half of the server market.
    • Um. There are plenty of "inside windows" books and the like.

      The guys at
      SysInternals [sysinternals.com] have lots of inside knowledge of NT.

      COM/COM+ is heavily documented (how do you think Gnome/Mozilla managed to copy it so well?). Lots of source code/examples are available too.

      If you read any good OS book, it'll tell you things like the real time capabilities of NT compared to Solaris etc.

      I don't see how knowing the scheduling algorithm used by Windows 2000 would help system administrators... but if you want to know, the information is out there. Perhaps you should start reading Windows technology related websites and cut down on the Linux evangelist websites?
  • by Anonymous Coward on Saturday February 16, 2002 @09:18PM (#3020241)
    "With an internally preemptible kernel the explicit task yielding is not necessary, because the context switch is performed in the interrupt return path and via open-coded yields which are hidden in the unlock code. But you cannot preempt an in-kernel process while it holds locks, so all the unlock, relock and fixup code is needed in either approach."

    Try getting your head round that one when needing sleep :)
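    To make that quote a bit more concrete, here's a rough userspace analogue of the explicit-yield approach (none of this is actual kernel or patch code; in the kernel the test would be against current->need_resched rather than a fixed interval, but the unlock/yield/relock/fixup shape is the same):

    /* Userspace analogue of the "unlock, yield, relock, fixup" pattern
     * described in the quote; not actual kernel code.  A worker chews
     * through a long job in slices and periodically drops its lock and
     * yields so a latency-sensitive thread could get in. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long shared_work = 1000000;

    static void *worker(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&lock);
            while (shared_work > 0) {
                    shared_work--;                       /* one slice of the long job */

                    if (shared_work % 1000 == 0) {       /* explicit rescheduling point */
                            pthread_mutex_unlock(&lock); /* can't yield while holding the lock */
                            sched_yield();
                            pthread_mutex_lock(&lock);
                            /* "fixup": re-read any state another thread may
                             * have changed while we were out */
                    }
            }
            pthread_mutex_unlock(&lock);
            return NULL;
    }

    int main(void)
    {
            pthread_t t;
            pthread_create(&t, NULL, worker, NULL);
            pthread_join(t, NULL);
            printf("done\n");
            return 0;
    }
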
  • He also talks about his ext3 work, porting that
    journaling filesystem from the older stable 2.2 kernel to the current stable 2.4 kernel."


    I'm confused... I was under the impression that most of the journaling file systems required 2.4. Granted, many started their life on 2.2, but still... most recommend or require 2.4.

    On a side note, support for XFS and/or ext3 for 2.2 would be very nice as we currently have many servers running Debian (potato) with kernel 2.2. We would consider upgrading the filesystem, but little else. "If it ain't broke, don't fix it". About all that doesn't work well now is ext2... fsck sucks... we have 2 hours of UPS, but no generators... living in Vermont means a 4 hour power outage about three times a year.

    • That's not the case at all; you'll find that most if not all of the popular journaling filesystems (reiserfs, XFS, JFS) have patches for 2.2 out there. A lot of people who depend on Linux for real machines doing real jobs are still using the 2.2 and even the 2.0 kernels, because they have proven to be VERY stable and mature, whereas 2.4 still isn't quite there yet, with even the VM STILL having quirks.
  • by Kogun ( 170504 ) on Saturday February 16, 2002 @10:40PM (#3020445)
    "The low-latency patch yields worst-case latencies of around 1.5 milliseconds at present. The preempt patch is around 80 milliseconds,
    but with the locking changes it should also yield 1-2 millisecond latencies." On what speed processor? 1.5ms is way too long for any kind of processor being sold these days. Try 100us maximum latency on a 133Mhz Pentium for starters and go down from there. And learn to use the term "deterministic" and I might raise an eyebrow. Make it POSIX 1003.1 compliant and someone will have a serious solution.

    Programmers either need deterministic response in their applications or they don't. If they do, then Linux is not their OS. If they don't, then these half-baked solutions to reduce context switching time and interrupt latency are probably going to be fun to play with, but will cause nightmares in the long run.

    • Maybe your eggs need "deterministic response or nothing at all" but mine just need approx 2 minutes I guess.

      The point is that there is a range of behaviour that is satisfying, and beyond that you start to worry. For audio or MIDI, 1ms or even 10ms of error may be acceptable. Even a 200ms error is acceptable when it occurs only once a week.

      The task simply doesn't justify the trouble and cost of what you call a "serious solution", but at the same time it does require that some effort be put into constraining worst-case latency. Much like cooking an egg, really.


    • Think about this for a minute. Linux runs on all kinds of hardware. There are some severely broken hardware interfaces out there that require interrupts to be turned off for substantial amounts of time.

      As mentioned in the interview, this (and the preempt patch) are mostly aimed at the audio world, where a couple ms latency is no problem, but more than a few becomes problematic.

      Finally, if you have total control over the hardware that you're running on it is possible to get better than the stated performance, simply because you know what software will be running and can profile it yourself.
    • What are you talking about? It's a BIG step. I hear stock kernel (2.4.x) worst-case latencies are in the 100-300 ms range. While the low-latency patch isn't going to solve many "real time" computer science problems, it will let me play mp3s under load with no skips and a reasonably small buffering delay, and it will increase the responsiveness of my mouse pointer. It is a good thing for desktop Linux. That's all it needs to be. It doesn't need to guarantee 100us max latency to be useful.
      • I hear stock kernel (2.4.x) worst-case latencies are in the 100-300 ms range.

        I'm not sure about that. With a highly-loaded system and reiserfs, it's on the order of 3 seconds or so. At least, that's what I deduce from a system completely unresponsive for 3 seconds while doing disk I/O, and when it comes back xosview is up to like load 17.0 or something ridiculous like that, indicating that everything else had been blocked.

    • If you're looking for hard real-time, then you need a real-time operating system. Try QNX.

      Linux is a general purpose operating system, and achieving the same level of real-time performance as QNX just isn't worth it. These patches demonstrate the level of real-time performance you can get with a general purpose operating system. For a great many applications this is "good enough", and it allows developers to stay with their comfortable general-purpose OS where they would otherwise have to switch to something more esoteric.

    • Programmers either need deterministic response in their applications or they don't. If they do, then Linux is not their OS. If they don't, then these half-baked solutions to reduce context switching time and interrupt latency are probably going to be fun to play with, but will cause nightmares in the long run.

      Well, if this patch makes X more responsive (as was mentioned in the article, I believe), then it's useful just for that reason. Programmers may not "need" it, but lots of Linux users will sure be appreciative :)

      On the other hand, couldn't such a patch be useful for systems which are recording data at a specific sample rate? For example, if a system needs to read data from some input device at 250Hz (i.e., one sample every 4ms), wouldn't a 1.5ms worst-case latency be enough to guarantee that no data samples are missed?

    • Maybe that project needs somebody to make it full-baked. You seem to know enough about kernels so why not help them? And don't forget to add that "deterministic"-thingy to the next release.
  • by InsaneCreator ( 209742 ) on Saturday February 16, 2002 @11:14PM (#3020521)
    Andrew Morton And The Low-Latency Kernel Patch

    Sounds just like a title of a bedtime story. :)

    I also recommend you read "How CowboyNeal saved the world (with a little help from / and .)"
  • by redelm ( 54142 ) on Saturday February 16, 2002 @11:39PM (#3020577) Homepage
    I've used Kirk McKusick's SoftUpdates for *BSD and been very impressed. Pulled the plug on four kernel compiles near the end. In three of the four cases, `make` just picked up the compile losing ~45 seconds. In the fourth, a `make clean` was necessary. In _all_ cases the fsck on reboot was minor. I've only lost power once in Linux during a kernel compile. I had to reinstall. It was too far gone for e2fsck.


    IMHO, SoftUpdates are better than Journalled File Systems. There's no journal file to maintain, just careful ordering of the writes. Why no discussion of it for Linux?

    • I agree. I use FreeBSD and have had my computer lose power during a "make buildworld". Upon rebooting the fsck took a few minutes, but with softupdates I didn't lose much work. In fact, I issued the "make buildworld" command again and it completed without a hitch.

      For those of you that don't know, or aren't familiar with FreeBSD, you can build the entire OS from source with one command. It's not a port or package, but the entire base OS (kernel, filesystem utils, OpenSSH, OpenSSL, bind, sendmail, all the crypto, etc...).

      I do agree that softupdates would be preferable in most cases. McKusick had his shit in order when he wrote SU. Journaling had its place a year or two ago, but with today's more robust systems and affordable UPSs, why not invest more attention in a unified VM, or better system tools?

      For me, FreeBSD has a kick-ass VM and a rock solid filesystem. Using SU in Linux wouldn't hurt, but you'd need to port over UFS to make it work. But that wouldn't be hard since BSD code is pretty much there for the taking. YMMV.
      • Soft-update and journalling both do the same thing: preserve filesystem consistency in case of an unexpected shutdown.

        AFAIK there isn't a real performance advantage for one or the other.

        I think that soft-update needs more memory but does fewer IOs (no need to maintain the journal on the disk), so I expect that eventually soft-update will have a bigger advantage over journaling (memory will increase rapidly in size but disks won't become much faster in the near future).

    • by Anonymous Coward
      The problem with soft updates is that you could have disk corruption above and beyond that which can occur during normal operation, when there is a failure resulting in a reboot with an unclean FS.

      This may be a corrupt sector containing metadata (maybe even for the "/" directory or "/kernel", if you were writing a new kernel at the time of the crash), or it may be other corrupt data which became corrupted in a cascade failure that resulted in the crash after one or more corrupted blocks were written to disk.

      Soft updates simply can't recover from this.

      If, on the other hand, it were a kernel panic that didn't result in corrupt data being written to disk, then there's no danger of a corrupt sector from a DC failure, and there is no danger of other corrupt data needing fsck'ing, so you would be in the situation where the only thing that would be out of date is the cylinder group bitmaps; you could clean this in the background by "locking" access on a cylinder group by cylinder group basis for a short period of time, to clear bits in the bitmap that said an unallocated sector was allocated. This might be seen as brief stalls by an especially observant user or program (say someone is doing profiling of code at the time), but could be accomplished in the background.

      The problem is that you can not know the reason for the crash, until after the recovery.

      If there were available CMOS, you could write a "power failure" value into it at boot time, and then write a "safe panic" or an "unsafe panic" code into it at crash time (a power failure would leave the "power failure" code there).

      The only valid background case would be for a "safe panic", if you could really guarantee such a thing.

      The worst possible failure resulting in a reboot is a hardware failure of the disk; I would really be loathe to try cleaning in the background after a track failure or even a sector failure (sector failures are identical to sector format corruption after a DC failure during a write, FWIW).

      Look, soft updates are a good thing, but they aren't a panacea for all problems. Let's laud them for what they do right, but not misrepresent them as doing something they can't.

      • I think I see your point. So journalling gets around corruption of metadata by double-writing?


        That's fairly high overhead, and I would want to know how often corrupt sectors get written to disk. Nothing is safe against software faults, not even journalling. My working hypothesis is that most crashes are actually hangs or deadlocks. Accidental powerfail/reset also happens, but a reset is also deliberately caused to recover from hangs.

        In this case, I would think that modern disks have a fairly sophisticated power-down routine, probably involving completing a certain amount of write-out (at least the sector) before parking the heads. Power comes from platter spin-down.

      • This may be a corrupt sector containing metadata (maybe even for the "/" directory or "/kernel", if you were writing a new kernel at the time of the crash), or it may be other corrupt data which became corrupted in a cascade failure that resulted in the crash after one or more corrupted blocks were written to disk.

        I'll be charitable and say your comment is merely misleading. This scenario is no more a problem for soft updates than it is for journaling. The only way it could be a problem would be if you had enabled write caching on a drive that didn't maintain write order and didn't have enough reserve power to flush its write cache on power loss. Well, guess what? Take that same impossible-to-find drive, use it to store your journal instead of soft updates, and you'd be just as screwed.

        Look, soft updates are a good thing, but they aren't a panacea for all problems.

        Journaling is no panacea either, and it involves additional performance costs that many find unacceptable. On balance, soft updates still seem like a far better solution.

  • I haven't finished reading the article [kerneltrap.org] yet, because one thing caught my entire attention:
    Andrew Morton: Well, I've always been that-way inclined. Back in '86 I developed a build-it-yourself 68000-based computer. Both the hardware and its unix-like operating system. We sold about 400 of them. We licensed Minix from Macmillan and my great friend Colin McCormack ported it - I think this may have been the only non-PC port of Minix. The Applix 1616 project was fun, and a lot of people learned a lot of things.
    So I found The Applix 1616 project [zipworld.com.au] website. Very interesting read. I'd love to see something like this today. If anyone knows anything about something similar to 1616 which is available today, please share with us.
  • So, does anyone think I would get any performance improvement if I compiled this into my kernel on my file/web/ssh/blah server? I don't want to lose my precious uptime unless it's really worth it.
  • I read this as Andrew Motion and the Low-Latency Kernel Patch.
