
Non-Deathmatch: Preempt v. Low-Latency Patch

LiquidPC writes: "In this whitepaper on Linux Scheduler Latency, Clark Williams of Red Hat compares the performance of two popular ways to improve Linux kernel preemption latency -- the preemption patch pioneered by MontaVista and the low-latency patch pioneered by Ingo Molnar -- and discovers that the best approach might be a combination of both."
  • co-op! (Score:2, Funny)

    by FigBugDeux ( 257259 )
    what's wrong with cooperative multitasking?
    • Re:co-op! (Score:2, Insightful)

      I hope you're kidding! In a co-op system, each program monopolizes all the resources of the machine until it voluntarily gives up control. If the instruction to return control to other processes is preceded by instructions which could result in an infinite loop or wait state, the machine is then inaccessible. Situations like this exist in preemptive systems only as minor annoyances. For example, if you ssh from an OpenBSD machine to an sshd server and then pull the network cable from the sshd server, the tty on the OpenBSD machine will stop responding to input. Since the BSD kernel is preemptive, you can switch ttys and just kill the process of the frozen tty. In a co-op setup, the key sequence to switch virtual consoles would be ignored and you'd have to reboot the machine. (A minimal sketch of the co-op model follows.)
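      To make that concrete, here is a minimal sketch of the cooperative model (all names are hypothetical): every "task" keeps the CPU until it voluntarily returns, so one stuck task hangs the whole machine.

          #include <stdio.h>

          static void net_task(void)  { puts("net: poll"); }
          static void ui_task(void)   { puts("ui: redraw"); }
          static void disk_task(void) { /* imagine: for (;;) ;  -- one stuck
                                           task and ui_task() never runs again */ }

          static void (*tasks[])(void) = { net_task, ui_task, disk_task };

          int main(void)
          {
              for (int round = 0; round < 3; round++)   /* the entire "scheduler" */
                  for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
                      tasks[i]();  /* control comes back only when the task returns */
              return 0;
          }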
      • Uh. If the BSD kernel were not preemptive, then the sshd design would have been different, so that the situation you're describing would not have occurred. Plus most (all?) coop MT systems still have something called "hardware interrupt".

        The difference between preemption and coop is very much like the difference between using threads and blocking I/O: on a general-purpose system, the cost of preemption is masked by a) the high speeds of today's processors, and b) the fact that although preemption is locally suboptimal, it yields better results globally. Still, for precisely circumscribed domains, coop will deliver better performance.

    • Indeed, preemptive MT just feels too... rude!
    • Hmm. Let me count the ways:
      • Windows 3.0
      • Windows 3.1
      • Windows for Workgroups
      • Windows 95
      • Windows 98
      • Windows 98SE

      The list goes on...
    • <karma hoard>
      This is quite a good thing when doing ports, e.g. of Wux applications from Unix to Windows (PDF here [dmst.aueb.gr]). Particularly insightful is section 3.2.2, "Operating Systems Differences". The document can also serve as a Unix-to-Windows porting 101. I wonder if the Win 3.1 stuff they are talking about is still valid in the non-MSDOS Windows ME/NT/2000/XP?
      </karma hoard>

  • by BrianGa ( 536442 ) on Thursday March 21, 2002 @11:26PM (#3205568)
    Check out this [gatech.edu] comprehensive guide to Linux latency.
  • by CathedralRulz ( 566696 ) on Thursday March 21, 2002 @11:44PM (#3205646)
    The article doesn't mention this, but something some folks aren't aware of is that MontaVista is a serious Linux partner of IBM. If the technologies described in the white paper can be merged, the real effect could be a significantly bigger impact on the embedded-application/PowerPC products of the world's 9th-largest corporation.
  • Oooooh (Score:4, Funny)

    by NiftyNews ( 537829 ) on Thursday March 21, 2002 @11:51PM (#3205678) Homepage
    Don't miss the thrilling link to the debate on whether it is PreemptAble or PreemptIble... [linuxdevices.com]
  • by thesupraman ( 179040 ) on Friday March 22, 2002 @12:15AM (#3205760)

    One thing not mentioned so far is that one of THE largest scheduler latency problems comes from the driver for a PS/2 mouse, a very common item to find plugged into servers which have no need for it. By removing the PS/2 mouse (and its driver) a significant latency improvement can be gained!

    It's a pity that most USB mice don't seem to provide quite the quality of use that the PS/2 items do (although this is probably also a driver issue).

    Low latency can be an advantage, but it is important that the cost of the lower latency is not an increase in total load. In reality the lower latency does not provide a large gain in performance for most desktop or server roles; it is a measure more often important in real-time systems, where it can make the difference between a system working or not.

    An example of this is an ignition ECU for a V12 engine at 6000 RPM: a (pair of) plugs is firing every 1/600th of a second (1.66ms), but the accuracy of the firing event must be on the order of 10us, which is not yet reachable by any 'standard' unix kernel, yet quite easy to get on a much simpler ECU (I use an SH-2 at 24 MHz) than you would normally find running a true real-time kernel. (The arithmetic is worked below.)

    With some development it may be possible for a form of Linux to reach this level, which would be fantastic, as a LOT of time in embedded development is spent providing 'operating system' level functionality around the actual application code, and with embedded processors getting faster and memory getting cheaper, embedding *nix has become much more of a possibility.
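    To make the parent's numbers concrete -- a worked version, assuming a four-stroke cycle so each cylinder fires every second revolution:

    \[
    6000\ \mathrm{RPM} = 100\ \mathrm{rev/s}, \qquad
    12\ \mathrm{cyl} \times \frac{100\ \mathrm{rev/s}}{2\ \mathrm{rev/firing}} = 600\ \mathrm{firings/s}
    \;\Rightarrow\; \tfrac{1}{600}\ \mathrm{s} \approx 1.66\ \mathrm{ms}
    \]

    The required 10 us accuracy is thus only about 0.6% of the firing interval.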
    • by clone304 ( 522767 ) on Friday March 22, 2002 @12:30AM (#3205799)

      Honestly, I'm not trying to troll you here, but why would you WANT to run a *nix kernel on hardware that's responsible for engine timing? Especially when you apparently already have tech that works. Is the idea just to make vehicles that much harder for people to maintain? The day my mechanic keeps a sysadmin on duty so he can patch my buggy Linux 4.5.3 ECU is the day I put a gun to my head and pull the trigger.

      Aside from that. Way to inform the masses.
      • Forget about mechanics. Just let us telnet into the engine and we can fix it ourselves.
        Having a Linux kernel doesn't mean having a Linux userland -- and almost all bugs that actually affect the end-user are userspace bugs, or kernel bugs that are only visible with a big userspace (i.e. networking bugs -- if you don't need networking you don't compile it into your kernel, and so it never hits you). If you're doing a deeply enough embedded system, you don't need that userspace, so you don't *have* most of that userspace (heck, you can do without any of it if you like)... and there's only one application that the kernel runs (yours!) and only one set of hardware it runs on (yours!), so it can be very thoroughly tested. Even when updates are needed, flashing in a new kernel (which happens to have an initrd with the one and only application it runs) isn't exactly rocket science, and certainly doesn't need a sysadmin.

        Do you really think that companies doing their own proprietary kernels come up with something more reliable? For the most part, they don't -- but once a deeply embedded system is out the door, the rule is that it'll Just Work the way it did in the lab, because for that embedded system nothing (the hardware, the software, the input, the output, whatever) ever changes from what it was designed and tested on -- or if it does, you've got bigger problems.

        That's less true for less deeply embedded systems... but those are where something like Linux is even more appropriate.
      • The day my mechanic keeps a sysadmin on duty so he patch my buggy Linux 4.5.3 ECU is the day I put a gun to my head and pull the trigger.

        If you use an early development kernel in a production engine, you deserve what you get.

    • The performance tests that various people have run with these patches seem to show improvements in throughput on servers. The guess is that this is because the machine responds more quickly to the completion of I/O (which is, of course, the slow step in most applications), so it essentially arranges tasks so that the next I/O operation can get started sooner.

      Of course, you'd presumably get a higher load, since you're doing your task in less time, which means that the processor is busy more of the time.

      Additionally, latency is important for interactive tasks on desktop machines, because it determines how responsive the system feels under load. The user's experience is determined more by how quickly the system responds to simple requests (moving the mouse cursor, drawing menus, etc.) than by shaving milliseconds off more complex tasks. Furthermore, the user will instantly notice if sound or video is not handled on time.
      • What you say about desktop responsiveness being more of an issue for these patches than server applications is accurate. And I agree that mouse and audio responsiveness is very noticeable to the user (if audio skips, a very loud pop is usually heard). With video, however, there is a little more wiggle room if the decoder handles things gracefully: an isolated interruption of 2/30ths of a second up to even a tenth of a second can probably be gotten away with. A 1/30th-of-a-second drop (a single frame) will almost certainly not be noticed, and 2/30ths (two frames) will probably slip under the radar; at three dropped frames you start getting a perceptible skip if there is a decent amount of movement in the scene. Of course, 1/30th of a second is much longer than almost any latency you see even under the unpatched kernel.

        I personally haven't been able to tell the difference between the pre-empt kernel and the traditional one; they both give the same 'feel' at the desktop level.
        • Under regular loads, yes.

          Under high loads, when playing audio, the difference can be quite remarkable.

          I'd be curious to know under what kinds of loads the tests documented here were run.
          • I was just making blanket statements about visual perception and how many frames can be dropped without being noticed. It's anecdotal, but I have seen clips that can drop up to 40% of their frames without much being noticed (very low-motion video). If there is only enough of a glitch to interrupt a frame or two of video, it will be smoothed over by the sensory system without ever being missed, while with audio, any interruption produces a waveform with an undefined derivative -- an audible crack. Under very heavy loads you'll drop far more than 1-2 frames per second -- probably 40-60% of frames -- and for almost any content that would indeed be noticeable.

            In any event, low latency is good, and my entire point is irrelevant except as a nitpick.
        • That depends on whether you are talking consumer or professional video.

          If your target is professional video, the customer will scream loudly if even a single frame is dropped (and they will check), or if any fields are reversed, etc.

          Though for these applications, you really need guaranteed real-time performance anyway, for which a RTOS or (possibly) Real-Time Linux is more appropriate. It could also possibly be done with a kernel thread...
        • That's interesting; I thought people would notice a dropped frame just because it would mean that one frame would be shown twice as long as any of the rest, so everything would stop for a moment and then jump forward to the right place. Of course, traditionally when a frame gets dropped, the person sees a blank frame or static or something unrelated to the scene; people don't notice these at all, because the brain is wired to ignore them (because you blink and so forth). But having a frame update dropped means you'll see the previous frame for twice as long, which appears as a change in the motions of objects, which the brain is tuned to see.

          I'm not sure exactly how many frame updates you can miss without people noticing, but it's got to be a lot fewer than frames you can blank without anyone noticing, especially if it's not consistent.

    • Interesting what you say about USB mice; in my experience a USB mouse provides smoother use than PS/2. I don't have PS/2 support in my kernel at all anymore. My Logitech MouseMan+ came as USB with a PS/2 converter, which I had to use because I played with BeOS, and BeOS didn't like the mouse unless it was on the PS/2 port. In any case, when using a USB port instead of PS/2, the mouse can be polled much more frequently, allowing for more precise movement (of particular interest in, say, first-person shooters). Plus, the ability to plug and unplug, and to plug my mouse into any USB port I feel like, is nice.
      • I certainly hope that your USB mouse driver is interrupt driven. There shouldn't be any polling going on, as that would be an enormous waste of system resources. Similarly, the original post is dubious at best, since PS/2 ports can also generate interrupts; a still mouse should use *zero* system resources. USB does have higher throughput than PS/2, though, so theoretically the mouse could send updates much more frequently than the PS/2 equivalent, and that would account for the smoothness.
    • Can you explain why the PS/2 mouse causes latency problems? Even when you're not moving it?
  • by ChaosDiscord ( 4913 ) on Friday March 22, 2002 @12:16AM (#3205761) Homepage Journal
    Most RTOS's prioritize device interrupts so that important interrupts ("shut down the reactor NOW!") are serviced first and lower priority interrupts ("time to make the doughnuts") are serviced later.

    Clearly, most RTOS designers have their priorities backwards.

    Mmmm, donuts.

  • The low-latency patches had a maximum recorded latency of 1.3 milliseconds, while the preemption patches had a maximum latency of 45.2ms.

    A 2.4.17 kernel patched with a combination of both preemption and low-latency patches yielded a maximum scheduler latency of 1.2 milliseconds, a slight improvement over the low-latency kernel. However, running the low-latency patched kernel for greater than twelve hours showed that there are still problem cases lurking, with a maximum latency value of 215.2ms recorded. Running the combined patch kernel for more than twelve hours showed a maximum latency value of 1.5ms.

    So after only 12 hours, the low-latency patch's worst case degraded by an ungodly amount (1.3 -> 215.2 ms)!! And even the combined patch showed 25% worse performance (1.2 -> 1.5 ms)!

    Embedded systems must have very high uptime; it's not acceptable to reboot the machine every day to maintain performance. Many embedded systems require downtime of less than 5 minutes per year, which doesn't give you much time to reboot the machine just for performance issues. (See the arithmetic below.)
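    For reference, "less than 5 minutes per year" is the classic five-nines availability figure:

    \[
    (1 - 0.99999) \times 365.25 \times 24 \times 60\ \mathrm{min} \approx 5.26\ \mathrm{min/year}
    \]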

    • It has nothing to do with how often the machine is rebooted and everything to do with the frequency of latency-increasing events.

      The event which caused the 215ms spike probably only happens once or twice per day. Perhaps it was some weird code path that the LL patch didn't touch, or some unlikely combination of events occurring "simultaneously".
    • by tempest303 ( 259600 ) <jensknutson&yahoo,com> on Friday March 22, 2002 @01:02AM (#3205849) Homepage
      What's up with the degrading performance?

      It's called a bug - they'll figure it out. ;)

      it's not acceptable to reboot the machine every day to maintain performance

      Hey, it worked for NT4 admins, why not for embedded developers? *rimshot*

      Sorry. But seriously, anyone looking for hardcore low latency in Linux right now, for systems that need that buzzword-compliant "five 9s", should probably hold off on Linux until it's READY. Make no mistake: with as much interest and as many developer hours as are going into it, Linux WILL make it into this market, and it will succeed; it's merely a matter of time. (And hell, at this rate, it may not be long...)
    • by Fluffy the Cat ( 29157 ) on Friday March 22, 2002 @01:04AM (#3205857) Homepage
      So after only 12 hours, the low-latency patch's worst case degraded by an ungodly amount (1.3 -> 215.2 ms)!!

      You're misinterpreting the figures. During a short benchmarking run, the worst figure recorded was 1.3ms. After the machine had been left up for 12 hours (allowing much more time for something odd to crop up), the worst figure recorded was 215.2ms. That doesn't mean that performance had degraded -- it means that over the course of those 12 hours, something happened that caused latency to peak at 215.2ms. It might be something that happens once every 12 hours, for instance.
      • It might even be something that happens just as often with the combination patch as with the low-latency patch, except the combo got lucky.
        • by SurfsUp ( 11523 ) on Friday March 22, 2002 @03:07AM (#3206113)
          It might even be something that happens just as often with the combination patch as with the low-latency patch, except the combo got lucky.

          If you'd actually read the article you'd know that this can't happen with the preempt patch + low-latency combination -- not unless a spinlock gets jammed, in which case you have much worse problems. The preempt patch takes care of scheduling events that occur during normal kernel execution (and it does this much more reliably than the low-latency patch), but since preemption isn't allowed while spinlocks are held, it can't do anything about latency due to spinlocks. This explains the apparently worse performance of the preempt patch -- you're just seeing the spinlock latency there.

          The low-latency patch breaks up the spinlocks with explicit scheduling points, which is pretty much the only approach possible without really major kernel surgery. That's why the combination works so well. In fact, the parts of the low-latency patch that aren't connected with breaking up spinlocks aren't doing anything useful and should be omitted; the worst-case performance won't change. (A sketch of the lock-breaking pattern follows.)
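          For the curious, the lock-breaking pattern looks roughly like this -- a sketch in 2.4-era kernel style with hypothetical names (more_work, do_chunk, some_lock), not the actual patch code:

              spin_lock(&some_lock);
              while (more_work()) {
                  do_chunk();                       /* keep each locked stretch short */
                  if (current->need_resched) {      /* someone is waiting to run      */
                      spin_unlock(&some_lock);      /* drop the lock...               */
                      schedule();                   /* ...let the scheduler run...    */
                      spin_lock(&some_lock);        /* ...and pick the lock back up   */
                  }
              }
              spin_unlock(&some_lock);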
    • by steveha ( 103154 ) on Friday March 22, 2002 @01:11AM (#3205878) Homepage
      Well, to be precise, the worst-case value "degraded". And I'm not sure "degraded" is the correct word. With a huge torture load put on the kernel, during a 15-hour interval, at least once the latency value was 215.2 msec. This could mean that there is a possible latency condition that happens under torture load approximately once every 15 hours. It could also mean that after 15 hours, your chance goes up so it could happen much more often than that, but we don't know that. It could even mean that there is a possible 215 msec latency condition that happens under torture load approximately once every 30 hours, and it happened to occur during the first 15 hours.

      Embedded systems must have a very high uptime, it's not acceptable to reboot the machine every day to maintain performance.

      True that. Which is why the author of that article made the point that combining the two patches is the best way to go, since he ran the torture test for 15 hours and it didn't go over 1.5 msec even once.

      Note that for many purposes, a worst-case latency of 1.5 msec is ample. I don't think there is any version of Windows that goes that low; I don't even think BeOS (legendary for low latency) goes that low. As the author noted, if you are driving a chemical processing factory or something like that, you need hard realtime and you should use something other than Linux kernel 2.4.x!

      steveha
  • Measuring latencies? (Score:3, Interesting)

    by acordes ( 69618 ) on Friday March 22, 2002 @01:18AM (#3205894)
    Here's a question: how do you go about doing fine-grained measurements of these latencies? Every time I've tried doing timings with Linux I've had trouble getting accurate, fine-grained results.
    • What kind of accuracy do you need? I find that gettimeofday() will give approx. 5 ms accuracy (non-root, nice 0). Interfacing directly with the RTC on x86 gives you approx. 10 usec accuracy (root, SCHED_FIFO, nice 0).
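      If gettimeofday() resolution is enough, a crude userspace probe is to ask for a short sleep and measure the oversleep -- a minimal sketch, no error checking (note this measures timer-tick granularity plus scheduling delay, which is why people resort to the RTC for finer numbers; running it SCHED_FIFO as root gives cleaner results):

          #include <stdio.h>
          #include <sys/time.h>
          #include <unistd.h>

          int main(void)
          {
              long worst = 0;
              for (int i = 0; i < 1000; i++) {
                  struct timeval a, b;
                  gettimeofday(&a, NULL);
                  usleep(10000);                               /* ask for 10 ms     */
                  gettimeofday(&b, NULL);
                  long late = (b.tv_sec - a.tv_sec) * 1000000L
                            + (b.tv_usec - a.tv_usec) - 10000; /* oversleep in usec */
                  if (late > worst)
                      worst = late;
              }
              printf("worst oversleep: %ld usec\n", worst);
              return 0;
          }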
  • Anyone know if Red Hat is planning on offering lower-latency kernel RPMs for those of us who are loath to patch and recompile a kernel JUST to try something new out and see if we like it? It's kind of nice if I can drop in a quick RPM, decide whether I like it, and THEN compile a trimmed kernel properly if need be.

    I'm just lazy. :-)
  • ...tency
  • What I'm missing in Clark Williams' paper is how the patches influenced the OS overhead.
    • Do you have a good suggestion on how this would be measured? Please, I'd love to try it out myself with the O(1)+preempt patches...

      • Given Clark Williams' test setup, the natural way to measure the impact of those kernel patches IMO would be to summarize the results of the different tests for each kind of test independently.

        The differences between the unpatched kernel and the patched versions would then give an estimate of how the patches influence overall system behaviour.

        The CPU-intensive tests are then an indication of what influence the patches have on pure OS overhead, while the IO-intensive tests show the (hopefully positive) effects of the latency reduction.

        A rough and unscientific measurement method, but easy to implement (e.g. just counting the number of times each particular test was run during the test period). Of course all the tests would influence each other, but that's not in opposition to the heuristic/stochastic test setup, and it's well in tune with the goal of improving real-time behaviour without negatively changing the overall efficiency of Linux.

        Just a dummies way to measure immeasurable things.
  • On a related note, Next Generation POSIX Threads (NGPT) [ibm.com] made it into the official kernel (2.4.19-pre3). Kudos to the team.

    So many very good things are happening to the Linux kernel! I am impressed.

      On a related note, Next Generation POSIX Threads (NGPT) made it into the official kernel (2.4.19-pre3). Kudos to the team.

      This must really piss off the folks who have worked hard on LinuxThreads (the thread support in glibc) over the years! A number of LinuxThreads shortcomings are due to kernel support not being forthcoming...

      Interesting times ahead...

  • QNX vs. Linux (Score:1, Informative)

    by Anonymous Coward
    I would like to see similar response graphs for QNX or other RTOS's for comparisons sake.

    Anyway, IMHO making a real assessment for any 'hard' realtime tasks is much too much effort for most of the readers here. =)

    But here are more white papers than you can shake a stick at....

    http://www.ece.umd.edu/serts/bib/index.shtml
    • Re:QNX vs. Linux (Score:2, Interesting)

      by Anonymous Coward
      "Interrupt and process latency

      All times given below are in microseconds (sec).

      Processor_______Context____Interrupt Latency
      Pentium/133_____1.95_______4.3
      Pentium/1 00_____2.6________4.4
      486DX4__________6.75_______ 7
      386/33__________22.6_______15

      With nested interrupts, these interrupt latencies represent the worst-case latency for the highest priority interrupt. Interrupt priority is user-definable and the interrupt latency for lower-priority interrupt sources is defined by the user's application-specific interrupt handlers. "
    • Ask and ye shall receive...

      Thorough evaluations of several RTOS's can be found here [dedicated-systems.com]. Free registration required.
      For those who choose not to read the report: the worst-case scheduling latency for QNX is about an order of magnitude better than a preemptive Linux kernel's (actually, Windows CE 3.0 appears to be considerably more deterministic than Linux too).

      More importantly, the latency in QNX is deterministic, while the scheduling latency under Linux (IIRC) grows linearly with the number of threads in the system.

  • Well, Windows CE 3.0 provides 50 ms latency response time running on a 166 MHz Pentium.
  • I never realized there was competition between the two. I did hear the low-latency crowd claim that it was lower risk due to its less invasive nature. However, that hardly says anything about the performance of either approach - or that they should be mutually exclusive.

    Two wrongs don't make a right, and vice versa (but two Wrights make an airplane).

    • Linux has a long-standing "kernel code is not preemptible" tradition, and quite a bit of design hinges on that presumption (see /usr/src/linux/Documentation/smp.tex for example). So naturally there has been some resistance against recklessly applying preemption wherever it appears (not) appropriate.

      In particular, Alan Cox (perhaps not coincidentally also the author of the document referenced above) has been hammering on the fact that preemption can break code that needs to consider the timing requirements of the hardware itself; simply put, you do not want code that is busy interfacing with a device to be preempted for possibly long periods of time, because the device might not have that kind of patience. So any kind of preemption patch needs to address these issues, and you end up touching lots of files just like the low-latency patch does. (A sketch of the kind of protection involved follows.)
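      Concretely, under a preemptible kernel a driver with hardware timing constraints has to bracket the critical stretch itself -- something like this sketch, with a hypothetical device and registers, using the preempt_disable()/preempt_enable() pair the preemption approach provides:

          preempt_disable();            /* no involuntary switch from here on     */
          outb(CMD_START, dev->port);   /* device wants the follow-up promptly... */
          udelay(5);                    /* ...datasheet says: wait 5 us, not 5 ms */
          outb(CMD_COMMIT, dev->port);
          preempt_enable();             /* preemption (and any pending resched) resumes */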

  • by hazzzard ( 530181 ) on Friday March 22, 2002 @05:03AM (#3206336)
    I am using a low-latency kernel on my notebook at the moment and I can report the following behavior:
    • In X (KDE), I can move windows around, load programs, webpages etc. without my MP3-player ever beginning to skip.
    • When doing massive file IO, the MP3-player begins to skip. tar cvzf file.tar.gz bla/ is still ok, but cp -R bla1 bla2 causes massive skipping.
    • When I use the notebook as a Samba server, things get worse: still massive skipping, and additionally Samba becomes dog-slow and even the mouse falls asleep.
    • Oftentimes, after such phases of heavy load, the skipping and sound distortion remain! So I have to reboot the machine from time to time to enjoy music again. Closing the player and opening it again is not enough; somehow, under heavy load, things get messed up enough to make recovery impossible.
    I did use the preemptive patch before, but performance under heavy load was even worse, and similar problems requiring a reboot occurred. I was using kernel 2.4.12 with preempt and I am using kernel 2.4.17 currently. The machine is a Celeron 466 with 128 megs of RAM. Still, the low-latency patch makes sense for machines that are primarily for playing MP3s and reading email (that's what my notebook is), but not for desktops with a wider variety of usage patterns. It's just not ready for primetime yet, but it's promising and fun!
    • Are you scheduling your MP3 player at a higher priority (*) than, say, your cp command? It is very important that you do this. While it might not fix the massive skipping, it could improve things a great deal. Granted, there are still lots of problems with the disk I/O subsystem in Linux that neither the preempt nor the ll patches fix right now, so the occasional skip under heavy load is guaranteed, especially since consumer-grade audio cards don't really cater to low latency.

      <plug>

      AlsaPlayer comes with a --realtime switch that enables SCHED_FIFO scheduling for the audio thread; this eliminates skipping for most people, but requires the binary to be run as root (or be suid root).

      </plug>

      -adnans

      (*) Scheduling something at a higher priority usually requires running the program suid root, or calling renice(8) as root with the program's PID. A better method is to sudo the nice command for the given user so he/she can nice the app at startup. (A sketch of the SCHED_FIFO route follows.)
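      For reference, this is roughly what such a --realtime switch boils down to -- a minimal sketch (needs root or suid root, no error recovery):

          #include <sched.h>
          #include <stdio.h>

          int main(void)
          {
              struct sched_param p = { .sched_priority = 10 };  /* FIFO prio 1..99  */
              if (sched_setscheduler(0, SCHED_FIFO, &p) != 0) { /* 0 = this process */
                  perror("sched_setscheduler (are you root?)");
                  return 1;
              }
              puts("now running SCHED_FIFO; the audio loop would go here");
              return 0;
          }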
    • There are four things you could look at.
      • VM swapping. The Linux VM still has a propensity to hold onto cached stuff even after physical memory has been depleted. Get more RAM or tune your VM.
      • Audio drivers. There are basically three alternatives for sound on Linux, the free OSS drivers, the for-pay OSS drivers, and the ALSA drivers. I've had very good results with the for-pay OSS drivers, but you should collect them all -- the sound distortion remaining even after closing/opening the sound device is a definite driver problem, possibly related to brain-damaged hardware getting DMA transfers wrong.
      • Disk speed or controller funkiness. Are there any known issues for your chipset/drive?
      • CPU speed. A mobile celeron 466 is simply not that fast, although your problem seems to be more I/O than CPU related.
    • Reports like these make me curious. I have a desktop with the exact same specs. Yet I have never experienced any skipping when using XMMS. Whether I am doing large file copies, compilation... nothing. This is under both Mandrake and Sorcerer. Is there a variable here which is being missed? I have never needed to renice anything.

    • Your problem likely has little to do with the way the kernel handles CPU scheduling, and more to do with the interrupts from your hard-disk controller and sound card. You might be able to test this to a degree by putting the MP3s on a ramdisk and trying the same operations.

      I had exactly the same kind of mp3 skipping problems on my desktop machine for a long time, until a friend introduced me to hdparm, which allows you to set your IDE controller to DMA mode. Try it. The difference is breathtaking. Combine that with 2.4.19-pre2 and preempt and you have a smoooth desktop system :)

      I just put the following in a startup script, and everything's highly froody.

      hdparm -d 1 -W 1 -u 1 /dev/hda
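      (For reference: -d 1 turns on DMA for the drive, -W 1 enables its write cache, and -u 1 unmasks other interrupts while the IDE interrupt is being serviced -- see hdparm(8); -u 1 can be risky on some chipsets, so check the man page warnings first.)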

    • Once I was a kernel-patching enthusiast, but a recent incident changed this.

      When I tried a kernel patched by some arbitrary person (i.e., not -ac or -aa, but someone appearing just several times a month on lkml) containing all kinds of new things like the new scheduler (which DOES help) and preempt, I found the system very stable after two days of desktop use. Then I decided to try `nice -n 30 make -j' on PRBoom, and the system stayed as responsive as ever. I was happy and posted on Slashdot about how slick it was just after the compilation ended successfully. Wanting to verify my results, I ran `make clean' and retried. Guess what? Solid lockup without any logs. I hit the reset button and tried the compilation again in console mode with only `top' running. Two out of three times the system locked up solid; Alt-SysRq just showed messages without actually doing anything.

      I then rebooted and ran the kernel for another two days in constant fear that it might mysteriously lock up at any minute. Luckily it didn't, and I then replaced the kernel with a -aa one, without most of the new features, but at least it is rock solid -- it hasn't crashed since. I think I will add the preempt (or ll) patches only after Marcelo or A.C. or A.A. incorporates them, or when I decide to do some kernel hacking.

      So my advice is to try arbitrarily patched kernels (I mean patches put together by yourself, or by someone else who did not test them extensively) only in deep-kernel-hack mode, just as when you try a kernel in which you modified several lines yourself. After all, many of us have been spoiled by Linux's stability, and our nerves are not quite prepared for a lock-up when facing a screen with WindowMaker (rather than Win98) on it.

  • Why the surprise? So many times I find that the best solution to a problem is a compromise between two or more extreme solutions.
  • by Anonymous Coward
    To paraphrase the great philosopher Hobbes, Linux is the worst of all possible worlds.
  • by mikera ( 98932 ) on Friday March 22, 2002 @06:12AM (#3206423) Homepage Journal
    Some very thoughtful analysis clearly went into this. It's well written up with explanations that hit the right balance of having the key technical details but focusing on the big picture of how to make applications run better under Linux. As a casual follower of kernel development, I now understand far more of the trade-off than I used to.

    I always think that tests and write-ups like this are a great way for people to contribute to Linux development without having to hack the kernel directly. There's no substitute for thorough testing to help you improve your designs and theories.

    Nice job!
  • by Chops ( 168851 ) on Friday March 22, 2002 @06:44AM (#3206486)
    From the article:
    Back in early November 2001, I started following a discussion between two factions of the Linux kernel community ... There were two main factions, the preemption patch faction and the low-latency patch faction. Both groups were very passionate (i.e. vocal) about the superiority of their solution.

    Er... while some misinformed folks have in fact been arguing over "which approach is better," both Robert Love [iu.edu] (preemption) and Andrew Morton [iu.edu] (low latency), the authors of the patches, have agreed since before November that a hybrid approach is probably correct, and it seems to me (though I don't speak for them) that they're faintly embarrassed [iu.edu] at the number of True Believers who have stepped up to champion one side or the other in this non-deathmatch. They're attacking different sections of the same problem.
    • some misinformed folks have in fact been arguing over "which approach is better"

      They are both wrong. The correct solution is to remodulate the preemption and vent the latency through the Bussard collectors.

      -
  • I wrote an article about low latency for audio applications under Linux; you can read it here if interested:

    http://linux.oreillynet.com/pub/a/linux/2000/11/17/low_latency.html

    It's more of a hands-on article -- it tells you how to do it yourself with Andrew Morton's patches.
  • by sagei ( 131421 ) <rlove@rBOYSENlove.org minus berry> on Friday March 22, 2002 @09:02AM (#3206789) Homepage
    First, I wanted to give my view of the results -- what they mean and what that implies. Note there are multiple notions of latency performance -- average latency and worst-case latency, among others -- but those two are the most important. This test measured worst-case latency. Both matter: for the user experience, average-case is very important, and for real-time applications, worst-case is very important.

    It is not a surprise that the low-latency patches scored better, or that the ideal scenario was using both. The preemptive kernel patch is not capable of fixing most of the worst-case latencies. This is because, since we cannot preempt while holding a lock, any long durations where locks are held become our worst-case latencies. We have a tool, preempt-stats [kernel.org], that helps us find these. With the preempt-kernel, however, average-case latency is incredibly low -- often measured around 0.5-1.5 ms. Worst-case depends on your workload, and varies under both patches.

    Now, the results don't mention the average case (which is fine), but keep in mind that with preempt-kernel it is much lower. The good thing about these results is that they show that certain areas do have long-held locks, and the preempt-kernel does nothing about them. Thus a combination of both gives an excellent average latency while tackling some of the long-held locks. Note it is actually best to use my lock-break [kernel.org] patch in lieu of low-latency in combination with preempt-kernel, as they are designed and optimized for each other (lock-break is based on Andrew's low-latency).

    So what is the future? preempt-kernel is now in 2.5 and, as has been mentioned, Andrew and I are working on the worst-case latencies that still exist. Despite what has been mentioned here, however, we are not going to adopt the low-latency/lock-break approach of explicit scheduling points and lock-breaking. We are going to rewrite algorithms, improve lock semantics, etc., to lower lock-held times. That is the ease and cleanliness of the preemptive-kernel approach: no more hackery to lower latency in problem areas. Now we can cleanly fix them and voila: preemption takes over and gives us perfect response. I did some lseek cleanup in 2.5 (removed the BKL from generic_file_llseek and pushed the responsibility for locking into the other lseek methods) and this reduced latency during lseek operations -- a good example (sketched below).

    So that is the plan ... it is going to be fun.
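    For the curious, the shape of that lseek cleanup is roughly the following -- an illustrative sketch, not the actual 2.5 diff (do_the_seek and the choice of lock here are stand-ins):

        /* before: every lseek, for every filesystem, serialized on the BKL */
        loff_t llseek_before(struct file *file, loff_t off, int origin)
        {
                loff_t ret;
                lock_kernel();
                ret = do_the_seek(file, off, origin);
                unlock_kernel();
                return ret;
        }

        /* after: the generic helper takes a lock local to the object it
         * touches, and only the ->llseek methods that truly need the BKL
         * take it themselves */
        loff_t llseek_after(struct file *file, loff_t off, int origin)
        {
                struct inode *inode = file->f_dentry->d_inode;
                loff_t ret;
                down(&inode->i_sem);              /* 2.4-era inode semaphore */
                ret = do_the_seek(file, off, origin);
                up(&inode->i_sem);
                return ret;
        }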

    • I've played around with both low-latency and preempt, and preempt "feels" smoother to me. Overall, the combination I like the most is preempt + lock-break.
    • Now, the results don't mention average case (which is fine), but keep in mind with preempt-kernel it is much lower.

      The results do mention the average latency. For the vanilla kernel it is 88.3 microseconds. For the low-latency patch it is 54.3 microseconds. For the preemption patch it is 52.9 microseconds. Is 52.9 much lower than 54.3?

  • Low latency, pre-emptive... all nice and good. However, what I really want is a super-fast connection between my database server and my application server. How much will the lower-latency patches affect throughput, given that I issue many small queries? (No way around it at the moment, so please don't flame (too hard).)

    Will Ethernet devices, TCP/IP stacks and the lot become more responsive? Will MySQL/PostgreSQL/SapDB/Oracle/DB2/Interbase be able to execute a small query even faster? How much?

    Actually, I hope to measure this sometime not too far into the future!
    • It is something I want to look at more closely. I like the pre-emptive + lock-break kernel on the desktop, so I tried it on a small server. With PostgreSQL the initial results were that the database was slightly faster with a standard 2.4.18 kernel: a rebuild of the database from the raw data was about 3 seconds quicker over a 4-minute period, and the search I was doing, which normally takes about 14 seconds, takes 16 seconds with the pre-emptive kernel.
  • When will this become stable enough for major distros to start using it?

    I don't think anyone doubts that this is a good approach. But both patches are still being worked on right now, and while the preempt patch has already been merged into the 2.5 kernel, the low-latency patch is still nowhere to be seen.

    I certainly think that this would indeed have a great impact on Linux multimedia, but not until a company like Red Hat or SuSE is willing to include it at least as an optional kernel. The reason is that a vendor doesn't have to support patches until they include them in one of their pre-compiled kernels.

    This might not mean much to home users, but a company will not rely on an unsupported feature.

    Like it or not, business still drives the industry.
