ArsTechnica Explains O(1) Scheduler

geogeek writes "The recent release of Linux's 2.6 kernel introduces a number of sweeping improvements. These can be hard to understand without a background in programming. This week's Linux.ars examines the improved scheduler for an enthusiast audience, concisely explaining its nuances and the practical effects of an O(1) design."
This discussion has been archived. No new comments can be posted.
  • Ars' Piece (Score:5, Interesting)

    by WarpFlyght ( 586558 ) <warpflyghtNO@SPAMtelltales.net> on Thursday December 25, 2003 @06:03PM (#7809832) Homepage
    I read this piece yesterday, and while it did "dumb down" the basics (as the first poster noted), I thought it did a very good job of putting it all into a nutshell that those of us not as familiar with Big-O and schedulers in general might easily understand. For Linux.Ars' format, I thought it was of appropriate length, and had enough detail to "belong." I'm sure there are more detailed writeups on the O(1) scheduler in place in 2.6. Does anyone have any links?
    • Re:Ars' Piece (Score:5, Informative)

      by sfe_software ( 220870 ) on Thursday December 25, 2003 @06:50PM (#7809993) Homepage
      Regarding "dumbing it down":

      There is one exception to this load balancing rule -- some special processes may be fixed to a certain runqueue. This attribute is called "thread affinity;" a thread is simply another name for a process.

      I think it over-simplified this a bit. Granted, Linux doesn't make all that much distinction between a "thread" and a "process" as compared to (say) Windows, but the distinction is still important.

      Otherwise, it's a decent article, and gave me some insight into one of the major improvements in 2.6 (which I've yet to try out).

      I remember when the "pre-empt" patch came out for 2.4, before it was integrated into the kernel. For a desktop (or otherwise interactive, like my media box) system, it really made a major perceived improvement in performance (while *actual* performance wasn't affected; it was just about priorities and scheduling, making it "feel" more responsive).

      The O(1) scheduling algorithm likely improves this further, and is enough to tempt me to spend XMas evening upgrading... these things are important, especially on a desktop system.

      Windows XP, for example, isn't all that great in this area, and often one process can too easily slow things down. This is even further emphasized when all the processes you're working with are actually the main "explorer.exe" process; eg, you do something in Windows Explorer that blocks (spin up a CD drive) and everything else (the "desktop", task bar, etc) all become unresponsive...

      I enjoy seeing such improvements in the Linux kernel. The pre-empt patch (later integrated into the mainstream kernel) made a drastic usability improvement (especially when the box is also a web server, MySQL server, firewall/router, etc) and I couldn't see a Windows box handling similar tasks without becoming unusable (note: not unstable, just not practical to use when responsiveness is hindered).

      I think the kernel team is on the right track with this, specifically for desktop applications (though it does help in many other applications as well; anything interactive, really, even over the network/Internet).
      • Re:Ars' Piece (Score:3, Insightful)

        by moonbender ( 547943 )

        For a desktop (or otherwise interactive, like my media box) system, it really made a major perceived improvement in performance (while *actual* performance wasn't affected; it was just about priorities and scheduling, making it "feel" more responsive).

        It doesn't just "feel" more responsive, it is more responsive! It's a matter of definition, but I'd certainly call that an improvement of performance. Taking from the article (or comp.sci classes): while the throughput remains equal - I suppose it's even reduced slightly by the extra context switches - the latency goes down.

        • It doesn't just "feel" more responsive, it is more responsive!

          I agree, though what I meant to say was that it feels faster. It certainly is more responsive, but it's not any faster.

          I recall when the pre-empt patch first came out, it was tested on a variety of systems. On a desktop-like environment it made a huge difference (responsiveness) but in a server environment, it made very little difference.

          So yes, it certainly is more responsive for any interactive application, but for sheer CPU loads it didn't make much of a difference.
          • I agree, though what I meant to say was that it feels faster. It certainly is more responsive, but it's not any faster.

            What you are probably trying to say by that, is that, while throughput is the same, latency has been decreased.

      • Re:Ars' Piece (Score:5, Interesting)

        by Sloppy ( 14984 ) * on Thursday December 25, 2003 @10:45PM (#7810724) Homepage Journal
        Windows XP, for example, isn't all that great in this area, and often one process can too easily slow things down. This is even further emphasized when all the processes you're working with are actually the main "explorer.exe" process; eg, you do something in Windows Explorer that blocks (spin up a CD drive) and everything else (the "desktop", task bar, etc) all become unresponsive...

        I actually saw this behavior once on a customer's machine. I closed the CD door, and the machine went unresponsive for several seconds. I was shocked. Then I realized: millions of people think this is normal. And those millions of people live in 2003, not 1983. It was like some sort of Lovecraftian revelation.

      • you do something in Windows Explorer that blocks (spin up a CD drive) and everything else (the "desktop", task bar, etc) all become unresponsive...

        Yes, this is MAJORLY bad. I've had whole machines become totally unusable (mouse moves, that's about it) simply because of a dirty CD put in the drive, or a buggy drive (perhaps the mechanism is loose, etc). Windows wants to down the entire machine to devote all computational power to waiting for the CD drive! Linux doesn't get off entirely scot-free, but it's definitely better about this.
      • Re:Ars' Piece (Score:5, Insightful)

        by spectecjr ( 31235 ) on Friday December 26, 2003 @05:02AM (#7811766) Homepage
        Windows XP, for example, isn't all that great in this area, and often one process can too easily slow things down. This is even further emphasized when all the processes you're working with are actually the main "explorer.exe" process; eg, you do something in Windows Explorer that blocks (spin up a CD drive) and everything else (the "desktop", task bar, etc) all become unresponsive...

        The kernel hasn't slowed down in that case; it's the GUI layer which has slowed down. It hit a bottleneck.

        Interestingly, it would appear that Windows NT and its successors have used an O(1) scheduler since at least NT 4.0 (which came out in 1996).

        It's also interesting to note that it has the same priority-boosting for IO threads that Linux has.

        "Windows NT also uses additional priority adjustments based on other events, such as momentarily boosting thread priority when it returns from an I/O call"

        Detailed explanation of the NT4.0 scheduler [microsoft.com]

        Easier to read version of the above [google.com]

        Part 1 of a slightly more indepth (but in some places inaccurate)* article [winntmag.com]

        Part 2 of the article [winntmag.com]

        *The article makes it sound like it's O(N) in places; it's not. The scheduler uses a set of priority lists, kept in order at insertion time: when a thread exits its context, it inserts itself at the end of the list for its priority.
    • Re:Ars' Piece (Score:4, Informative)

      by ikewillis ( 586793 ) on Friday December 26, 2003 @02:47AM (#7811543) Homepage
      I'm sure there are more detailed writeups on the O(1) scheduler in place in 2.6. Does anyone have any links?

      http://67.163.90.104/~jim/scheduler.html

      Unlike the Ars Technica article, which seems to be nothing more than an overview of how schedulers work in general, this article goes into great technical detail about the actual implementation details of Linux's O(1) scheduler.
  • Multi core proc's (Score:3, Insightful)

    by niko9 ( 315647 ) * on Thursday December 25, 2003 @06:05PM (#7809839)
    With all the recent talk about multi-core processors, will this sort of thing be relegated to hardware, or will there still be a need for a software scheduler?
    • by be-fan ( 61476 )
      A multi-core processor appears as multiple processors to the software. Even SMT processors appear as multiple processors to the software. So the software scheduler is needed to juggle the available processors among the threads competing for them.
    • Re:Multi core proc's (Score:5, Informative)

      by autopr0n ( 534291 ) on Thursday December 25, 2003 @06:23PM (#7809908) Homepage Journal
      With all the recent talk about multi core processors, will this sort of thing be relegated to hardware; or will there still be a need for software scheduler?

      Well, only if you want to run only as many threads as you have CPUs. The windows machine I'm using right now has 37 processes, some I'm sure with multiple threads. I suppose you could have a hardware scheduler, but I don't think it would be a good idea. Scheduling doesn't take much time (certainly not this O(1) jobby), so you would lose a lot of flexibility without a lot of gain.
      • Well, only if you want to run only as many threads as you have CPUs. The windows machine I'm using right now has 37 processes, some I'm sure with multiple threads.

        Did a quick check here, 54 processes, 461 threads. Something tells me I won't see a computer with 4-500 CPU cores in my desktop, ever. Not to mention that if you did, I'd want some to be more important. I'm sure many of those threads are just sitting there waiting for input, e.g. GUI threads...

        Kjella
    • I'm not sure I understand how a hardware scheduler would work. It would need to keep track of the process's state, including virtual memory and registers, no? I don't know how the hardware would do that without the OS's support. You would need a software scheduler to manage these things; otherwise, it seems like you'd be limited to one process per CPU. Correct me if I'm wrong, though.
      • Well, the 80386 and later have some simple instructions that let you fiddle with the current task state. IIRC, Windows does not use these anyway, because the processor was neither robust nor fast at task switching. It takes a ton of cycles (or at least it did; somewhere in the one-to-two hundreds) to flip to a different task state on a processor. At least, that's what my processor manual says.

        And yes, it would still require OS support, because the processor would have no idea when the current task is complete.
    • ...we could have a standardized hardware scheduler chip that would be uber-fast. Unfortunately, they won't standardize, and to ice the cake there's no truly 'optimum' scheduler for all systems and user loads, so it looks like you're SOL.
  • initramfs (Score:2, Interesting)

    I set up a small computer today with 2.6. It boots from an initrd, but I noticed during bootup the kernel will pause for a few seconds with:

    checking if image is initramfs...it isn't (no cpio magic); looks like an initrd

    I couldn't google much info on it. Anybody know more about it? Or how I can stop the kernel from checking for it?
    • Re:initramfs (Score:3, Interesting)

      by njdj ( 458173 )
      checking if image is initramfs...
      I couldn't google much info on it. Anybody know more about it?


      The short answer is that the purpose of this message is to help people familiar with the kernel diagnose problems which prevent completion of bootup. Your system works, so ignore it. Preventing the message is more trouble than it's worth.

      If you are interested, there is a little more info here [linux-mag.com].
  • by ctrl-alt-elite ( 679492 ) on Thursday December 25, 2003 @06:07PM (#7809849)
    ...can be found here [washington.edu] (PDF file).
  • Hmm. (Score:5, Interesting)

    by Anonymous Coward on Thursday December 25, 2003 @06:11PM (#7809866)
    I clicked on this article expecting to see an explanation of how it was that the O(1) scheduler worked, and by what tricks it was able to schedule in O(1) rather than having to spend extra time as extra processes are added, and what the real-life effects of such a situation are.

    Instead the article was just "This is what O(1) is. This is what a scheduler is. This is what "preemptive" means. The 2.6 SMP can load-balance effectively across many processors. I used this really cool mp3 player this week." and just about nothing else.

    Anyone want to explain exactly how it is that an O(1) scheduler is a difficult thing, and how exactly linux 2.6 achieves it?

    -- Super Ugly Ultraman
    • Re:Hmm. (Score:5, Informative)

      by AmirPC ( 735533 ) on Thursday December 25, 2003 @06:21PM (#7809904)
      I'm the author of that article...

      The original article was MUCH MUCH more in-depth; it was toned down because of the "glaze over" effect people were getting. I didn't have much say in the editorial process.

      I think this way at least it appeals to a broader audience, not just the CS audience who would know what a binary min heap is if I were to use it.

      To each his own I suppose.
      • Re:Hmm. (Score:2, Interesting)

        by WarpFlyght ( 586558 )
        I'm sorry to hear that some of the article's technical details were edited out. I can understand why the editors would do so, though. Ars obviously isn't only targeting Slashdot readers, hehe.

        Would you be willing (or able) to post your original version of the article somewhere for interested parties to read?
        • Oftentimes Ars articles are written at a much higher level than the /. crowd would prefer. Their earlier articles on CPU technology sold me on the site, although I haven't seen much of that kind of reporting in the last year or two. Basically, they *have* been playing down to the Slashdot level.
      • sidebars??? (Score:3, Insightful)

        by mefus ( 34481 )
        Isn't that what a sidebar is for?

        Couldn't they have linked out to more in-depth treatments, keeping the complexity for the technically interested reader while preserving 'readability' for the non-technical people?
      • by vrt3 ( 62368 ) on Thursday December 25, 2003 @07:39PM (#7810164) Homepage
        In the article, you write:
        It is important to remember that a process can be stopped in the middle of its timeslice; this is the purpose of preemption.
        The purpose of preemption is not to stop processes in the middle of their timeslice; the purpose is to stop them when the scheduler decides to, not only when the process itself decides to give another process a chance to run. The latter is called cooperative multitasking, and is used in Windows 3.x and MacOS prior to X.

        Timeslices are used to implement preemption: a process stops either when it blocks (waiting for I/O) or otherwise voluntarily gives up the rest of its timeslice, or when its timeslice is used up.

        About the article in general: I'm surprised about the dumbing down. Other articles on Ars, including but not limited to the series about processor technologies (e.g. the one about hyperthreading) are much more thorough and detailed.

      • Re:Hmm. (Score:3, Informative)

        A thread is not the same thing as a process. I hope that was the editors' fault too.
      • Re:Hmm. (Score:5, Interesting)

        by Crypto Gnome ( 651401 ) on Thursday December 25, 2003 @10:06PM (#7810594) Homepage Journal
        Personally, I *really* like the idea of split-brained articles. (ie such a shame that wasn't done here)

        By that I mean the 'default'/'main' article is the not-too-geeky/Joe-Sixpack-reads-Slashdot version, AND includes a link for "those who want the gory details".

        That way the editors have an article which appeals to the vast majority of "reasonably intelligent" people, as well as catering to the hardcore techno-savvy crowd. (i.e., wrote a brainfuck program before breakfast)
      • I think this way at least it appeals to a broader audience, not just the CS audience who would know what a binary min heap is if I were to use it.

        I have a modest suggestion for you to relay to your editor. There's this really cool feature of HTML that lets you put in a 'link' to another document. If your 'dumbed-down' article would use one or more of these to let the more technically-minded pursue that detail, you'd have the best of both worlds - an Executive Summary for PHBs and the nitty-gritty look under the hood for the rest of us.

      • The original article was MUCH MUCH more in-depth; it was toned down because of the "glaze over" effect people were getting. I didn't have much say in the editorial process.

        Good lord... considered syndication, then? Because that article was absolutely empty of useful content, and I'd like to get to the actual meat of it if you did indeed put technical detail in there.

        Seriously, demand a reprint. They are destroying your career over there by making you look about as knowledgeable as a newbie.
    • That would be handy; I don't quite understand why O(1) is so hard, unless it is all related to priority scheduling checks. Anyone have links/explanations?
    • Re:Hmm. (Score:5, Informative)

      by KrispyKringle ( 672903 ) on Thursday December 25, 2003 @06:46PM (#7809979)
      this [slashdot.org] guy says that the scheduling itself is O(1) but the balancing is O(n). I don't know this myself, but that makes a whole lot more sense; according to the Ars article, the scheduler simply keeps two lists of processes--which it can seemingly do in constant time--but of course the load balancing is proportional to the number of processes (times some constant--you remove constant multipliers in big-O--to do the context switching).
      • Re:Hmm. (Score:2, Interesting)

        by AmirPC ( 735533 )
        Balancing is never called O(1) in the article. I guess it was somewhat unclear, but the scheduler itself is O(1) while load balancing is not constant. I guess it really depends on where you draw the line between the scheduler and the rest of the kernel.
    • Re:Hmm. (Score:5, Informative)

      by windows ( 452268 ) on Thursday December 25, 2003 @07:03PM (#7810042)
      I'm not particularly familiar with the O(1) scheduler, but I know enough about it to give an explanation that I believe is accurate enough.

      There are two queues used in the Ingo scheduler. One contains processes that are ready to run. The other contains processes that have already run. The run queue is scanned, and when a process is found that can be run, it is automatically run without scanning the rest of the queue. When the process finishes its time quantum, it is put into the second queue.

      When the first queue (the active one) is empty, the queues are switched. This is just done by swapping the pointers to the queues.

      This scheduler is O(1) because it runs the first runnable process it finds without scanning the entire queue, as an O(n) scheduler would.

      If you want a bit more information on it, search for the "Ingo scheduler" on Google or your other search engine of choice. I don't see any detailed explanations of what happens, but I suspect as the 2.6 kernel goes into wider use, those will come.
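
      To make the two-queue trick concrete, here is a minimal C sketch of the idea (illustrative names and layout, not the actual kernel source; the real 2.6 scheduler keeps 140 priority lists inside each array):

      #define NPRIO 140                     /* fixed number of priority levels */

      struct task;                          /* details don't matter here */

      struct prio_array {
          struct task *queue[NPRIO];        /* one FIFO list head per priority */
      };

      struct runqueue {
          struct prio_array *active;        /* tasks with timeslice remaining */
          struct prio_array *expired;       /* tasks that used their timeslice */
          struct prio_array arrays[2];      /* backing storage for the two */
      };

      /* When the active array runs dry, "recharging" every task's timeslice
       * is just a pointer swap -- the cost never depends on the task count. */
      static void swap_arrays(struct runqueue *rq)
      {
          struct prio_array *tmp = rq->active;
          rq->active = rq->expired;
          rq->expired = tmp;
      }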
    • Personally, I didn't feel it said much until you got here:

      "The scheduler keeps track of all these timeslices with two lists of processes. The first list contains processes that have yet to use their timeslice; the second contains those processes which have used their time. When a process uses its timeslice, the scheduler calculates a new timeslice (...). The process then gets inserted in the second list. When the first list becomes empty, the second list takes the place of the first, and vice-versa."

      That's the part that actually explains how it works.
    • Well... lots of schedulers at least look at each process when context switches occur (e.g. to keep them sorted in priority order)--that's O(n).

      The article appears to say that 2.6 divides processes into a fixed number of priority levels, rather than having what could in theory be a large number of possible priorities--and that's what lets you use constant time methods.

      Once you set that fixed number, you can do things like keep an array of lists subscripted by priority level, and then a single value whose bits tell you which lists are non-empty -- a find-first-set on that value locates the highest-priority runnable process in constant time.
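
      A toy C sketch of that bitmap trick (all names here are illustrative, not the kernel's; GCC's __builtin_ctzl stands in for a find-first-set instruction):

      #include <stddef.h>

      #define NPRIO 140
      #define WORD_BITS (8 * sizeof(unsigned long))
      #define NWORDS ((NPRIO + WORD_BITS - 1) / WORD_BITS)

      struct task { struct task *next; };

      static unsigned long bitmap[NWORDS];   /* bit p set => queue[p] non-empty */
      static struct task *queue[NPRIO];      /* FIFO head per priority level */

      /* The scan is bounded by NWORDS (3 words for 140 bits on a 64-bit box),
       * a fixed cost that never depends on how many tasks exist. */
      static int first_set_prio(void)
      {
          for (size_t w = 0; w < NWORDS; w++)
              if (bitmap[w] != 0)
                  return (int)(w * WORD_BITS) + __builtin_ctzl(bitmap[w]);
          return -1;                          /* nothing runnable */
      }

      static struct task *pick_next(void)
      {
          int prio = first_set_prio();
          return (prio < 0) ? NULL : queue[prio];   /* head of that FIFO */
      }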
    • Anyone want to explain exactly how it is that an O(1) scheduler is a difficult thing

      Recently I have seen a lot of Dodge(TM) commercials where they say something to the effect of "it's got a Hemi". I presume it is not supposed to be some sort of self-parody, which leads me to think they must have some evidence that some people will somehow think that if "it's got a Hemi", that must be something cool. Like "Intel Inside" or "hyperthreading". From a technical perspective, I do not think there is anything particularly special about it.

      • Hmm, it's like this:

        O(1) Linux scheduler : Linux 2.4 :: Hemi : Dodge's old standard-issue V8.

        The Hemi is a design that has all kinds of horsepower compared to a regular V8 and tends to meet emission standards and such. So you really do get more power with a Hemi. Therefore your comparison is quite valid. :) And your conclusion of "don't care" is still just as valid. You just missed the "act on conclusion" part, which is to go away.

  • Yay for Ars (Score:4, Interesting)

    by be-fan ( 61476 ) on Thursday December 25, 2003 @06:12PM (#7809872)
    While Ars definitely isn't targeted at the same audience as, say, KernelTrap, it's nice to see there are a few technology websites/publications that aren't dumbed down. I remember when Byte magazine used to publish articles detailing the PowerPC architecture, down to the level of registers and the types of pipelines in the first set of implementations. Compare this to the ZD rags, which are a hair away from calling the CPU the "brain" of the computer!
    • While Ars definitely isn't targeted at the same audience as, say, KernelTrap, it's nice to see there are a few technology websites/publications that aren't dumbed down.
      Are you joking? I guess you missed this [slashdot.org].
  • Interesting (Score:5, Informative)

    by Gary Whittles ( 735467 ) on Thursday December 25, 2003 @06:31PM (#7809939) Journal
    The scheduler is a central part of the kernel, so it's an important thing to understand. This article describes it well, but just to summarize a bit more technically, the benefits of the O(1) scheduler are:
    • Lean and mean (low overhead).
    • Scales well with the number of tasks (O(1)).
    • Scales well with the number of processors (O(1) for scheduling, O(N) for balancing).
    • Strong affinity avoids tasks bouncing needlessly between CPUs.
    • Initial affinity makes it likely that request/response-type tasks stay on the same CPU (i.e., good for LMbench lat_udp et al)
    BTW, it's good to see that the starvation and affinity problems that plagued the early versions of the O(1) scheduler have been ironed out.
  • by abhikhurana ( 325468 ) on Thursday December 25, 2003 @06:46PM (#7809981)
    That was kinda interesting, though it doesn't really explain how they achieve the O(1)ness, or, for example, how kernel 2.4 fared in comparison. Anyway, an O(1) scheduler is nice and all under low to moderate loads, but at high loads, when a lot of threads are running, the O(1) behaviour should ensure that tasks switch extremely fast, and since every context switch has some overhead in terms of saving the context etc., that means the performance of the O(1) scheduler should be worse than that of 2.4 (where tasks don't switch as fast), in theory at least. So is this really true?
    • They achieved O(1)ness by making it extremely rare to have to do a linear search of more than one element. Not impossible, but very rare.

      There are 140 possible priorities. Per processor. Each priority has its own pair of queues -- runnable-and-not-yet-run (active), and runnable-but-waiting-for-more-time (expired). (If a process isn't runnable, it doesn't even appear in the run queue; already an improvement over 2.4.) In the 2.4 scheduler, the system task list WAS the runqueue list, and you had to inspect every task on it to pick the next one to run.

  • by andy666 ( 666062 ) on Thursday December 25, 2003 @07:06PM (#7810059)
    O(1) doesn't mean the time is constant, but that in the limit it is bounded by a constant. It can be enormous for some inputs, then shrink for large inputs. It could go to 0, in fact.
    • That is not entirely true. It means that for the worst-case input, there exist a constant k and a smallest number of processes n0 such that the following holds:

      Time_To_Switch(n) < k for all n > n0

      That is not exactly the same as your definition.
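
      For reference, the textbook statement both definitions are circling, with T(n) the worst-case scheduling cost for n runnable processes:

      T(n) = O(1) \iff \exists\, k > 0,\ n_0 \ge 0 \ \text{such that}\ T(n) \le k \ \text{for all}\ n \ge n_0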
    • ...I don't think I've ever seen an algorithm where O(1) didn't imply that the time was constant. They concentrated on the important part, O(1) vs O(f(n)). OK, maybe you can construct a function that is O(1) and non-constant, but it's not very relevant to this article, or even to the time complexity of algorithms in general.

      Personally, I'd think that the difference between "constant" and "bounded by a constant" disappeared in the "dumbing down". Once you start talking about "being bounded in the limit by a constant", most people won't follow.
      • I don't think I've ever seen an algorithm where O(1) didn't imply that the time was constant.

        Umm, how about the O(1) scheduler presented in the article? When the active queue is empty, there is the extra time of switching the queues. So if

        n is the time to pick a process from a nonempty queue

        and

        p is the time to switch the queues

        then the time n + p is bounded, but frequently the scheduler takes only time n.
      • How about hash table insertion? The time it takes to do an insertion depends on whether there is a collision, but it is still bounded by a constant that does not depend on the size of the input.
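
        A minimal C sketch of one way that works out in practice -- a chained hash table with head insertion, where an insert performs the same few steps whether or not the bucket already holds colliding entries (names illustrative):

        #include <stdlib.h>

        #define NBUCKETS 1024

        struct node { int key; struct node *next; };

        static struct node *buckets[NBUCKETS];   /* chained hash table */

        /* Head insertion: a fixed handful of operations on every call,
         * none of which depend on how many items are already stored --
         * hence bounded by a constant. */
        static int insert(int key)
        {
            struct node *n = malloc(sizeof *n);
            unsigned b = (unsigned)key % NBUCKETS;

            if (n == NULL)
                return -1;
            n->key = key;
            n->next = buckets[b];
            buckets[b] = n;
            return 0;
        }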
  • by Anonymous Coward
    Is this new? BSD, and VMS before it, used the same mechanism -- priority queues -- for twenty years. The VAX architecture even had instructions to support this; it could be done in 28 instructions. Here is some BSD code:

    static struct rq queues[32];      /* one run queue per priority band */
    static u_int32_t queuebits;       /* bit i set => queues[i] is non-empty */
    int qi;

    /* Enqueue: fold the priority into one of 32 bands and mark that band. */
    qi = (current->priority >> 2);
    SetBit(&queuebits, qi);
    InsertQueue(&queues[qi], current);

    /* Dequeue: find-first-set picks the best non-empty band in constant
     * time; clear the bit when the band empties. */
    qi = FindFirstSet(queuebits);
    if (RemoveQueue(&queues[qi], &current) == Q_EMPTY)
        ClearBit(&queuebits, qi);

  • by Anonymous Coward
    Let me explain: the Ars article says

    When it comes time to select a process, the scheduler finds the highest priority level containing available processes. It then selects a process from the desired priority level. Because the number of priority levels is fixed, the scheduler always takes a constant amount of time.

    Yes, but it does need to inspect all the processes within a priority level, right?

    As the total number of processes grows, the average number of processes within a priority level also grows.

  • by Animats ( 122034 ) on Thursday December 25, 2003 @08:13PM (#7810276) Homepage
    For historical reasons, revolving around the "process table" design in PDP-11 UNIX, the schedulers in UNIX-like OSs were originally very dumb. The one in V6 UNIX was so bad that if three processes were compute-bound, the scheduler would stall.

    Mainframe schedulers have been O(1) for a long time. The one for UNIVAC 1108 EXEC 8 was O(1) in 1967. (And you can still buy a Unisys ClearPath server with that code in it.)

    The traditional implementation is a set of FIFO queues, one for each priority. For systems with only a few priorities, that's just fine. As the number of different priorities increases, you spend too much time checking the empty queues, so there are tree-like structures to get around that problem.

    Scheduling today is more complex because of cache issues. If you context-switch too much, you thrash in the CPU caches. On the other hand, everybody has so much memory today that paging is mostly unnecessary.

  • by blixel ( 158224 ) on Thursday December 25, 2003 @08:20PM (#7810298)
    When a process uses its timeslice, the scheduler calculates a new timeslice by adding the dynamic priority bonus to the static priority. The process then gets inserted in the second list. When the first list becomes empty, the second list takes the place of the first, and vice-versa. This allows the scheduler to continuously calculate timeslices with minimal computational overhead.

    Sounds like the rules to an AD&D adventure.
  • Would tell you enough to understand how it works. We already knew what O(1) means.
  • by ortholattice ( 175065 ) on Thursday December 25, 2003 @09:05PM (#7810435)
    I think this may be important:

    The benefits of preemption are great enough that the kernel developers decided to make the kernel itself preemptible. This allows a kernel task such as disk I/O to be preempted, for instance, by a keyboard event. This allows the system to be more responsive to the user's demands. The 2.6 kernel is able to efficiently manage its own tasks in addition to user processes.

    One of the most time-wasting, counterproductive things for me on Windows, and to a lesser extent on Linux, is that background tasks can essentially take over the computer because of disk thrashing. Sometimes (on Windows XP) I've seen it literally take minutes to bring a window to the front and repaint it while the disk is thrashing doing god-knows-what in the background. It is impossible to get any serious work done when that is happening. Setting my process to highest priority (on Windows) doesn't seem to help when disk thrashing is involved. I can understand how it might take a few seconds to swap another process out and bring mine in, but not minutes.

    Now I don't know exactly what's happening behind the scenes, but as soon as I click on a window to bring it in focus, I want everything on the computer to be devoted to doing just that until it's done, regardless of what disk blocks from other processes are in the queue. My disk block requests should go in front of all of the others unconditionally. In other words, I should see essentially no more latency than if the computer were completely idle in the background, other than just the minimum delay required to bring it back into memory if it's swapped out. For all I care, just completely freeze all other processes on the computer until my request is finished, or at least give me the option to specify that it be done that way. My productivity should override everything else on my own desktop/workstation computer.

    So, educate me about what I may be overlooking here, and will 2.6 help this? I mean for Linux of course, not Windows, which may be a lost cause (:
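
    For the CPU side of that wish, the standard POSIX real-time scheduling call already exists; here is a minimal sketch of a hypothetical "boost the focused process" helper (the pid would come from the window system, the priority value is arbitrary, it typically needs root, and it does nothing about the disk queue, which is the real complaint above):

    #include <sched.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* Hypothetical helper: move one process into the real-time FIFO class
     * so the scheduler always runs it ahead of normal tasks. CPU only --
     * it does not reorder pending disk requests. */
    static int boost(pid_t pid)
    {
        struct sched_param sp = { .sched_priority = 50 };  /* 1..99 on Linux */

        if (sched_setscheduler(pid, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");  /* typically requires root */
            return -1;
        }
        return 0;
    }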

    • What you're seeing is the problem with virtual memory implemented via paging. It works OK if you're only swapping a little bit, but if you have to swap a lot, it's terrible, since you're only doing it 4K at a time. Swapping a page to disk has very high overhead, especially when you factor in the disk seek time. So when you're doing a lot of it, the system grinds to a halt. Pentiums support 4 MB pages, but when pages get that big, you end up swapping stuff back in very soon after you swapped it out.
    • The computer *is* devoting everything to getting your window to the front.

      The application wasn't being used, so its memory was swapped to disk; now that memory is needed again, so it needs to be brought back. But before it is brought back, some space needs to be cleared, which means writing some existing data from memory to disk (which is a slow process).

      The solution is simply to have enough memory installed for your working set of applications (which may mean installing more memory, running fewer applications at once, or using lighter-weight ones).
    • by borgasm ( 547139 ) on Thursday December 25, 2003 @11:52PM (#7810986) Journal
      Thrashing occurs when you have to page out to virtual memory that is mapped to the hard drive. Your computer ends up spending more time moving data in and out of virtual memory than doing the actual computation.

      This usually happens when a process is in a loop, and repeatedly uses the same memory blocks, but can't keep them all in physical memory. Thus, it has to replace the pages every time it accesses/modifies them.

      Even if a process is at a low priority...if it comes on the CPU, it has the potential to thrash memory and slow the works down.

      If an application is thrashing, then you might be able to get away with scheduling it so it doesn't hit the disk. If the OS is thrashing, you are SOL.
      • Even if a process is at a low priority...if it comes on the CPU, it has the potential to thrash memory and slow the works down.

        Exactly, this is the problem. I want the ability to force all (non-critical, non-realtime) processes not associated with whatever window is currently in focus to freeze completely, until the process associated with the window in focus is idle, waiting for neither CPU nor disk. Or until the user brings another window into focus, in which case this behavior would immediately shift to the newly focused window's process.

        • by Halo1 ( 136547 ) on Friday December 26, 2003 @04:23AM (#7811715)
          Exactly, this is the problem. I want the ability the force all (non-critical, non-realtime) processes not associated with whatever window is currently in focus to freeze completely, until the process associated with the window in focus is idle, waiting for neither CPU nor disk.
          The problem with this scheme is that you can get deadlocks. Suppose the high and low priority processes share a block of memory, and that the high priority process is waiting for a spinlock in that block of memory to be cleared (basically a variable which is currently 1 and which has to be set to 0 by the process/thread that owns the lock) by the low priority process.

          In your scheme, the low priority process will never get a chance to run, because the high priority process is running all the time. So the high priority process keeps running forever doing nothing useful.

          In other words I am suggesting a system option to do this: if the process associated with the window that's in focus is not in the idle state, make everything else freeze.
          So if your email client was checking mail in the background, its connection will time out, file sharing will completely die, background printing fails due to communication time-outs, ...
          Or even this: an option to freeze everything else unless the process associated with the window that's in focus has been idle for at least x seconds (user settable). This latter would prevent swapping out while the user is typing.
          These kinds of things were possible in older systems (e.g. Mac OS 9 and below froze every other application while you were dragging a window around), but modern operating systems cannot work that way anymore. You don't want downloads to stop when they're in the background, firewalls to stop working because they're never the frontmost application, etc...
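
          A toy pthreads sketch of that failure mode (hypothetical scenario; under normal scheduling this program finishes, but freeze the low-priority thread while the high-priority one stays runnable and the flag never clears):

          #include <pthread.h>

          static volatile int locked = 1;   /* "spinlock" held by the low-prio thread */

          /* The high-priority thread: spins, consuming CPU, making no progress. */
          static void *high_prio(void *arg)
          {
              (void)arg;
              while (locked)
                  ;                         /* never blocks, never finishes */
              return NULL;
          }

          /* The low-priority thread: would release the lock -- if it ever ran. */
          static void *low_prio(void *arg)
          {
              (void)arg;
              locked = 0;
              return NULL;
          }

          int main(void)
          {
              pthread_t hi, lo;
              pthread_create(&hi, NULL, high_prio, NULL);
              pthread_create(&lo, NULL, low_prio, NULL);
              /* With normal scheduling both threads finish. Freeze low_prio
               * while high_prio stays runnable, and high_prio spins forever. */
              pthread_join(hi, NULL);
              pthread_join(lo, NULL);
              return 0;
          }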
          • These kinds of things were possible in older systems (e.g. Mac OS 9 and below froze every other application while you were dragging a window around), but modern operating systems cannot work that way anymore.

            It is too bad they weren't designed differently, so that they could work this way. There are only a few realtime exceptions on a desktop machine - MP3 players, CD-ROM burners (which I think are flawed this way; maybe in the future RAM will be so cheap the necessary minimum data can be pre-buffered locally).

            • "I think that the way most timeouts are designed is fundamentally flawed. A good timeout protocol will always have a way for the client (or server) to say, "I'm busy and can't send more data now, but I'm still alive, so don't disconnect me." Anytime there is a hard timeout, the number must be selected according to human judgment, and there will always be unanticipated situations where it is too short or too long. But I guess it's too late to change this in many existing designs."

              Your proposal appears badly thought out.
    • This is more of a virtual memory or disk scheduling issue than a process scheduling one. Even if a process has full access to the CPU, it still can't run if the code or data it needs is not paged into RAM, or still needs to be loaded in initially from disk.

      Especially in Windows, it seems like if a process is hammering the disk in the background, other processes can be starved from accessing the disk. Windows seems especially braindead at times when multiple processes fight over the disk, and the disk is the bottleneck.
  • I used to run BeOS on a dual-604e 9600 Mac; it felt waaaay more responsive than my RH9 on a 128MB 500MHz laptop (Dell Inspiron 7.5k, which was PC of the year a few years ago). Thrashing is a big reason -- I've got more running on Linux, I'm sure -- but BeOS' preemptive multitasking just made it feel like the problems which plague all Windows systems, at least, were nonexistent on BeOS.

    Is the new kernel going to reach/exceed this level of performance for the desktop? This would be a big thing for the corporate desktop.

    • The scheduler is unlikely to be the reason BeOS seems faster. The big problem you're dealing with on your Linux box is the RAM: unless you're running a lightweight window manager and a small set of fairly tightly coded applications you don't have nearly enough RAM for practical use. Either add more RAM, or get rid of all that "gui desktop" bloat.

      And if you think Linux is bad when it's thrashing, try running BeOS with insufficient memory. You'll wish you were back in Windows 3.11 on a 486/25.
  • ...why Ars gets so much press space here on Slashdot? I mean, I also write a lot of articles about technical stuff, but I never get any press space.

    Has /. become a front for AT?
  • As far as I can tell from the description in the article, there's nothing particularly new in this O(1) scheduler: what was described could have been the scheduler on just about any OS I can think of: one queue per priority level, pick the next available job from the highest level queue. Could have been taken from the Amiga manuals in 1985... either there's something missing from the description of the new algorithm or there was something really strange about the 2.4 scheduler.

    What did it used to do?
  • by javabandit ( 464204 ) on Friday December 26, 2003 @10:55AM (#7812500)
    The article was okay, I guess. Very, very elementary. I would assert that most schedulers are O(1), though. It's really nothing new. This article was probably good for someone who has never heard of a scheduler or worked with one.

    Although the article was about scheduling, I would have liked to see some more text about Big-O notation and why it is important to programmers. To me, any good programmer should have a good grasp of Big-O notation and how it relates to traversing data structures (such as trees), et cetera.

    The article did not make a point as to why good schedulers use priority queues. Which is simple, really. All priority queues are O(1) for getting the next item in the queue. It has nothing to do with scheduling, but it is the nature of the priority queue itself... as a data structure.

    I would have liked to have seen the article in its whole form. I wonder if it went into more depth about Big-O, et cetera.
  • by IGnatius T Foobar ( 4328 ) on Friday December 26, 2003 @11:27AM (#7812696) Homepage Journal
    Windows XP, for example, isn't all that great in this area, and often one process can too easily slow things down. This is even further emphasized when all the processes you're working with are actually the main "explorer.exe" process; eg, you do something in Windows Explorer that blocks (spin up a CD drive) and everything else (the "desktop", task bar, etc) all become unresponsive...

    Unfortunately, even the best kernel process scheduler in the known universe would not fix this design flaw in Microsoft Windows, because what you're seeing is not a thread/process scheduling problem.

    As you correctly observed, many Windows programs are forced to make "shell calls" -- which doesn't mean the same thing it does in Unix, where you fork a /bin/sh and tell it to do something for you. It means that you send "events" to the currently-running EXPLORER.EXE with an idea of what you want it to do for you.

    The reason it bogs down when it's busy is that everything waits on a single event queue. Mounting/unmounting of media, network lag, other processes sending/receiving messages, etc. all give EXPLORER.EXE more events to wait on. That's why it's a bottleneck, even when there's plenty of CPU to go around.

    It's an awful design, and it's one of those things that's fundamentally broken about Windows.
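
    A minimal C sketch of why a single message queue behaves this way (the standard Win32 pump shape; the WM_DEVICECHANGE/Sleep pairing is only a stand-in for a blocking device wait, not what Explorer actually does):

    #include <windows.h>

    /* Every message for this window funnels through one queue and one
     * thread. If any handler blocks, nothing else gets processed. */
    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        if (msg == WM_DEVICECHANGE)    /* stand-in: e.g. a CD spinning up */
            Sleep(5000);               /* blocks the ENTIRE message loop */
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    /* The pump: one message at a time. While WndProc sleeps above, paints,
     * clicks, and taskbar updates pile up unprocessed behind it. */
    void pump(void)
    {
        MSG m;
        while (GetMessage(&m, NULL, 0, 0) > 0) {
            TranslateMessage(&m);
            DispatchMessage(&m);       /* synchronously calls WndProc */
        }
    }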
