Deadline Scheduling Proposed For the Linux Kernel

c1oud writes "At the last Real-Time Linux Workshop, held in September in Dresden, there was a lot of discussion about enhancing the real-time capabilities of Linux by adding a new scheduling class to the kernel. According to most kernel developers, this new scheduling class should be based on the Earliest Deadline First (EDF) real-time algorithm. The first draft of the scheduling class was called 'SCHED_EDF,' and it was proposed and discussed on the Linux Kernel Mailing List (LKML) just before the workshop. Recently, a second version (renamed 'SCHED_DEADLINE' at the request of some kernel developers) was proposed, and the code has been moved to a public git repository on Gitorious. The implementation is part of ACTORS, an FP7 European project financially supported by the European Commission. More details are available."
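
For the curious, here is a rough sketch of what configuring a deadline task from userspace can look like. The struct layout, policy number, and syscall below follow the sched_setattr() interface that SCHED_DEADLINE eventually shipped with in mainline Linux; the patches discussed here are earlier proposals and may use different names and semantics, so treat this as illustrative only (it also assumes kernel headers new enough to define SYS_sched_setattr).

    /* Sketch: declare this thread as a deadline task.
     * Times are in nanoseconds: up to 10 ms of CPU time,
     * due within 30 ms, every 100 ms. */
    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #ifndef SCHED_DEADLINE
    #define SCHED_DEADLINE 6
    #endif

    struct sched_attr {
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;      /* used by SCHED_OTHER */
        uint32_t sched_priority;  /* used by SCHED_FIFO/RR */
        uint64_t sched_runtime;   /* worst-case execution time per period */
        uint64_t sched_deadline;  /* relative deadline */
        uint64_t sched_period;    /* activation period */
    };

    int main(void)
    {
        struct sched_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size           = sizeof(attr);
        attr.sched_policy   = SCHED_DEADLINE;
        attr.sched_runtime  = 10 * 1000 * 1000;   /* 10 ms */
        attr.sched_deadline = 30 * 1000 * 1000;   /* 30 ms */
        attr.sched_period   = 100 * 1000 * 1000;  /* 100 ms */

        /* No glibc wrapper; raw syscall, needs root or CAP_SYS_NICE. */
        if (syscall(SYS_sched_setattr, 0, &attr, 0) != 0) {
            perror("sched_setattr");
            return 1;
        }

        /* From here on, the kernel schedules this thread by its deadline. */
        return 0;
    }

Once the parameters are set, the kernel orders runnable deadline tasks by whichever deadline comes due first, which is what the comments below dig into.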

  • by natehoy ( 1608657 ) on Tuesday October 20, 2009 @11:24AM (#29808671) Journal

    Interesting story a professor of mine told: an old university mainframe was taken offline after decades of operation. A core dump was performed, and investigation revealed a process that had been waiting to run for close to 30 years. Somehow, its priority had been set lower than that of the idle process, and this particular machine's scheduler did not automatically escalate priorities.

    Actually, I think EDF would fix that, at least to an extent.

    Currently, prioritization is done based on, well, priority. So if you take a job and set its priority to lower-than-idle, as you described above, it will obviously never run.

    However, with EDF, you assign (if I understand it correctly) a deadline to each task rather than a priority: "Task X really should get done in the next 50 ms, but Task Y can wait up to 12 hours and no one's going to scream." On a machine with normal peaks and valleys of load, it'll work on the nearest deadline first, and if things queue up, the low-"priority" tasks simply have longer deadlines. Ideally, the system has enough spare capacity to get the longer-deadline stuff done early, when it has time.

    However, on a system with heavy load peaks, eventually those long-deadline jobs are going to get priority simply because their deadlines have nearly arrived. So if you have a job with a 50 ms deadline running every 20 ms and those jobs start queuing up, and then you submit a job with a 12-hour deadline, in 12 hours that job will get the full attention of the system until it's done, because it has the earliest deadline.

    As with all things, this has its advantages and disadvantages. If that job that runs every 20 ms is truly critical, then you're not going to like this scheme - eventually the low-"priority" stuff comes due and takes precedence over your "critical" stuff, because everything is decided by whose deadline arrives first, not by how important each task is. On the other hand, this does prevent the very problem you describe - jobs running at such a low priority that they never get any lovin' at all. (A toy sketch of the earliest-deadline pick follows this comment.)
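
    To make that pick rule concrete, here is a toy sketch with hypothetical types and names - nothing like the proposed kernel code, which would keep deadline tasks in an ordered structure rather than scanning a list: among runnable tasks, the one with the nearest absolute deadline always runs next, so a starved job eventually wins simply because its deadline becomes the earliest one outstanding.

        /* Toy EDF pick: of all runnable tasks, choose the one whose
         * absolute deadline is nearest. No separate notion of priority. */
        #include <stddef.h>
        #include <stdint.h>

        struct task {
            const char *name;
            uint64_t abs_deadline_ns;  /* absolute deadline from a monotonic clock */
            int      runnable;
        };

        static struct task *edf_pick(struct task *tasks, size_t n)
        {
            struct task *best = NULL;
            for (size_t i = 0; i < n; i++) {
                if (!tasks[i].runnable)
                    continue;
                if (best == NULL || tasks[i].abs_deadline_ns < best->abs_deadline_ns)
                    best = &tasks[i];
            }
            return best;  /* NULL if nothing is runnable */
        }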

  • Re:first post (Score:3, Insightful)

    by tepples ( 727027 ) <tepples.gmail@com> on Tuesday October 20, 2009 @11:34AM (#29808869) Homepage Journal

    Thanks, PulseAudio, for exposing those damn drivers. I really hate it when audio servers simply work and just hide defects like this.

    There comes a point beyond which a clusterfuck needs a clusterfix. Should Linus Torvalds just have kept on using Windows 3.1, slogging through and hiding its defects, instead of building Linux? I don't think so. Likewise, programs that need real-time behavior can hide the fact that existing Linux kernels have few provisions for real-time process scheduling by just hogging 100% of the CPU. But the right answer is to improve the scheduler's support for real-time semantics, as proposed in the article.

  • by sjames ( 1099 ) on Tuesday October 20, 2009 @05:23PM (#29814857) Homepage Journal

    Hard realtime is ...HARD.

    To get it right, not only do processes have to be scheduled correctly, but so does I/O. Swapping/paging are out of the question. Even with all that, the hardware has to be appropriate as well. SMM and SMI as found in the x86 world these days are simply unacceptable: you can't be hard real time when there is an interrupt you can't mask, or whose handler is not accessible to the OS - especially when it switches the CPU into a different mode and stays there until it decides it's done. (A sketch of the usual userspace half of this - locking memory and requesting a real-time policy - is at the end of this comment.)

    You also can't have I/O devices that don't guarantee a maximum time for an operation, no matter how often they finish in the "usual" time. This is a problem for drives that may retry a read or write an arbitrary number of times, and may or may not reallocate a sector at the other end of the platter, unless that behavior is fully documented and the application in question has no deadlines too short to allow for it.

    How far from all of that a particular device can stray (mostly for the sake of cost) depends on how demanding the real-time requirements are and how much it matters if a deadline is missed. From there, there are a zillion arguments about whether it's hard real time, soft real time, sorta squishy real time, not real time at all, or cheap consumer crap.

    Technically, if some event has to be polled and responded to within a month or it'll cost a million dollars, it's hard real time, even though an old desktop machine (or a bored night operator) can handle it.
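
    For what it's worth, the usual userspace half of this on stock Linux looks something like the sketch below: lock memory so paging can't happen, then request a real-time scheduling policy. It's a minimal illustration using standard APIs (mlockall, sched_setscheduler); none of it addresses the SMM/firmware or device issues above, which is exactly the point - the kernel and the hardware have to cooperate too.

        /* Common userspace preparation for a (soft) real-time task on stock Linux:
         * pin pages so they can never be swapped out, then ask for a
         * fixed-priority real-time policy. This removes paging and
         * ordinary-scheduler jitter, nothing more. */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
            /* Lock all current and future pages into RAM: no page faults to disk. */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                perror("mlockall");
                return 1;
            }

            /* Fixed-priority FIFO scheduling; needs root, CAP_SYS_NICE,
             * or a suitable RLIMIT_RTPRIO. */
            struct sched_param sp = { .sched_priority = 80 };
            if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
                perror("sched_setscheduler");
                return 1;
            }

            /* ... time-critical loop would go here ... */
            return 0;
        }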
