Deadline Scheduling Proposed For the Linux Kernel

c1oud writes "At the last Real-Time Linux Workshop, held in September in Dresden, there was a lot of discussion about the possibility of enhancing the real-time capabilities of Linux by adding a new scheduling class to the Linux kernel. According to most kernel developers, this new scheduling class should be based on the Earliest Deadline First (EDF) real-time algorithm. The first draft of the scheduling class was called 'SCHED_EDF,' and it was proposed and discussed on the Linux Kernel Mailing List (LKML) just before the workshop. Recently, a second version of the scheduling class (renamed 'SCHED_DEADLINE' at the request of some kernel developers) was proposed. Moreover, the code has been moved to a public git repository on Gitorious. The implementation is part of an FP7 European project called ACTORS and is financially supported by the European Commission. More details are available."
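
For readers who haven't met EDF before: the core rule is simply that, among all runnable deadline tasks, the scheduler always runs the one whose absolute deadline is nearest. The toy C sketch below only illustrates that selection rule; it is not the proposed kernel code, which keeps deadline tasks in a deadline-ordered red-black tree rather than scanning a list.

    /* Toy illustration of the EDF rule: run the runnable task whose
     * absolute deadline comes first.  Illustrative only - the actual
     * SCHED_DEADLINE patches use a deadline-ordered rb-tree, not a
     * linear scan. */
    #include <stddef.h>
    #include <stdint.h>

    struct toy_task {
        const char *name;
        uint64_t    abs_deadline_ns;  /* absolute deadline of the current job */
        int         runnable;
    };

    static struct toy_task *edf_pick(struct toy_task *tasks, size_t n)
    {
        struct toy_task *earliest = NULL;

        for (size_t i = 0; i < n; i++) {
            if (!tasks[i].runnable)
                continue;
            if (!earliest || tasks[i].abs_deadline_ns < earliest->abs_deadline_ns)
                earliest = &tasks[i];
        }
        return earliest;  /* NULL if nothing is runnable */
    }
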
  • by guruevi ( 827432 ) on Tuesday October 20, 2009 @09:36AM (#29806999)

    I used to be into the Linux kernel a couple of years ago, but I haven't really followed it since. What's the difference between these scheduling algorithms, and do they work better than the current scheduler?

  • by jhfry ( 829244 ) on Tuesday October 20, 2009 @09:38AM (#29807019)

    Has a deadline-based scheduler been done before? It seems like an excellent idea for time-sensitive (real-time) processing. I have worked with RT OSes before, iRMX mostly, and always wondered how their scheduling worked.

  • European Projects (Score:4, Interesting)

    by Lemming Mark ( 849014 ) on Tuesday October 20, 2009 @09:41AM (#29807065) Homepage

    The EU-funded projects are somewhat interesting in my experience. They tend to fund both academics and industry researchers, and the projects are more focused on practical results than a typical research-council project. They can still generate research papers, etc., but there's more of an emphasis on producing new code that can actually be *used* to do things that weren't possible before. More academic research, by contrast, often focuses on getting the code just robust enough that papers can be published about it, after which it's frequently forgotten.

    I think this more practically focused work is valuable and would like to see more of it. It is less "valuable" academically, and as such I suspect academics are less inclined to attribute prestige to those who have worked on it. It would be nice to see a bit more glory given to folks who work on these projects (disclaimer: I have done a *very* small amount of work on one myself) as a valid direction alongside industry or academia.

    This mode of development also reminds me a little of some of RMS's writings about how Free Software development could be funded: here we effectively have a government body giving money to worthy causes, as represented by a team of interested experts, to enhance open source software for everyone involved, in reasonably directed ways. Ideally it would be nice to see "get stuff upstream" be a completion goal for these projects; I'm not sure to what extent that is already true.

  • by Chrisq ( 894406 ) on Tuesday October 20, 2009 @09:42AM (#29807095)
    Is this suitable as a general purpose scheduler or is it just for real-time systems?
  • by Anonymous Coward on Tuesday October 20, 2009 @09:46AM (#29807143)

    These algorithms will produce substantially worse overall performance in all workloads. However, they allow absolute deadlines to be set for certain tasks. This is mostly useful for embedded devices -- if you're creating a medical device or a subsystem for a plane, a 20% performance hit to guarantee you don't delay critical tasks for a couple of seconds and get people killed isn't even a decision worth thinking about.

    This would make Linux a legitimate real time ("RT") kernel. There are RT Linuxes already, but they suck to work with -- I believe RTLinux (one of the RT variants), for example, requires all RT tasks to be in kernel-space.

    The upshot is that this is huge for Linux in certain business areas (and other RT OSes are currently quite pricey), but totally useless for your desktop or home server.
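
To make the comment above's point about "absolute deadlines for certain tasks" concrete, here is a minimal user-space sketch of how a periodic task declares a runtime, deadline, and period instead of a priority. It uses the sched_setattr() syscall that mainline deadline scheduling (Linux 3.14 and later) eventually settled on; the 2009 patches discussed in the article exposed a different interface, so treat this as an illustration of the programming model rather than of those exact patches.

    /* Minimal sketch: ask the kernel to run the calling thread under
     * SCHED_DEADLINE with 10 ms of CPU time, due within 30 ms, every
     * 100 ms.  Assumes a kernel/glibc new enough to provide
     * SYS_sched_setattr, and requires root or CAP_SYS_NICE. */
    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #ifndef SCHED_DEADLINE
    #define SCHED_DEADLINE 6
    #endif

    struct sched_attr {               /* mirrors the kernel's uapi layout */
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;          /* used by SCHED_OTHER/BATCH */
        uint32_t sched_priority;      /* used by SCHED_FIFO/RR */
        uint64_t sched_runtime;       /* ns of CPU time per period */
        uint64_t sched_deadline;      /* ns, relative deadline */
        uint64_t sched_period;        /* ns, activation period */
    };

    int main(void)
    {
        struct sched_attr attr = {
            .size           = sizeof(attr),
            .sched_policy   = SCHED_DEADLINE,
            .sched_runtime  = 10ULL * 1000 * 1000,   /* 10 ms  */
            .sched_deadline = 30ULL * 1000 * 1000,   /* 30 ms  */
            .sched_period   = 100ULL * 1000 * 1000,  /* 100 ms */
        };

        if (syscall(SYS_sched_setattr, 0, &attr, 0) < 0) {
            perror("sched_setattr");
            return 1;
        }

        /* ... the periodic real-time work would run here ... */
        return 0;
    }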

  • by conspirator57 ( 1123519 ) on Tuesday October 20, 2009 @09:49AM (#29807177)

    Yes, it performs better. For certain workloads. With correct usage.

    Like anything else, it's a tradeoff. In this case you (or your application developer) have to be aware of how the scheduler works and be able to assign valid relative priorities and deadlines. With current schedulers you might have to worry about priority, but usually you don't. You also have to work out utilization and negotiate fallback compute requirements based on the user's workload (other apps competing for the resource).

    In short, this scheduler is immediately useful for people making appliances (special-purpose computers, e.g. a network firewall/router/VoIP box). It is less immediately useful for the desktop user, but I could imagine circumstances that would make it very useful. The reason is that the appliance designer knows the compute workload fairly well and can take the time to assign priorities and deadlines for each process under each condition. Once tools exist to automate this process on the fly, desktop users will be able to open a bunch of crap and never have to worry that their VoIP app is going to stutter.
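
The "work out utilization" step the comment above mentions has a well-known closed form for EDF on a single CPU: the classic Liu and Layland result says a set of periodic tasks is schedulable if the sum of runtime/period over all tasks is at most 1. A simplified sketch of that admission check is below; the kernel's own admission control is more involved (per-CPU bandwidth caps, multiprocessor effects), so this only captures the textbook reasoning.

    /* Sketch of the textbook uniprocessor EDF admission test:
     * schedulable if sum(runtime_i / period_i) <= 1.0.  Simplified -
     * real admission control must also handle per-CPU bandwidth
     * limits and task migration across CPUs. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct edf_params {
        uint64_t runtime_ns;   /* worst-case CPU time needed per period */
        uint64_t period_ns;    /* how often the task is released */
    };

    static bool edf_admissible(const struct edf_params *set, size_t n)
    {
        double utilization = 0.0;

        for (size_t i = 0; i < n; i++)
            utilization += (double)set[i].runtime_ns / (double)set[i].period_ns;

        return utilization <= 1.0;
    }

For example, three tasks needing 10 ms every 100 ms, 20 ms every 50 ms, and 30 ms every 150 ms add up to a utilization of 0.1 + 0.4 + 0.2 = 0.7, so on one CPU they can all meet their deadlines under EDF, with capacity left over for best-effort work.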

  • by Lemming Mark ( 849014 ) on Tuesday October 20, 2009 @10:02AM (#29807359) Homepage

    Deadline scheduling is well established and has been done many times, in many flavours on other OSes. It's probably even been done on Linux before. But if this one gets upstream with the blessing of the kernel community, it would enhance Linux for everyone rather than just those running particular kernel patches.

    There seems to be a lot of realtime-related work happening on Linux (see PREEMPT-RT, a somewhat separate but related area of work), which is interesting - I would have said that the conventional wisdom was that large, general-purpose OSes cannot be realtime-ified. It seems like certain parties are determined to prove this wrong - and it looks to me like getting to "good enough" realtime behaviour will make large segments of users happy, even though it's perhaps unlikely to ever replace ground-up realtime OS designs.

  • by Nadaka ( 224565 ) on Tuesday October 20, 2009 @10:11AM (#29807497)

    I think the question he is posing is:

    Is this type of scheduler perfect? Capable of handling real-time work, batch jobs, and the mixed fruit bowl of jobs on a typical desktop or server?

    The answer is, probably not. There really is no such thing as a perfect scheduler. Different kinds of workloads have different requirements.

    A busy real-time scheduler will tend to starve low-priority jobs. This can become an issue if those low-priority jobs manage to grab limited resources they are never given the time to use. As those resources dwindle, real-time jobs become harder to satisfy, and the low-priority jobs must either be terminated or given time to release those resources. I can't say for certain, but it looks like EDF would have the same kinds of issues.

    Interesting story a professor of mine told: An old university mainframe was brought offline after decades of operation. A core dump was performed and investigation revealed that there was a process that had been waiting to run for close to 30 years. Somehow, its priority was set to be lower than the idle process and this particular machine did not have automatic escalation of priority in its scheduler.

  • by shish ( 588640 ) on Tuesday October 20, 2009 @10:37AM (#29807899) Homepage

    These algorithms will produce substantially worse overall performance in all workloads.

    Overall performance as in userspace apps will take 20% longer to perform CPU-bound tasks; or overall as in the scheduler will perform much worse ("the scheduler will now take 0.02% of CPU time, having previously taken 0.01%")? I think it's pretty important to specify over all of what...

  • by RiotingPacifist ( 1228016 ) on Tuesday October 20, 2009 @10:40AM (#29807941)

    For those that didn't catch it, CK (Con Kolivas) is back with the Brain Fuck Scheduler [wikipedia.org], which improves desktop interactivity by ignoring the past. I CBA to recompile, but the patches are here [kolivas.org], and while the chances of it getting into mainline are "LOL", it is being adopted by Android.

  • by koiransuklaa ( 1502579 ) on Tuesday October 20, 2009 @11:59AM (#29809371)

    Got a reference for "being adopted by Android"? Just an experimental git tree on android.git.kernel.org doesn't prove much...

  • by Lisandro ( 799651 ) on Tuesday October 20, 2009 @02:59PM (#29812539)

    The upshot is that this is huge for Linux in certain business areas (and other RT OSes are currently quite pricey), but totally useless for your desktop or home server.

    I don't think so. If this means we'll get a pluggable scheduler architecture in Linux, I'm all for it.

    I thought it was all hogwash until I tried Con Kolivas' BFS patch [kolivas.org]. The difference it makes on a desktop system is notable, and it clearly demonstrates that having a single scheduling solution for a kernel aimed at everything from embedded systems to desktops to 4096-CPU servers is insane.

  • by BikeHelmet ( 1437881 ) on Tuesday October 20, 2009 @09:35PM (#29817863) Journal

    You can take your superior benchmarks and shove it. On older hardware, the difference in responsiveness with BFS is absolutely astounding.

    Those tests are multi-processor, multi-core runs, which is not what BFS was designed for. I would ask you to bench it on a single-core, a dual-core, a tri-core, and a quad-core CPU before making such statements.

    In my own tests on a shitty VIA C7 with a horribly slow (10MB/sec) Quantum EIDE (I think) drive, BFS cut program launch times almost in half. I'd place a bet that CFQ is doing some stupid shit optimized for high-performance servers.

    And at least a few real-world tests heavily favour BFS [phoronix.com]. But I personally despise meaningless numerical benchmarks. I much prefer watching desktop responsiveness soar on old hardware.
