Linux Gets Completely Fair Scheduler
SchedFred writes "KernelTrap is reporting that CFS, Ingo Molnar's Completely Fair Scheduler, was just merged into the Linux kernel. The new CPU scheduler includes a pluggable framework that completely replaces Molnar's earlier O(1) scheduler, and is described as trying to 'model an "ideal, precise multi-tasking CPU" on real hardware. CFS tries to run the task with the "gravest need" for more CPU time. So CFS always tries to split up CPU time between runnable tasks as close to "ideal multitasking hardware" as possible.' The new CPU scheduler should improve the desktop Linux experience, and will be part of the upcoming 2.6.23 kernel."
crap (Score:5, Funny)
Re:crap (Score:4, Funny)
Kernel building is pretty fast (Score:4, Insightful)
If you really want a rough time, see how long it takes to rebuild a different OS.
Re: (Score:3, Funny)
hmmmm.......... (Score:4, Funny)
Re:crap (Score:4, Interesting)
Re:crap (Score:5, Interesting)
Oh, and the abusive app that likes to make X servers choke? Firefox. Ugh. Hate that thing.
Re:crap (Score:5, Funny)
Re: (Score:3, Informative)
Smart scheduling is no competition for fast code, and KDE wins hard by us
Re: (Score:3, Interesting)
I have never seen any instance where Cairo made something faster.
Well, Inkscape seems to think [inkscape.org] that Cairo makes their rendering faster:
"In this version, Inkscape starts using the cairo library for rendering. It is now used for outline mode display which, thanks to using cairo and other optimizations, redraws faster by about 25%. More impressive are memory savings: thanks to cairo, in outline mode Inkscape now takes only about 50% of the memory used by 0.45 for the same file."
Re:crap (Score:5, Informative)
Well, no offense, but I'm glad it isn't you that's in charge of making important decisions in that case. I realize that you were probably less than half-serious, but I would hate for the Linux community to ever be in the stage where "attract more masses" is a goal that diverts effort from interesting projects like this one.
With that said, what's wrong with Qt/KDE, particularly the new versions (the ones still in Alpha)? I'd say it is very much a "non-ugly GUI lib", and a "sane windowing environment".
Re: (Score:3, Interesting)
Well, maybe you just need to learn how to write efficient Qt code then... Qt is not the fastest UI library on earth, but it is *not* 'slow as molasses'. We use it on hardware ranging from Pentium-II@500Mhz, to UltraSparcII@400Mhz to dual Opterons, and even on the lowest-end hardware it work
Re: (Score:3, Informative)
Equal opportunity, affirmative action scheduler (Score:5, Funny)
Re: (Score:3, Funny)
Does this mean all apps will play nice? [about.com]
Re:Equal opportunity, affirmative action scheduler (Score:5, Funny)
Except Basic. Nobody likes basic.
Re:Equal opportunity, affirmative action scheduler (Score:5, Funny)
Re:Equal opportunity, affirmative action scheduler (Score:5, Funny)
You mean for.... (Score:5, Funny)
Then there's the American Dream scheduler where you get priority if you work hard at it. You can't just inherit your priority like some rich child process.
Re:You mean for.... (Score:4, Insightful)
Re: (Score:3, Funny)
Also, in Soviet Russia, nice tasks preempt YOU!
Re:Equal opportunity, affirmative action scheduler (Score:5, Funny)
Re:Equal opportunity, affirmative action scheduler (Score:5, Funny)
It's Worse Than That (Score:5, Funny)
A completely fair scheduler for geeks? I can just see it:
Re: (Score:2)
What I'd rather have... (Score:2)
I think I speak for geeks everywhere when I say that I'd rather have the beautiful girl wooing me!
Neato (Score:4, Insightful)
Re:Neato (Score:5, Funny)
Re: (Score:3, Insightful)
Re:Neato (Score:5, Interesting)
Process Neutrality? (Score:5, Interesting)
I know enough about process scheduling to fill a ketchup cup at the nearest burger joint, but it struck me that this sounds like the debate about "network neutrality" vs "tiered service." The O(1) scheduler was supposed to be a very generic decision-making system that made decisions in a very agnostic way (to simplify the work down to a predictable, consistent order of work). This CFS strikes me as a system which will have a much higher level of complexity and context awareness, which sounds like some processes will get more than others. The intention is to make it fair in the real world but not necessarily balanced, since not all processes are alike in their needs or expectations of task switching.
This is just rambling on, and admittedly it may be straining a metaphor too far, so don't go crazy biting my head off for not knowing all things about the kernel. See 'ketchup cup' above.
Re:Process Neutrality? (Score:5, Informative)
A tiered internet is something else entirely.
Re: (Score:3, Interesting)
The analogy with a tiered internet is fine, provided you look back far enough in the history of computers.
Before personal computers became common, people got a lot of their work done by renting time on mainframes. People that wanted cheap CPU cycles had their jobs wait for spare cycles. Those that needed immediate answers paid more and their jobs got a
Re:Process Neutrality? (Score:5, Informative)
The old scheduler was filled with huge chunks of complex code to try to guess at which processes were interactive and such, and would then specially treat those processes differently when scheduling.
The CFS does none of that. It schedules all processes the same, in a completely fair manner, and doesn't have any special logic in it that tries to classify processes at all, other than nice levels.
The part yet to be merged is the process grouping, which again isn't anything like the interactivity guessing code. It's just a simple way to say "these processes belong together, so when you do the CPU scheduling, treat them as a single group." It's basically just a weighting mechanism with a logical container.
Re: (Score:2, Insightful)
This is not comparable to tiered network service because tiered network service makes decisions on what data packets to carry/drop based on political, legal, and business policies.
As an example, CFS will probably place a low priority on background backup tasks and a hi
Re: (Score:2)
Do you have enough RAM? That sounds more like thrashing than any kind of CPU scheduling issue.
Re: (Score:3, Informative)
It works quite well. I use Con Kolivas' SD scheduler (on which CFS is based), and in a similar situation (with heavy I/O and numerous power-hungry apps), it performs exceedingly well.
Ingo tests CFS with a kernel make -j50 - just to give you an idea of what we're shooting for here.
Prediction ... (Score:3, Informative)
I've sort of gazed for a few seconds at the CFS articles and the following phrase caught my attention the most
But more importantly, I think the factor which'll probably sway me the most is /proc/sys/kernel/sched_granularity_ns. Except I've been salting my config options with one true test [slashdot.org] ... that kind of thing makes you paranoid about random tune-ups :)
For the attention of karma whores (Score:5, Funny)
Steal your insightful comments from http://linux.slashdot.org/article.pl?sid=07/04/22/1335255 [slashdot.org]
Re:For the attention of karma whores (Score:5, Insightful)
"Kernel trap has a nice summary of what is going on behind the scenes to change the Linux Scheduler. The O(1) Linux scheduler is going to be changed so that it is fair to interactive tasks. You will be surprised to know that O(1) is really too good not to have any side-effects on fairness to all tasks."
Isn't this called Cron? (Score:4, Funny)
I thought Linux used Cron as a scheduler?
Re: (Score:2, Informative)
This is for scheduling CPU resources in real time. It decides whether Firefox or Apache is going to be executed in the following split second.
Why... (Score:5, Funny)
"Why isn't my process getting more CPU time?"
"Well, Sir, it's a Completely Fair Scheduler."
Found the punchline (Score:2)
how it's possible? (Score:2, Informative)
really? and how is it supposed to do that wonderful thing?
ps: i'm just curious and a noob, so please don't smash me...
Re: (Score:3, Informative)
The tradeoff with short timeslices is that there's more overhead due to context switches and so the overall time s
Re: (Score:2)
Re:how it's possible? (Score:4, Interesting)
Actually, no, Gnome and KDE aren't the troublemakers. It turns out that certain X drivers are poorly written and X preempts processes vying for CPU. CFS helps improve the situation - almost to the point where you don't notice it.
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
Right now, most applications get "time", even if they don't need it... so, you are "wasting time" being a "good kernel/waiter" by going to your customer (process), and asking if he needs something more, just to wait for a "no" as answer.
Seriously? What, the kernel switches to a process, the process checks its environment and figures out that the event it was waiting for hasn't happened yet, and goes back to sleep? I can't believe that a project as mature as the Linux kernel would use a scheduler like that. That sounds more like the result of trying to squeeze a scheduler into 256 bytes so you can lock it into two cache lines. I mean seriously, it's like cooperative multitasking with preemption...
I know it's a little "old school", but
Re: (Score:3, Insightful)
Re:how it's possible? (Score:5, Informative)
No, CFS does not do that, and that would be quite silly to do indeed :-)
CFS keeps tasks that woke up in the runqueue, and allows them to run immediately in the typical case - just like the old scheduler did.
Where CFS differs from the old scheduler is mainly the case when there are more tasks runnable than there are CPUs/cores available. In such cases, on any modern multitasking kernel, the scheduler has to decide which task to run, and in what order and weight to run those tasks, with the goal to provide to the user the happy illusion of multiple, snappy applications running at once.
The old O(1) scheduler decided the "order and weight" of runnable tasks based on a pretty elaborate set of heuristics. The rules are pretty complex, but it mostly boils down to 'sleepers get more CPU time than runners'.
(sidenote: CFS is an O(1) scheduler too for all practical purposes, with an upper limit of ~15 algorithmic steps worst-case)
Now those heuristics worked pretty well for 15 years (those sleep-heuristics were always part of Linux scheduling, the O(1) scheduler i wrote inherited them from the original O(N) scheduler), but good is never good enough in the land of Linux ;-)
How does CFS work? CFS follows an approach similar to Con Kolivas' SD project: a scheduler core that instead of heuristics uses "fair scheduling" to achieve interactivity. Runnable tasks are scheduled in a painstakingly fair way (and that seemingly simple concept alone is pretty hard to achieve in a general purpose kernel).
The simplest case is when there are only CPU-intense tasks running. For example, if there are 8 CPU-intense tasks running on the CPU, each task gets exactly 12.5% CPU time. If you watch how much CPU time the tasks get it will be 12.5% long-term too, with no deviations, with no skewing caused by other tasks running inbetween.
The more complex case is when applications schedule frequently (and that is the case on most desktops and servers), so CFS extends the concept of 'fairness' to sleeping tasks too. CFS accounts not only 'runners', but 'sleepers' too. Tasks that sleep/run frequently are still given their full 'fair share' of the CPU, up to the limit they could have gotten were they not sleeping at all.
So for example, if you have two tasks on a CPU, one a 100% CPU hog, the other one an application that sleeps/runs 50% of the time - both will get 50% of the CPU in CFS. Under the strict 'runner fairness' approach (which for example SD is following), the 100% CPU hog would get ~66% of CPU time, the sleeper would get ~33% of CPU time.
To achieve 'sleeper fairness', CFS runs the (ex-)sleeper task sooner, to offset its disadvantage of not hanging around on the CPU all the time. Or in other words: interactive tasks (tasks that sleep often) will get to the CPU with lower latencies. Which is the holy grail of good desktop scheduling :-)
(granted, CFS does a whole lot more than that, its patch-impact size is 3 times larger than SD. CFS is not a single patch but a series of 50 patches, which also modularize kernel scheduling policy implementation (note, it does not modularize the scheduler itself a'la PlugSched), offer "group scheduling" (nifty thing for containers/virtualization and large systems, written by Srivatsa Vaddagiri of IBM), offer precise CPU usage accounting to /proc (used by CPU/task monitoring tools), and much more. We decided to turn Linux scheduling upside down, which gave me the easy excuse^H^H^H opportunity to extend the scheduler's design a bit more ;-)
Politics are destroying Linux too (Score:5, Insightful)
Too bad that the NIH syndrome hit Linux Kernel development too, and Ingo Molnar, after blocking all the attempts to merge SD into mainline because "it couldn't be done", uses the same idea, whips out his own scheduler calling it "Completely Fair", and woosh it gets merged (easily, given that Ingo Molnar himself is the maintainer of that part of the kernel).
Con Kolivas is (obviously and justifiably) disgruntled, to say the least, he stops working on the SD project, and Linux loses an excellent developer because of politics.
Angsty nerds are not destroying Linux (Score:5, Insightful)
Back when I was a maintainer, I guess I rewrote half the patches I got. Most submitters are just happy to see the functionality in there, but there were a few people with fragile egos who took it as a personal insult. That happens, life goes on, and usually the fragile egos grow more robust with time, and learn that developing what amounts to a prototype of the final code is also a valuable contribution.
Re:Politics are destroying Linux too (Score:5, Informative)
I'd understand if Ingo doesn't want to comment on this; it was a painful clash between two competent and strong characters, which expanded to other parties accusing Ingo of elitism and plagiarism.
For reference, this was archived on kerneltrap.org, and I believe it was covered in an earlier
For what it's worth, here's the "facts" as I see them :
1/ It looks as though Ingo *and* Linus refused Con's original patch on certain grounds which weren't clearly understood/communicated. Ingo, however, stated that in general he was "quite positive about the staircase scheduler." He proceeded to test it and gave Con feedback.
2/ Con's work was good enough that Ingo about-turned on his earlier, negative stance about fair schedulers and was inspired to go and develop something very similar (but which fitted better with the overall kernel architecture). It's clear that this was predominantly Ingo's own code (hence no plagiarism), and Ingo credits Con in the code comments for coming up with the general approach.
3/ Somewhere in the middle of the ensuing discussion on lkml there are complaints that Con wasn't kept in the loop. However, Ingo cites examples where he *did* communicate to Con; by Con's own admission he was very ill (hospitalised) during a critical period.
4/ Parent suggests that Con has since stopped contributing to the kernel. I don't see any indication of this in the kernel thread - in fact Con's post gives every indication that he'll continue to contribute.
My analysis :
I put the situation down to an applied case of "standing on the shoulders of giants". It's very rare that anyone creates something completely new, and in large projects this can occasionally generate friction.
Con was in a susceptible condition when the CFS code was released, had a grumble on the list, but generally acted pretty maturely. Ingo credited Con's contributions wherever feasible, clarified this in discussion, and stayed polite and friendly throughout. End of story.
What's pretty disgusting is the partisan name-calling that follows in the KernelTrap comments. "Shame on Ingo", "Con is acting like a baby", etc. I hope that this doesn't generate bad feeling between Molnar & Kolivas, because after Con's original complaint on lkml and Ingo's response things seemed to be settled.
No doubt in future Ingo will take an increased amount of care about vetting other people's code, not promoting his own to the exclusion of others, and crediting other people in his own work (note: I don't claim that he has been lacking in this respect in the past). Con, likewise, will doubtless be mollified when his contributions are more readily recognised as being of merit in future. In the meantime Linus has emphasised that competition between developers is a *good* thing to a reasonable extent, as it directly increases motivation.
Now, I suggest that everyone else with a ready opinion hold their breath a while, and let them all get on with coding.
Conrad
Re: (Score:3, Informative)
No, Kolivas has definitely withdrawn from kernel development. From his -ck mailing list post:
Re: (Score:3, Informative)
Hm, that seems to be more of a VM/IO-scheduling problem than a process scheduling problem.
Did you have a chance to try Peter Zijlstra's excellent per-bdi patches, as suggested in the bugzilla?
But in general, CFS ought to improve such workloads too (to a limited degree), in terms of not making any IO starvation worse by adding CPU starvation to the mix :-)
Re: (Score:3, Informative)
Suppose the machine ran for a total of 100 seconds.
50 seconds of that time, it spent running process A.
50 seconds of that time, it spent running process B.
The 50 seconds of A may be distributed differently by different algorithms.
In some algorithms, A will run for 50 seconds, and then B will run for 50 seconds.
Obviously, this is not the best when you want some interactivity...
In other algorithms, the running of A and B will be interspersed, for instance, A may run for 200ms, followed by B for 200ms, etc. until
Re: (Score:2)
Right now, most applications get "time", even if they don't need it... so, you are "wasting time" being a "good kernel/waiter" by going to your customer (process), and asking if he needs something more, just to wait for a "no" as answer.
Yeesh. How on earth did you get the idea that you should be commenting about how the Linux scheduler works, since it appears that you don't know what you are talking about. Basically, only processes with task->state == TASK_RUNNING get CPU time. Processes that are waiting for I/O to complete or for a signal to be delivered don't have task->state == TASK_RUNNING.
Re: (Score:2)
Then you will definitely appreciate the new scheduler. It improves nearly all common cases, and (based on my experience with Con's SD scheduler, which CFS is "inspired" from) it makes the whole system snappier, more responsive, and more usable. It's really quite a night-and-day difference.
Can anyone compare this to Jonathan Lemons Kqueue? (Score:4, Interesting)
We saw crazy performance improvements implementing kqueue in BSD; would love to see something that good at handling many sockets in standard Linux.
Re:Can anyone compare this to Jonathan Lemons Kque (Score:5, Informative)
On the other hand, Linux has epoll, which fills the same role as kqueue.
In my experience, epoll is at least as good.
http://www.kegel.com/c10k.html#nb.epoll [kegel.com]
Now MacOS X needs to fix their kqueue bugs, and the world will be a happy place.
Questions (Score:2, Interesting)
--mm line (Score:5, Informative)
I've been running with it for some time, and I really like it. I'm still not sure if it is better than Con Kolivas' SD scheduler in his patchset, but we'll see.
I liked the old O(1) scheduler (Score:2)
Anyone with kids can tell you... (Score:5, Funny)
Re: (Score:3, Funny)
Re:Anyone with kids can tell you... (Score:4, Funny)
Completly fair = communist? (Score:3, Funny)
Re: (Score:3, Funny)
Improve how? (Score:3, Interesting)
Could someone outline the concrete problems Linux desktop scheduling has right now that are visibly resolved by CFS?
I'm not a heavy user of the Linux desktop (just servers on the shell), but it was always my experience that Linux handles simultaneous multimedia tasks (for example) better on the same hardware than Windows.
While I attribute this more to architectural problems on the Windows side (such as: it's quite easy for an app to stall Explorer.exe or vice versa, and no amount of scheduling helps there), I'm curious to see if there's a tangible difference someone could describe with CFS running desktop software in Linux.
Re: (Score:3, Informative)
Basically right now the scheduler is unbiased, giving ticks to all applications regardless of their need for processing time. An example of this would be in X windows when you have little taskbar icons that rarely do anything, vs having cd burning software running.
The scheduler will quickly learn that most of the time it asks the taskbar application if it needs to do anything, it doesn't, and that most of the time it asks the cd writing software to d
Re: (Score:3, Informative)
How did this rubbish get modded informative ? Is it someone's idea of a joke ? Or do people simply apply the "info
Re:Improve how? (Score:4, Informative)
CFS and Con Kolivas' SD both aim to improve interactivity of processes under high load - in particular, the goal was to reduce scheduling latency for applications which have realtime needs - like audio players. Con Kolivas has been maintaining variations on his low-latency Staircase design for several years with precisely that goal in mind.
On the desktop, it improves latencies for (for example) music players and 3D games, improving performance and eliminating jitter, lag, and general choppiness. Both SD and CFS achieved this under loads as high as 50.
On the server, it can have several benefits, including improved time-to-network latencies. They both want and need test cases for servers that show no detrimental effects. If you want to help, you can try out CFS on a server and report to Ingo if there are performance or latency issues.
Re: (Score:3, Interesting)
If I run 50 processes that are spinning, each one of them will get just as much CPU time as
Poor attribution (Score:4, Insightful)
So little credit is given to Con Kolivas, whose Staircase Deadline scheduler (a more mature and refined design than CFS) spurred Ingo to finally improve his scheduler (which he wrote on the spot because, apparently, Con's scheduler wasn't good enough for him).
And all Con gets is a minor footnote.
Re:Poor attribution (Score:5, Informative)
[ck] It is the end of -ck [bhhdoa.org.au]
This is pretty sad for linux kernel development.
Re:Poor attribution (Score:4, Informative)
Re: (Score:3, Insightful)
Besides, Con clearly aired his side of the story in public. Are you saying Linus shouldn't be given the opportunity t
Re:Poor attribution (Score:5, Informative)
> And all Con gets is a minor footnote.
I'm a kernel developer myself and quite surprised you see it that way.
Let's take a look at the kernel code:
1) Ingo credited Con for the "fair scheduling" approach right on the first page of kernel/sched.c. That's the most prominent place you can get credited for working on the Linux scheduler:
* 2007-04-15 Work begun on replacing all interactivity tuning with a
* fair scheduling design by Con Kolivas.
2) He credited Con for a line of code that he added to CFS from SD, in kernel/sched.c
* This idea comes from the SD scheduler of Con Kolivas:
This is the only SD code in CFS - the two designs and approaches are quite different.
3) He credited Con in Documentation/sched-design-CFS.txt
I'd like to give credit to Con Kolivas for the general approach here:
he has proven via RSDL/SD that 'fair scheduling' is possible and that
it results in better desktop scheduling. Kudos Con!
4) Finally he credited Con in the CFS commit log as well:
commit c31f2e8a42c41efa46397732656ddf48cc77593e
Author: Ingo Molnar
Date: Mon Jul 9 18:52:01 2007 +0200
sched: add CFS credits
add credits for recent major scheduler contributions:
Con Kolivas, for pioneering the fair-scheduling approach
Peter Williams, for smpnice
Mike Galbraith, for interactivity tuning of CFS
Srivatsa Vaddagiri, for group scheduling enhancements
Signed-off-by: Ingo Molnar
I don't see much more places, where credit could be documented.
tglx
Re: (Score:3, Informative)
Basically Ingo Molnar, the author of CFS, who is also the maintainer of the scheduler in the kernel, opposed the inclusion of the competing SD scheduler from Con Kolivas for years. Then he claimed that he was just suddenly inspired to whip up a new scheduler that addresses the exact same problems. He then did so in "62 hours".
If you start at this point and read the next 20 or so
Does this mean that the O(1) scheduler was bad? (Score:2)
Re: (Score:3, Informative)
The Linux O(1) scheduler has been around since 2002.
It's pretty good, but there are corner cases where you can fool it. For example, if a process classified as interactive goes CPU-bound,
CFS vs. O(1) (Score:5, Informative)
(disclaimer, i'm the main author of CFS.)
I'd like to point out that CFS is O(1) too.
With current PID limits the worst-case depth of the rbtree is ~15 [and O(15) == O(1), so execution time has a clear upper bound]. Even with a theoretical system that can have 4 million tasks running at once (!), the rbtree depth would have a maximum of ~20-21.
The "O(1) scheduler" that CFS replaces is O(140) [== O(1)] in theory. (in practice the "number of steps" it takes to schedule is much lower than that, on most platforms.)
So the new scheduler is O(1) too (with a worst-case "number of steps" of 15, if you happen to have 32 thousand tasks running at once(!)), and the main difference is not in the O(1)-ness but in the behavior of the scheduler.
Re: (Score:2, Interesting)
Maybe I'm remembering the worst case behaviour of rbtree wrong.
Re:CFS vs. O(1) (Score:5, Informative)
Frankly, big-O notation isn't a very good way to describe scheduler performance. Execution time under common loads, and maybe an extreme case, would be better. Who cares about an O(1) scheduler that always takes 1 second to schedule the next task?
Re:CFS vs. O(1) (Score:4, Informative)
This is kind of a silly thing to say. I mean, all terminating algorithms on a finite machine are O(1) ultimately.
For example, your 1 gig machine only has 2^(1024*1024*1024*8) states it can go through to reach an answer, not including disk IO... and as we all know, O(2^[1024*1024*1024*8]) =~ O(10^2585827972) = O(1).
Re: (Score:3, Informative)
Re:CFS vs. O(1) (Score:4, Insightful)
Re:CFS vs. O(1) (Score:4, Interesting)
Re:CFS vs. O(1) (Score:4, Insightful)
(To answer your question: the 20-21 comes from other limits to the task space - right now we are still limited to 32k pids.)
Yes, you are right, operations on an rbtree of an arbitrary data structure are of course an O(log2(N)) algorithm, no argument about that.
I know what the mathematical meaning and definition of the big/little ordo/theta notations is (probably better than i should ;), I only wanted to point out the fact that an O(log2(N)) algorithm for most data structures in the kernel (or elsewhere on today's computers) is equivalent to O(1) in practice, especially if N is fundamentally limited to 15 bits like in this case!
The main purpose of the ordo/theta notations is to be able to talk about and compare the performance (worst-case/best-case/average-case) qualities of algorithms. Sticking to their strict mathematical definition in cases where it departs from their original purpose results in worse software :)
And talking about big ordo differences between algorithms operating in finite machines still makes sense (naturally): for example, O(sqrt(N)) is not equivalent to O(1) in practice - it can still be very large, even with a pretty limited N. O(N) is also obviously very relevant in practice, even on very limited machines. But the difference between O(log2(N)) and O(1) is insignificant in most cases, and in fact it is deceiving in this case. (as i pointed it out with the O(140) example.)
This is politics, not programming (Score:4, Interesting)
This shows the black side of open source. Con developed SD in the open and Ingo stole his ideas. It was only after people started pointing out that CFS looked _very_ similar to SD that Ingo even admitted that the design was based on Con's SD work.
The only reason CFS is in the kernel and not SD is politics.
Of course.. (Score:5, Funny)
Re: (Score:2, Interesting)
Of course these applications have had years of tuning under Solaris, so it's not an entirely accurate example...
Re:Cool (Score:4, Insightful)
Re: (Score:3, Interesting)
I do hope this scheduler will make things even better: graceful degradation and responsiveness in one. Might make it the ideal OS for my needs (I now have Linux on the desktop and FreeBSD on the servers).
Ok, here's your Microsoft bash (Score:4, Funny)
You can download it here. [cygwin.com] Screenshots here. [khngai.com]
Re: (Score:2, Informative)
Cron schedules tasks to execute at specified times. This article refers to the kernel's CPU scheduler which determines which running process gets to use the CPU at any given moment.
Re: (Score:3, Informative)
The man page is worthless (and if the universe had any sense of justice many of the Linux man pages would be rewritten).
If one has a shell command file, "loop" containing...
EXPR=1; while true; do EXPR=$((EXPR + 1)); done
and one says:
nice -19
then CPU usage goes to 100% and a glance at the nice column on the System Monitor reveals that a shell is running "loop" with a nice value of "19", i.e. the system is quite respo
Re: (Score:3, Insightful)
1) No I/O awareness. When copying a bunch of big files around, I want that process to have the lowest possible priority, and not interfere with other system activities, like opening a new program, or doing small I/Os. Bottom line: give bulky transfers idle priority.
2) Lack of idle priority. I want to be able to run a process that only gets CPU time if there's nothing else to do. Even with the lowest possible priority, it will still eat some pre