The Really Fair Scheduler
derrida writes "During the many threads discussing Ingo Molnar's recently merged Completely Fair Scheduler, Roman Zippel has repeatedly questioned the complexity of the new process scheduler. In a recent posting to the Linux Kernel mailing list he offered a simpler scheduler named the 'Really Fair Scheduler' saying, 'As I already tried to explain previously CFS has a considerable algorithmic and computational complexity. This patch should now make it clearer, why I could so easily skip over Ingo's long explanation of all the tricks CFS uses to keep the computational overhead low — I simply don't need them.'"
Coming soon to a linux kernel near you: (Score:3, Funny)
Fuck this. (Score:5, Funny)
Re:Fuck this. (Score:4, Funny)
Re: (Score:3, Funny)
Why not swappable? (Score:3, Interesting)
After all, isn't that the idea of open source software -- may th
Re:Why not swappable? (Score:5, Informative)
Re: (Score:3, Interesting)
It's doable (easy, even), it doesn't require significant investment from a kernel maintenance perspective, and it cuts through a fair bit of politicking.
Re: (Score:2)
Re: (Score:3, Interesting)
I expect that there would be a per
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
Not quite accurate (Score:3, Interesting)
Re:Not quite accurate (Score:4, Funny)
Re: (Score:2, Informative)
As I said, this is more about management and politics than a choice based on technical details. Personally I don't care which scheduler won, but it wasn't
Re:Coming soon to a linux kernel near you: (Score:4, Funny)
Re:Coming soon to a linux kernel near you: (Score:5, Funny)
Re: (Score:3, Funny)
Oh, FFS.
It's time for a paradigm shift (Score:2)
Still waiting for the IFS (Score:5, Funny)
Re:Still waiting for the IFS (Score:5, Funny)
God forbid we drop the lower-case "i" naming convention. It stands for "interwebs compatible".
Re: (Score:3, Funny)
Where else would you keep your iLawnmower?
Re: (Score:2)
We could call it the "International Scheduler Fair".
Re:Still waiting for the IFS (Score:5, Funny)
Re:Still waiting for the IFS (Score:4, Interesting)
Re: (Score:2)
Upgrade to 1GB of RAM (2GB on Intel) and you won't see it anymore. (usually.)
-:sigma.SB
Re: (Score:2)
I see the pinwheel many times a day, and that's on a fully tricked out MacBookPro.
Re: (Score:3, Informative)
Upgrade to 1GB of RAM (2GB on Intel) and you won't see it anymore. (usually.)
-:sigma.SB
Depends a lot on your situation.
Even with many, many gigabytes of RAM there are plenty of situations where Apple's applications (or the OS) just sit there and do nothing (or spin that pinwheel like they've nothing better to do) and you wonder if they crashed or what... Often enough, no. They're just doing the wrong or stupid thing and eventually recover. How often you see it depends a lot on your usage pattern.
None of these (near as I can tell) have anything to do with the scheduler. Just shoddy code and
On my old work machine (Score:2)
Re: (Score:2)
Oh wait. Wrong OS. Sorry.
FWIW, I sometimes get the pinwheel on my 1GB MacBook while I'm in Firefox. My "real" box that I do most of my work on runs Vista
Re: (Score:2)
Yeah, that's one thing Microsoft got right. I mean, it's an HOURGLASS that never stops running! Incredible!
Oh wait. They replaced it with a teal pinwheel in Vista, I forgot. Pfft.
Re: (Score:2)
Re: (Score:2)
Does it... (Score:4, Interesting)
Re:Does it... (Score:5, Informative)
You could tweak things to make this a less likely occurrence though.
Disable overcommit by echo 2 > /proc/sys/vm/overcommit_memory
Set some memory limits in
Avoid having too much swap space. It's awfully slow; if you're using it too much, all you'll manage is to run more things, slower.
Get more RAM, it's cheap. If you're regularly swapping then you definitely should.
Re: (Score:2)
I'm far from knowledgeable about what's possible right now using the various tuning knobs. I guess I'm surprised that the GUI doesn't get priority over this sort of runaway process, but I have to temper that by saying I've never played with adjusting the nice levels of the relevant processes.
Increasing the RAM size is not a solution though, since the kind of runaway process that causes the freeze will allocate everything it can anyway.
Re: (Score:2)
That is because the GUI is just a set of processes running under the same mechanism, not some special part of the kernel or something like that.
Re: (Score:2)
Re: (Score:2)
So if you have a program that hogs the CPU - be nice(1) to it!
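A minimal sketch of what that looks like in practice (the command name and PID below are made up for illustration):
nice -n 19 ./cpu_hog          # start a CPU-bound job at the lowest priority
renice +15 -p 12345           # or demote one that is already running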
Re: (Score:2)
For one, in Linux the process priority is dynamically adjusted. So a program that hogs the CPU will automatically decrease in priority, so that it gets whatever CPU time remains after other processes that use little CPU have had their share. It will not really starve lower-priority processes, as happens with a purely priority-driven scheduler with static priorities (found in realtime kernels, in Windows NT, etc.).
But, another issue is that a process that makes the system slo
Re: (Score:2)
Re: (Score:2)
Say, what if you really need to run a process that causes the box to swap like mad? It could be that you're, say, trying to build MAME, which seems to have a couple of files that make gcc consume about 512MB of RAM. Now what if you need to do this on a box with just 384MB? Having the scheduler keep pausing it would only make it take longer.
Then, the most evil type of swap death is a positive feedback loop. For example, mail servers. Too
Re:Does it... (Score:5, Informative)
FreeBSD likes lots of extra swap space. An idle system will notice that some process hasn't run in a month and will push it to swap, proactively freeing RAM for something else that might want it. Note that it will only page out a process's data segment; its code segment uses the filesystem itself for paging (why copy "firefox" into swap when there's already a perfectly readable copy on the filesystem?).
Unless, of course, you unlink its executable file, in which case it allocates swap to hold the file [freebsd.org] first. Which also illustrates that while unnecessary computational complexity is bad, a willingness to do complex things when the situation demands it can lead to some pretty cool stuff.
Re: (Score:2)
Re: (Score:2)
Not to mention, despite what BSD does to proactively free RAM, you don't want to do that unless there is a shortage of RAM in the first place. After all, the program that has been idle for a month might kick up and do something, and if nothing else needs the RAM it is using, it will be more responsive if it is still in RAM than if it is sitting in swap on a box wit
Misunderstanding... (Score:3, Interesting)
Non-proactive case:
-kernel sees malloc, knows it lacks physical memory to accommodate, malloc is blocked while kernel does housekeeping.
-kernel picks the appropriate amount of pages to write to swap, then writes those pages to swap space, taking a while since block storage IO is excruciatingly slow.
-A
Re: (Score:2)
Re: (Score:3, Informative)
ulimit -v 4096
command_that_uses_memory
This will limit the amount of virtual memory available to command_that_uses_memory; once that limit is reached its allocations start failing, which usually kills it. But do you really want firefox forcibly killed every time you visit youtube?
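One practical note (my own addition, not from the poster above): run the limit in a subshell so it doesn't stick to your login shell, and remember the value is in kilobytes:
( ulimit -v 1048576; command_that_uses_memory )    # cap at roughly 1 GB; the limit vanishes with the subshell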
Re:Does it... (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
Just curious: it has been many years since FreeBSD offered me performance advantages over Linux. These days it's pretty much the contrary; the last time I tried the supposedly SMP-optimized newest versions of FreeBSD, the system would fall into FreeBSD's Big Giant Lock doing some simple dist tasks on a 2-CPU machine. And when I want a BSD-ish Unix OS I've got OpenSolaris....
Re: (Score:2)
For me, ctrl-alt-Fn always works, as does ctrl-alt-backspace, but if those don't work there are much more serious problems for me to worry about.
As long as I've been using freebsd, I have had no problems with the scheduling. The scheduling for Linux is hopefully better now, because last time I loaded it up the scheduler was compl
Interestingly rigorous (Score:3, Interesting)
Re:Interestingly rigorous (Score:4, Informative)
On Fri, 31 Aug 2007, Ingo Molnar wrote:
> So the most intrusive (math) aspects of your patch have been implemented
> already for CFS (almost a month ago), in a finegrained way.
Interesting claim, please substantiate.
> Peter's patches change the CFS calculations gradually over from
> 'normalized' to 'non-normalized' wait-runtime, to avoid the
> normalizing/denormalizing overhead and rounding error.
Actually it changes wait-runtime to a normalized value and it changes nothing about the rounding error I was talking about. It addresses the conversion error between the different units I was mentioning in an earlier mail, but the value is still rounded.
> > This model is far more accurate than CFS is and doesn't add an error
> > over time, thus there are no more underflow/overflow anymore within
> > the described limits.
> ( your characterisation errs in that it makes it appear to be a common
> problem, while in practice it's only a corner-case limited to extreme
> negative nice levels and even there it needs a very high rate of
> scheduling and an artificially constructed workload: several hundreds
> of thousand of context switches per second with a yield-ing loop to be
> even measurable with unmodified CFS. So this is not a 2.6.23 issue at
> all - unless there's some testcase that proves the opposite. )
> with Peter's queue there are no underflows/overflows either anymore in
> any synthetic corner-case we could come up with. Peter's queue works
> well but it's 2.6.24 material.
Did you even try to understand what I wrote? I didn't say that it's a "common problem", it's a conceptual problem. The rounding has been improved lately, so it's not as easy to trigger with some simple busy loops. Peter's patches don't remove limit_wait_runtime() and AFAICT they can't, so I'm really amazed how you can make such claims.
> All in one, we dont disagree, this is an incremental improvement we are
> thinking about for 2.6.24. We do disagree with this being positioned as
> something fundamentally different though - it's just the same thing
> mathematically, expressed without a "/weight" divisor, resulting in no
> change in scheduling behavior. (except for a small shift of CPU
> utilization for a synthetic corner-case)
Everytime I'm amazed how quickly you get to your judgements...
BTW who is "we" and how is it possible that this meta mind can come to such quick judgements?
The basic concept is quite different enough, one can e.g. see that I have to calculate some of the key CFS variables for the debug output. The concepts are related, but they are definitively not "the same thing mathematically", the method of resolution is quite different, if you think otherwise then please _prove_ it.
bye, Roman
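For anyone trying to follow the normalized vs. non-normalized argument, here is a generic proportional-share sketch (an illustration only, not the actual CFS or RFS code): if task $i$ has weight $w_i$ and has run for $r_i$, a fair scheduler wants the normalized runtimes $v_i = r_i / w_i$ to stay as equal as possible and always picks the task with the smallest $v_i$. Updating $v_i$ by $\Delta v_i = \Delta t / w_i$ on every accounting step performs a division each time, and each division's rounding error accumulates in $v_i$. A non-normalized formulation instead keeps the raw $r_i$ (exact integer additions) and only brings the weights in at comparison time, e.g. by picking the task that minimizes $r_i \cdot W / w_i$ for some fixed scale factor $W$, so no per-update error accumulates. How much that difference matters in practice is exactly what is being argued above.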
Re:Interestingly rigorous (Score:4, Insightful)
Example:
I think you misunderstood me. It may not be a common problem, but it is a conceptual problem. The rounding has been improved lately, so it's not as easy to trigger with some simple busy loops. Peter's patches don't remove limit_wait_runtime() and AFAICT they can't, so I don't see how what you said can be correct.
I'm worried about how quickly you judged this issue, and that you haven't been more in contact with me discussing it. This issue is important to me, and I'd really like to work with you to get it resolved.
Re: (Score:3, Interesting)
(emphasis mine)
Very true, but I have this suspicion that some hackers' rudeness is intended to piss people off and keep the field, the spotlight, and the presumed "glory" to themselves.
Sad thing is, it works a lot of the time, and you can always blame old
Re: (Score:3, Interesting)
People prefer verbal reasoning, even though all kinds of logical errors can slip in undetected, for the simple fact that they can read it at the speed of speech -- even if they really shouldn't.
This is PAINFULLY evident in the software world. I imagine even kernel developers tend to be lazy this way.
Math is only reliable up to a point (Score:4, Insightful)
Not that maths isn't useful, but much of the time it can't give you definitive answers for the questions you really want answers to, only somewhat related, simpler ones.
Re: (Score:2)
The Infintely Fair Scheduler of Solomon (Score:5, Funny)
Shit, I've just figured out why I'm a project manager.
Re: (Score:3, Informative)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Otherwise known as the Heisenberg Uncertainty Scheduler.
The main problem with this is that you can know which process is scheduled or which will be next, but not both. In fact, the act of scheduling would probably alter the scheduler itself.
This post (Score:4, Funny)
Automatically generated by:
Slashdot Predictive Post Scheduler v 2.12.02-16
Re: (Score:2)
More flame bait? (Score:5, Insightful)
The comments on the article at the linked-to site suggest that there are potential flaws in the logic behind the Really Fair Scheduler, and that its author has ignored advancements in CFS that make most (or all?) of its improvements irrelevant. Also there are many suggestions that the author of the Really Fair Scheduler, some guy named Roman something-or-other, is raging on the kernel lists rather than working cooperatively to improve the Linux scheduler.
Given what I have seen, I suspect that the Really Fair Scheduler is going nowhere, and that "derrida" knows that and is just trying to add more fuel to the flame-fire by posting about it on Slashdot.
Re: (Score:2)
I don't know who you are but in cases like this we need facts and not assumptions, not perceptions, not mild understandings of issues.
Re: (Score:2)
Re:More flame bait? (Score:5, Insightful)
In order to help give substance to the debate, Roman coded up some proof-of-concept stuff, but instead of his architectural ideas being looked at seriously and critically, Ingo instructs him to strip away most things and "we'll use it." To everyone on the sidelines, that really makes it seem like Roman's ideas are being ignored without debate. Now, maybe Ingo is being polite, Roman's work just sucks, and Ingo won't confront him on it. But if that's not the case, maybe there should be a (non-flamey) debate about the best architecture for the scheduler.
Re: (Score:3, Funny)
My name is Ingo Molnar, you kill -9ed my scheduler. Prepare to oops!
ingo's reply (Score:5, Informative)
Linux Kernel Whining List (Score:2, Funny)
Re: (Score:2)
Re: (Score:3, Interesting)
Re: (Score:2)
I actually think that it's at the heart of why Linus has given Ingo the go-ahead to do the CFS scheduler, because ultimately the CFS and -rt schedulers will be one and the same, or CFS layered on top of -rt. What this means is more use of the vanilla kernel in embedded devices instead of the 'other' real-time Linux derivatives such as RTLinux from FSMLabs and the RTAI patch.
But where is the Linux IO Scheduler? (Score:5, Insightful)
"Now, as far as this bug being AMD64 only. We develop a portable data analysis
tool and we run it on Intel Core Mobile systems (Sony UX series, Panasonic
Toughbook series) and see this bug or one almost exactly like it on those
platforms as well.
"
http://bugzilla.kernel.org/show_bug.cgi?id=7372 [kernel.org]
http://bugzilla.kernel.org/show_bug.cgi?id=8636 [kernel.org]
http://www.nabble.com/IO-activity-brings-my-deskt
http://forums.gentoo.org/viewtopic-t-482731-start
At first, deadline IO was touted as an answer, but that doesn't completely fix things.
Some say Native Command Queueing is broken. One person claims deadline + NCQ disabled helps.
Some say the kernel's vfs_cache_pressure settings help, while others refute it (compare kernel bug report versus page 21 of the gentoo forum thread). But no one understands what's really broken in the kernel.
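For anyone who wants to experiment, a rough sketch of the workarounds mentioned above (the device name and values are examples, not a recommended configuration):
echo deadline > /sys/block/sda/queue/scheduler    # switch that disk to the deadline IO scheduler
echo 1 > /sys/block/sda/device/queue_depth        # a queue depth of 1 effectively disables NCQ
sysctl -w vm.vfs_cache_pressure=50                # one of the vfs_cache_pressure values people have tried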
Can we please get Ingo working on IO scheduling? PLEASE?
Mod parent up (Score:3, Insightful)
Smarter write throttling is the answer (Score:5, Interesting)
This seems to be aggravated by a number of conditions listed in the links in the parent post, and it's also aggravated when using ext3 with ordered data journaling (which is the default on most systems).
There is some work being done to reduce the huge latency in reads that can occur during heavy write loads with the "per device dirty throttling" patchset. Initial results look very promising.
LWN article: Smarter write throttling [lwn.net]
per device dirty throttling -v8 [lwn.net]
This patch set seems to hold a lot of promise in being able to fix this problem, but I'm not sure what the latest status is or what kernel it will make it into. It could make it into 2.6.24 at the earliest.
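In the meantime, one possible stopgap (my own suggestion, values purely illustrative) is to shrink the existing global dirty-memory thresholds so writeback starts earlier and writers are throttled sooner:
sysctl -w vm.dirty_background_ratio=2    # start background writeback at 2% of memory dirty
sysctl -w vm.dirty_ratio=10              # block writers once 10% of memory is dirty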
Re: (Score:3, Informative)
huge improvement with per-device dirty throttling [lkml.org]
And the thread referencing the latest version of the patch posted to lkml:
per device dirty throttling -v9 [lkml.org]
mirror, mirror, on the RAID (Score:4, Funny)
Re: (Score:2)
Sausages (Score:5, Funny)
-- Otto von Bismarck (paraphrased)
User Driven Scheduler (Score:4, Funny)
Now for the important question (Score:3, Insightful)
Next week: (Score:4, Funny)
fair, unfair, deal with it (Score:2)
Suggestions for next iterations: Ass of a scheduler, bastard scheduler, unfair bully scheduler, depressed goth scheduler... (I will leave the exercise of figuring out the allocation semantics to the reader)
Review feedback (Score:5, Informative)
Oh my gosh, the Linux scheduler is on Slashdot. Again! :-)
Frankly, this amount of interest in the Linux scheduler is certainly flattering to all of us Linux scheduler hackers, but there are certainly more important areas that need improvement: 3D support, the MM / IO schedulers, stability, compatibility, etc. (There's also the FreeBSD scheduler that went through a total rewrite recently - and it got not a single Slashdot article that i remember.)
But i digress. A couple of quick high-level points (most of the details can be found in the discussions on lkml):
I find the RFS submission interesting and useful, and i have asked the author to split the patch up a bit better, to separate the core idea from optimizations and unrelated changes - to ease review and merging of the changes, and to make the changes bisectable during QA after they have been applied to the mainstream kernel. (That is how patches are typically submitted to the Linux-kernel mailing list - it's a basic requirement before anything can be merged. CFS for example was applied to the 2.6.23 development tree in form of a series of 50 (!) separate patches. (And the scheduler works at every patching/bisection point.))
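(For readers unfamiliar with that workflow, a rough illustration; the revision names below are placeholders, not the actual CFS branches:)
git format-patch -o series/ v2.6.22..scheduler-work    # split the work into one mailable patch per logical change
git bisect start v2.6.23-rc4 v2.6.22                   # later, bisection can pin a regression down to a single patch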
I also pointed him to the latest "bleeding edge" scheduler tree, which already implements the same non-normalized form of math and makes some of the rounding and performance arguments moot i believe. (lkml mail [iu.edu]).
There are some issues where i disagree with Roman at the moment: even when comparing to unmodified current upstream CFS, i think Roman makes too much out of rounding behavior and i have asked him to substantiate his claims with numbers (lkml mail) [iu.edu].
The current precision/rounding of CFS is better than one part in a million. (in fact it's currently even better than that, but i'm saying 1:1000000 here because we could in the future consciously decrease precision, if performance or simplicity arguments justify it.)
I can understand his desire towards creating interest in his patch, but IMO it should not be done by unfairly (pun unintended ;) trash-talking other people's code. The math code in CFS that achieves precision has gone through more than 5 complete rewrites already in the 20-plus CFS versions, and the current variant was not written by me but was largely authored by Thomas Gleixner and Peter Zijlstra.
New, better approaches are possible of course and the math is relatively easy to replace, due to the internal modularity of CFS. So we are keeping an open mind towards further improvements. (which includes the possibility of total replacements as well. Dozens of times has my own kernel code been replaced with new, better implementations in the past - and that includes large parts of the scheduler too. In fact only ~30% of current kernel/sched.c was authored by me, the rest has been written by the other 90+ scheduler contributors, according to the git-annotate output that covers the past ~2.5 years of kernel history. Beyond that numerous other people have contributed to the scheduler in the past.)
About the submitted code: it was a bit hard to review it because the new code did not contain any comments - it only included raw code - which is very uncommon for patches of such type. The email gave the theoretical background but there was little implementational detail in the patch itself connecting the theory to practice.
So to drive this issue forward i have today posted a question to Roman in form of a tiny patch [iu.edu] that extracts only his suggested new math from his patch and applies it to CFS. If it is indeed what Roman intended then we can analyze that in isolation and in more detail. The patch is as small as it gets:
Re: (Score:3, Informative)
It should be pointed out to all kernel hackers: the kernel is the product, not a place for their pet projects unmodified. No offense to Roman. This part of the code is a bit beyond me, but your approach to his patches seems reasonable. I hope he follows up with the patches you requested. We all want a faster "fair" scheduler.
Like many here, I was intr
Re: (Score:3, Insightful)
Re: (Score:2)
Re:Coming soon (Score:5, Funny)
Re: (Score:2)
What about the neocon scheduler? (Score:2, Funny)
This is a "great" way to run things and if it ever goes to a vote, I hope lkml ops can be convinced to go the diebold route.
Re:Coming soon (Score:5, Insightful)
Re:Coming soon (Score:4, Informative)
Re: (Score:2, Interesting)
I experimented with it, but not in depth. As far as I remember, ionice didn't help a lot compared to a real mainframe I/O scheduler. I have always felt that Linux was weak on I/O scheduling, and other posts tend to confirm what I suspect.
Now, if you tell me that I can do real I/O scheduling with ionice and that you have managed to accomplish that, I might give it a second try, more in depth this time.
Also, please specify kernel tweaking parameters to cause ionice to act as a real I/
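For reference, basic ionice usage looks something like this (it only has an effect under the CFQ IO scheduler, and the PID is an example):
ionice -c3 -p 8999             # put PID 8999 into the idle IO class
ionice -c2 -n7 updatedb        # run a command best-effort at the lowest priority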
Re: (Score:2)
It's not in the debian repositories so I get the feeling there is something wrong with it. (?)
Re:What about the really greedy scheduler... (Score:4, Funny)