ArsTechnica Explains O(1) Scheduler 295
geogeek writes "The recent release of Linux's 2.6 kernel introduces a number of sweeping improvements. These can be hard to understand without a background in programming. This week's Linux.ars examines the improved scheduler for an enthusiast audience, concisely explaining its nuances and the practical effects of an O(1) design."
Ars' Piece (Score:5, Interesting)
Re:Ars' Piece (Score:5, Informative)
There is one exception to this load balancing rule -- some special processes may be fixed to a certain runqueue. This attribute is called "thread affinity;" a thread is simply another name for a process.
I think it over-simplified this a bit. Granted, Linux doesn't make all that much distinction between a "thread" and a "process" as compared to (say) Windows, but the distinction is still important.
Otherwise, it's a decent article, and gave me some insight to one of the major improvements in 2.6 (which I've yet to try out).
I remember when the "pre-empt" patch came out for 2.4, before it was integrated into the kernel. For a desktop (or otherwise interactive, like my media box) system, it really made a major perceived improvement in performance (while *actual* performance wasn't affected; it was just about priorities and scheduling, making it "feel" more responsive).
The O(1) scheduling algorithm likely improves this further, and is enough to tempt me to spend XMas evening upgrading... these things are important, especially on a desktop system.
Windows XP, for example, isn't all that great in this area, and often one process can too easily slow things down. This is even further emphasized when all the processes you're working with are actually the main "explorer.exe" process; e.g., you do something in Windows Explorer that blocks (spin up a CD drive) and everything else (the "desktop", task bar, etc) all become unresponsive...
I enjoy seeing such improvements in the Linux kernel. The pre-empt patch (later integrated into the mainstream kernel) made a drastic usability improvement (especially when the box is also a web server, MySQL server, firewall/router, etc) and I couldn't see a Windows box handling similar tasks without becoming unusable (note: not unstable, just not practical to use when responsiveness is hindered).
I think the kernel team is on the right track with this, specifically for desktop applications (though it does help in many other applications as well; anything interactive, really, even over the network/Internet).
Re:Ars' Piece (Score:3, Insightful)
It doesn't just "feel" more responsive, it is more responsive! It's a matter of definition, but I'd certainly call that an improvement of performance. Taking from the article (or comp.sci classes), while the throughput remains equal - I suppose it's even reduc
Re:Ars' Piece (Score:2)
I agree, though what I meant to say was that it feels faster. It certainly is more responsive, but it's not any faster.
I recall when the pre-empt patch first came out, it was tested on a variety of systems. On a desktop-like environment it made a huge difference (responsiveness) but in a server environment, it made very little difference.
So yes, it certainly is more responsive to any interactive application, but for sheer CPU loads it didn't
Re:Ars' Piece (Score:2)
What you are probably trying to say is that, while throughput is the same, latency has been decreased.
Re:Ars' Piece (Score:5, Interesting)
I actually saw this behavior once on a customer's machine. I closed the CD door, and the machine went unresponsive for several seconds. I was shocked. Then I realized: millions of people think this is normal. And those millions of people live in 2003, not 1983. It was like some sort of Lovecraftian revelation.
CDs bringing Windows to a stall (Score:2)
Yes, this is MAJORLY bad. I've had whole machines become totally unusable (mouse moves, that's about it) simply because of a dirty CD put in the drive, or a buggy drive (perhaps the mechanism is loose, etc). Windows wants to down the entire machine to devote all computational power to waiting for the CD drive! Linux is not off entirely scot-free, but it's defin
Re:GUI layer also has performance bottlenecks (Score:2)
Yes, but KDE being laggy won't impact Apache or ProFTPd running in the background. Put a CD in the drive, and *everything* on a Windows box slows to a crawl until the OS reads the CD.
Re:Ars' Piece (Score:5, Insightful)
The kernel hasn't slowed down in that case; it's the GUI layer which has slowed down. It hit a bottleneck.
Interestingly, it would appear that Windows NT and its successors have used an O(1) scheduler since at least NT 4.0 (released in 1996).
It's also interesting to note that it has the same priority boosting for I/O threads that Linux has.
"Windows NT also uses additional priority adjustments based on other events, such as momentarily boosting thread priority when it returns from an I/O call"
Detailed explanation of the NT4.0 scheduler [microsoft.com]
Easier to read version of the above [google.com]
Part 1 of a slightly more in-depth (but in some places inaccurate)* article [winntmag.com]
Part 2 of the article [winntmag.com]
*The article makes it sound like it's O(N) in places; it's not. The scheduler uses a set of priority lists, which are kept in order at insertion time: when a thread leaves the CPU, it inserts itself at the end of the list for its priority.
Re:Ars' Piece (Score:2)
Re-read it. If you still don't get it, then re-read it. If you still don't get it...
(tig)
Re:Ars' Piece (Score:4, Informative)
Multi core proc's (Score:3, Insightful)
Re:Multi core proc's (Score:3, Insightful)
Re:Multi core proc's (Score:5, Informative)
Well, only if you want to run only as many threads as you have CPUs. The windows machine I'm using right now has 37 processes, some I'm sure with multiple threads. I suppose you could have a hardware scheduler, but I don't think it would be a good idea. Scheduling doesn't take much time (certainly not this O(1) jobby), so you would lose a lot of flexibility without a lot of gain.
Many programs are multithreading now... (Score:2)
Did a quick check here, 54 processes, 461 threads. Something tells me I won't see a computer with 4-500 CPU cores in my desktop, ever. Not to mention that if you did, I'd want some to be more important. I'm sure many of those threads are just sitting there waiting for input, e.g. GUI threads...
Kjella
Re:Multi core proc's (Score:3, Interesting)
Re:Multi core proc's (Score:2)
And yes, it would still require OS support, because the processor would have no idea when the current task is complete
If every OS maker settled on a standard... (Score:2)
initramfs (Score:2, Interesting)
checking if image is initramfs...it isn't (no cpio magic); looks like an initrd
I couldn't google much info on it. Anybody know more about it? Or how I can stop the kernel checking for it.
Re:initramfs (Score:3, Interesting)
I couldn't google much info on it. Anybody know more about it?
The short answer is that the purpose of this message is to help people familiar with the kernel diagnose problems which prevent completion of bootup. Your system works, so ignore it. Preventing the message is more trouble than it's worth.
If you are interested, there is a little more info here [linux-mag.com].
Couldn't possibly have been modded off-topic for... (Score:3)
Could there possibly be a reason for an off-topic mod? Let's see:
-- Article/thread about an article about the Linux scheduler.
-- Respondent asks a technical support question having nothing to do with the article referenced or the scheduler in general.
-- Respondent gets modded down for jumping off topic, because this isn't a Linux support forum, nor is it generally an audience who would want it to *become* a support forum.
-- Someone who doesn't lik
A more detailed description of O() notation... (Score:4, Informative)
Hmm. (Score:5, Interesting)
Instead the article was just "This is what O(1) is. This is what a scheduler is. This is what "preemptive" means. The 2.6 SMP can load-balance effectively across many processors. I used this really cool mp3 player this week." and just about nothing else.
Anyone want to explain exactly how it is that an O(1) scheduler is a difficult thing, and how exactly linux 2.6 achieves it?
-- Super Ugly Ultraman
Re:Hmm. (Score:5, Informative)
The original article was MUCH MUCH more in-depth; it was toned down because of the "glaze over" effect people were getting. I didn't have much say in the editorial process on the matter.
I think this way it at least appeals to a broader audience, not just the CS audience who would know what a binary min-heap is if I were to use it.
To each his own I suppose.
Re:Hmm. (Score:2, Interesting)
Would you be willing (or able) to post your original version of the article somewhere for interested parties to read?
Re:Hmm. (Score:2)
Re:Hmm. (Score:5, Informative)
Ah-ha! (Score:2, Insightful)
Question (Score:3, Interesting)
The mechanism for recalculation of timeslices in previous Linux kernels was very simple. When every process had its timeslice completely depleted (they were all 0), the kernel would simply go through every process, recalculate its timeslice, and start execution again at the highest-priority runnable process. While this is the most obvious solution, it is also very inefficient, executing in O(n) time.
Ok, it's easy to see why this is O(n).
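As a rough illustration (not the actual 2.4 kernel code), the recalculation described above boils down to a single pass over every task. The struct fields and the timeslice formula below are invented for the sketch:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of the 2.4-style recalculation: once every runnable
 * task has exhausted its timeslice, walk the whole task list and hand out
 * fresh slices.  The single pass over all n tasks is what makes it O(n). */
struct task {
    int priority;   /* illustrative: smaller number = more important */
    int timeslice;  /* ticks remaining in the current epoch */
};

int all_slices_depleted(const struct task *tasks, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (tasks[i].timeslice > 0)
            return 0;
    return 1;
}

void recalculate_all(struct task *tasks, size_t n)
{
    /* O(n): every task gets a new slice derived from its priority
     * (the formula here is made up for the example). */
    for (size_t i = 0; i < n; i++)
        tasks[i].timeslice = 20 - tasks[i].priority;
}
```

The cost of `recalculate_all` grows linearly with the number of tasks, and in 2.4 it had to run, all at once, every time the last timeslice ran out.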
The 2.6 scheduler uses a simple yet effective method for getting
Re:Question (Score:5, Informative)
I think the majority of the major improvements in the 2.6 kernel over 2.4 were aimed at increasing system responsiveness.
(and yes, I'm using my Karma bonus)
Re:Question (Score:2)
How is this not O(n)? The time slice calculation still occurs for each process, just not all at once for all processes. Each process still gets its time slice calculated, it is removed from one queue, and inserted into another. Is there some other unmentioned trick that eliminates the calculations? Or was there something else that made the 2.4 scheduler O(n), such as finding the highest priority process?
Well, the scheduler spends roughly the same amount of time scheduling a thread irrespective of the numb
Re:Question (Score:2)
In this case, the 1*O(n) = n*O(1) doesn't fit this example. If I understand everything correctl
Re:Question (Score:5, Informative)
Let's see if I can remember anything from my algorithms class.
How is this not O(n)?
Read on for an explanation. I do believe you gave the answer to your own question in your post. I will also assume that you understand "Big-O" notation well enough to follow the answer.
Each process still gets its time slice calculated, it is removed from one queue, and inserted into another.
Correct. You've nailed it. Insertion into a queue operates in O(1) time. Removal from the queue is also a constant-time operation. A queue inserts at the tail of the list (or the end of the array) and dequeues from the head of the list (or the beginning of the array). Queues do not permit searching, sorting, or accessing the data past the first element.
If we dequeue a process, run it, then calculate some information and queue it up again, that runs in c1*O(1) + c2*O(1) = O(1).
This is still an oversimplification and does not take into account the possibility of a priority queue, in which insertion runs in O(lg n) and dequeuing runs in O(1). That is why a heap sort using a priority queue runs in O(n lg n).
Hope that helps.
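To make the constant-time claim above concrete, here is a minimal ring-buffer FIFO. The names and capacity are invented for the sketch, but it shows why enqueue and dequeue each touch a fixed number of fields no matter how many items are queued:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal fixed-capacity FIFO (a ring buffer).  Both operations update a
 * constant number of fields regardless of how many items are queued,
 * which is the O(1) property discussed above. */
#define QCAP 64

struct fifo {
    int items[QCAP];
    size_t head, tail, count;
};

int fifo_enqueue(struct fifo *q, int v)    /* O(1): one write, two index updates */
{
    if (q->count == QCAP)
        return -1;                         /* full */
    q->items[q->tail] = v;
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    return 0;
}

int fifo_dequeue(struct fifo *q, int *out) /* O(1): one read, two index updates */
{
    if (q->count == 0)
        return -1;                         /* empty */
    *out = q->items[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    return 0;
}
```

Neither function loops over the stored items, so the cost is the same whether the queue holds one process or thousands.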
Re:Question (Score:2, Informative)
Depends on whether you're measuring the time to select another process to run, or the overhead necessary to do correct scheduling. Timeslice recalculation must happen, that's true. But on 2.4, it needed to happen in conjunction with other information gathered from the task list. There was one queue:
Uneducated guess... (Score:3, Informative)
...but wouldn't the fact that only one process is operated on at a time (i.e. a fixed amount of computing), as opposed to scanning the entire list of processes (could be one... could be thousands) each time, make the difference between O(n) and O(1)? Sure, things will be slower if you have 10,000 things to check as opposed to one, but at least here we're not checking 10,000 things for each and every one in the list...
(IANAKH)
Interesting (Score:2)
However, there are some stones left unturned.
I wonder how the different scheduler classes get scheduled.
And how does the scheduler differentiate load balancing for processes running on the same core (SMT) versus on different processors?
It would be nice to have a more technical article published somewhere...
Re:Hmm. (Score:2)
And it is only so if you consider the knowledge about schedulers to be the ultimate benchmark for smarts.
sidebars??? (Score:3, Insightful)
Couldn't they have linked out into more in depth treatments, and saved the complexity for the interested (technically) reader while saving 'readability' for the non-technical ppl?
Preemption defined incorrectly, I think (Score:5, Insightful)
Timeslices are used in order to implement preemption: a process stops either when it blocks (waiting for I/O) or otherwise voluntarily gives up the rest of its timeslice, or when its timeslice is used up.
About the article in general: I'm surprised about the dumbing down. Other articles on Ars, including but not limited to the series about processor technologies (e.g. the one about hyperthreading) are much more thorough and detailed.
Re:Hmm. (Score:3, Informative)
Re:Hmm. (Score:5, Interesting)
By that I mean the 'default'/'main' article having the not-too-geeky/JoeSixPack-reads-slashdot version, AND includes a link to "for those who want the gory details".
That way the editors have an article which appeals to the vast majority of "reasonably intelligent" people, as well as catering to the hardcore techo-savvy crowd. (ie wrote a brainfuck program before breakfast)
Dumbing Back Up Again (Score:2)
I have a modest suggestion for you to relay to your editor. There's this really cool feature of HTML that lets you put in a 'link' to another document. If your 'dumbed-down' article would use one or more of these to let the more technically-minded pursue that detail, you'd have the best of both worlds - an Executive Summary for PHBs and the nitty-gritty look
Re:Hmm. (Score:2)
Good lord
Seriously, demand a reprint. They are destroying your career over there by making you look about as knowledgable as a
Re:Hmm. (Score:2)
Re:Hmm. (Score:2)
Right, and then you're surfing a linked list. Why not use a dynamic array? At least then you have a table of pointers (sorta) so you don't have to loop through the list.
The problem with looping through a list like you're doing is that that operation is O(n). I don't know exactly the O notation for when you stop as soon as you reach the task you're looking for, though.
Re:Hmm. (Score:3, Insightful)
Well, they (afaik Ingo Molnar) created a universal scheduler which should work properly on a user workstation and on a server, on uni- and multiprocessor systems, with small latencies and without performance loss. It may be easy to create a model in your head, but a proper implementation takes a lot of effort, and the keyword is 'proper'.
Re:Hmm. (Score:5, Informative)
Re:Hmm. (Score:2, Interesting)
Re:Hmm. (Score:5, Informative)
There are two queues used in the Ingo scheduler. One contains processes that are ready to run. The other contains processes that have already run. The run queue is scanned, and when a process is found that can be run, it is automatically run without scanning the rest of the queue. When the process finishes its time quantum, it is put into the second queue.
When the first queue (the active one) is empty, the queues are switched. This is just done by swapping the pointers to the queues.
This scheduler is O(1) because it runs the first runnable process it finds, without scanning the entire queue as an O(n) scheduler would do.
If you want a bit more information on it, search for the "Ingo scheduler" on Google or your other search engine of choice. I don't see any detailed explanations of what happens, but I suspect as the 2.6 kernel goes into wider use, those will come.
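A toy model of the active/expired pair described above might look like this. It is not the kernel's code (each "queue" is reduced to a bare counter, and all names are invented), but it shows that the end-of-epoch switch is a pointer swap, which costs the same regardless of how many processes are queued:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the active/expired runqueue pair.  The detail that matters
 * is the pointer swap at the end: it is O(1), no matter how many tasks
 * are sitting in either queue. */
struct runqueue { size_t nr_tasks; };

struct scheduler {
    struct runqueue a, b;
    struct runqueue *active;   /* tasks that still have timeslice left */
    struct runqueue *expired;  /* tasks that have used up their slice */
};

void sched_init(struct scheduler *s, size_t runnable)
{
    s->a.nr_tasks = runnable;
    s->b.nr_tasks = 0;
    s->active = &s->a;
    s->expired = &s->b;
}

void timeslice_expired(struct scheduler *s)
{
    if (s->active->nr_tasks == 0)
        return;
    /* one task used up its slice: move it active -> expired */
    s->active->nr_tasks--;
    s->expired->nr_tasks++;
    if (s->active->nr_tasks == 0) {
        /* the O(1) epoch switch: just swap the two pointers */
        struct runqueue *tmp = s->active;
        s->active = s->expired;
        s->expired = tmp;
    }
}
```

Swapping the pointers is what replaces the old O(n) recalculation pass: by the time the active queue empties, every task has already received its new timeslice on its way into the expired queue.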
Re:Hmm. (Score:5, Informative)
Your assumption is flawed, though. You seem to be assuming that only one process in the queue is runnable. We can't assume that at any given time there's only one runnable process.
I believe you have a point about the worst case of the algorithm. That's not really relevant here, though. Consider the Quicksort algorithm. It has a nasty worst case: data that's already sorted. As a matter of fact, the worst case is O(n^2) and approaches the slowness that is the bubble sort. Despite this well-documented worst case, we say that quicksort is an O(n*log n) algorithm, which is the more typical case.
I believe the worst case of the Ingo scheduler is O(n). If only one process can be run, it indeed has to check n/2 processes on average, and therefore is O(n).
You raise a good question, and figuring out what order an algorithm is, is something I'm not particularly good at. Generally the average case is what matters, but I don't know how you determine what the average case is for the state of processes.
Anyone care to try to explain this?
Re:Hmm. (Score:3, Informative)
Re:Hmm. (Score:2)
Again, I'm not sure what the average case for the state of the queue would be, but that's not it.
Re:Hmm. (Score:2)
Simply from reading descriptions here, it seems to me that this is the exact problem that O(1) solves; as each process is given its time slice, it is moved into the "already did that" list... therefore the kernel will never find a process in the list that hasn't run yet; it has already been moved out of that list and is therefore out of mind until the lists are swapped. The only challenge then is sorting based on priority, which at this point in my post-Christmas drunken-ness is probably beyond my comprehens
Look under "Timeslices and niceness" (Score:2)
"The scheduler keeps track of all these timeslices with two lists of processes. The first list contains processes that have yet to use their timeslice; the second contains those processes which have used their time. When a process uses its timeslice, the scheduler calculates a new timeslice (...). The process then gets inserted in the second list. When the first list becomes empty, the second list takes the place of the first, and vice-versa."
That'
Re:Hmm. (Score:2)
The article appears to say that 2.6 divides processes into a fixed number of priority levels, rather than having what could in theory be a large number of possible priorities--and that's what lets you use constant time methods.
Once you set that fixed number, you can do things like keep an array of lists subscripted by priority level and then a single value whose bi
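The array-of-lists-plus-bitmask idea can be sketched like this. The 32-level width and the function names are invented for illustration (the real scheduler uses 140 levels), but the point is that finding the highest-priority non-empty level scans a fixed-width word, independent of the number of processes:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the "single value whose bits mark the priority lists" idea:
 * bit i of the mask is set when priority level i has at least one runnable
 * task.  Lower bit index = higher priority, so picking the next level to
 * run is a bit scan over a fixed-width word -- O(1), independent of the
 * number of processes.  32 levels here are illustrative only. */
#define NLEVELS 32

uint32_t mark_runnable(uint32_t mask, int prio) { return mask |  (1u << prio); }
uint32_t mark_empty(uint32_t mask, int prio)    { return mask & ~(1u << prio); }

int highest_ready(uint32_t mask)
{
    if (mask == 0)
        return -1;                      /* nothing runnable */
    int prio = 0;
    while (!(mask & (1u << prio)))      /* at most NLEVELS iterations */
        prio++;
    return prio;
}
```

In real code the scan would typically be a single find-first-set instruction rather than a loop, but even the loop above is bounded by the fixed number of levels, not by the number of processes.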
Re:Hmm. (Score:2)
Recently I have seen a lot of Dodge(TM) commercials where they say something to the effect of "it's got a Hemi". I presume it is not supposed to be some sort of self-parody, which leads me to think they must have some evidence that some people will somehow think if "it's got a Hemi", that must be something cool. Like "Intel Inside" or "hyperthreading". From a technical perspective, I do not think there is anything pa
Re:Hmm. (Score:2)
Hmm, it's like this:
O(1) Linux scheduler : Linux 2.4 :: Hemi : Dodge's old standard-issue V8.
The Hemi is a design that has all kinds of horsepower compared to a regular V8 and tends to meet emission standards and stuff. So you really do get more power with a Hemi. Therefore your comparison is quite valid. :) And your conclusion of "don't care" is still just as valid. You just missed the "act on conclusion part", which is to go away.
Yay for Ars (Score:4, Interesting)
Re:Yay for Ars (Score:2)
Interesting (Score:5, Informative)
A question if anyone can answer (Score:3, Insightful)
Re:A question if anyone can answer (Score:2, Informative)
There are 140 possible priorities. Per processor. Each priority has its own pair of queues -- runnable-and-not-yet-run (active), and runnable-but-waiting-for-more-time (expired). (If a process isn't runnable, it doesn't even appear in the run queue; already an improvement over 2.4). In the 2.4 scheduler, the system task list WAS the runqueue list, and you had to inspect i
Re:A question if anyone can answer (Score:2)
Ars got it slightly wrong about O(1)... (Score:5, Informative)
Re:Ars got it slightly wrong about O(1)... (Score:2, Insightful)
Time_To_Switch_n_proces(p) < k where p > np
that is not exactly the same as your definition.
For computing time.. (Score:2)
Personally, I'd think that the difference between bounded and constant time disappeared in the "dumbing down". Once you start talking about "being bounded in the limit by a constant", most people won
Re:For computing time.. (Score:2)
Umm, how about the O(1) scheduler presented in the article. When the queues are empty, there is the extra time of switching them. So if
n is the time to pick a process from a nonempty queue
and
p is the time to switch the queues
Then time n + p is the bound, but frequently the scheduler takes only time n.
Re:For computing time.. (Score:2)
How is this different from the BSD/VMS schedulers? (Score:2, Interesting)
Linus Didn't steal it from SCO. (Score:3, Funny)
Technically, this is not O(1) (Score:2, Interesting)
When it comes time to select a process, the scheduler finds the highest priority level containing available processes. It then selects a process from the desired priority level. Because the number of priority levels is fixed, the scheduler always takes a constant amount of time.
Yes, but it does need to inspect all the processes within a priority level, right??
As the total number of processes grows, the average number of processes within a priority level also grows
Re:Well... (Score:2)
Note though that there's an advantage to the linux scheduler method over a priority queue: although both can access the (joint) highest priority process in O(1) time, it generally takes O(ln n) time to insert/delete a process in a priority queue, whereas the linux scheduler technique can do this in O(1) time as well.
UNIX catches up with mainframes (Score:5, Informative)
Mainframe schedulers have been O(1) for a long time. The one for UNIVAC 1108 EXEC 8 was O(1) in 1967. (And you can still buy a Unisys ClearPath server with that code in it.)
The traditional implementation is a set of FIFO queues, one for each priority. For systems with only a few priorities, that's just fine. As the number of different priorities increases, you spend too much time checking the empty queues, so there are tree-like structures to get around that problem.
Scheduling today is more complex because of cache issues. If you context-switch too much, you thrash in the CPU caches. On the other hand, everybody has so much memory today that paging is mostly unnecessary.
Sounds like an RPG (Score:5, Funny)
Sounds like the rules to an AD&D adventure.
Re:Sounds like an RPG (Score:2)
A real explanation (Score:2)
Preemption and disk requests - educate me (Score:5, Insightful)
The benefits of preemption are great enough that the kernel developers decided to make the kernel itself preemptible. This allows a kernel task such as disk I/O to be preempted, for instance, by a keyboard event. This allows the system to be more responsive to the user's demands. The 2.6 kernel is able to efficiently manage its own tasks in addition to user processes.
One of the most time-wasting, counterproductive things for me on Windows, and to a lesser extent on Linux, is that background tasks can essentially take over the computer because of disk thrashing. Sometimes (on Windows XP) I've seen it literally take minutes to bring a window to the front and repaint it while the disk is thrashing doing god-knows-what in the background. It is impossible to get any serious work done when that is happening. Setting my process to highest priority (on Windows) doesn't seem to help when disk thrashing is involved. I can understand how it might take a few seconds to swap another process out and bring mine in, but not minutes.
Now I don't know exactly what's happening behind the scenes, but as soon as I click on a window to bring it in focus, I want everything on the computer to be devoted to doing just that until its done, regardless of what disk blocks from other processes are in the queue. My disk block requests should go in front of all of the others unconditionally. In other words I should see essentially no more latency than if the computer were completely idle in the background, other than just the minimum delay required to bring it back into memory if it's swapped out. For all I care just completely freeze all other processes on the computer until my request is finished, or at least give me the option to specify that it be done that way. My productivity should override everything else on my own desktop/workstation computer.
So, educate me about what I may be overlooking here, and will 2.6 help this? I mean for Linux of course, not Windows, which may be a lost cause (:
Re:Preemption and disk requests - educate me (Score:2)
Re:Preemption and disk requests - educate me (Score:3, Informative)
The application wasn't being used and so its memory was swapped to disk, now that memory is needed again so it needs to be brought back. But before it is brought back some space needs to be cleared which means writing some existing data from memory to disk (which is a slow process).
The solution is simply to have enough memory installed for your working set of applications (which may mean installing more memory, using less applicatio
Re:Preemption and disk requests - educate me (Score:2)
Re:Preemption and disk requests - educate me (Score:5, Informative)
This usually happens when a process is in a loop, and repeatedly uses the same memory blocks, but can't keep them all in physical memory. Thus, it has to replace the pages every time it accesses/modifies them.
Even if a process is at a low priority... if it comes on the CPU, it has the potential to thrash memory and slow the works down.
If an application is thrashing, then you might be able to get away with scheduling it not to hit the disk. If the OS is thrashing, you are SOL.
Re:Preemption and disk requests - educate me (Score:2)
Exactly, this is the problem. I want the ability the force all (non-critical, non-realtime) processes not associated with whatever window is currently in focus to freeze completely, until the process associated with the window in focus is idle, waiting for neither CPU nor disk. Or until the user brings another window into focus, in which case this behavior would immediately shift
Re:Preemption and disk requests - educate me (Score:5, Informative)
In your scheme, the low priority process will never get a chance to run, because the high priority process is running all the time. So the high priority process keeps running forever doing nothing useful.
So if your email client is checking mail in the background, its connection will time out, file sharing will completely die, background printing fails due to communication time-outs
Re:Preemption and disk requests - educate me (Score:2)
These kind of things were possible in older systems (e.g. Mac OS 9 and below froze every other application while you were dragging a window around), but modern operating systems cannot work that way anymore.
It is too bad they weren't designed differently, so that they could work this way. With only a few realtime exceptions in a desktop machine - MP3 players, CDROM burners (which I think are flawed this way; maybe in the future RAM will be so cheap the necessary minimum data can be pre-buffered locally
Re:Preemption and disk requests - educate me (Score:2)
Your proposal appears badly
Re:Preemption and disk requests - educate me (Score:2, Interesting)
Especially in Windows, it seems like if a process is hammering the disk in the background, other processes can be starved from accessing the disk. Windows seems especially braindead at times when multiple processes fight over the disk, and the disk is
Now equivalent or better than BeOS for latency? (Score:2)
Is the new kernel going to reach/exceed this level of performance for the desktop? This would be a big thing for the corpo
Re:Now equivalent or better than BeOS for latency? (Score:2)
And if you think Linux is bad when it's thrashing, try running BeOS with insufficient memory. You'll wish you were back in Windows 3.11 on a 486/25.
This is not a troll, but please explain... (Score:2)
Has
Someone explain the 2.4 scheduler, then? (Score:2)
What did it used to do?
Good programmers should know their Big-O... (Score:3, Interesting)
Although the article was about scheduling, I would have liked to have seen some more text about Big-O notation and why it is important to programmers. To me, any good programmer should have a good concept of Big-O notation, how it relates to traversing data structures (such as trees), et cetera.
The article did not make a point as to why good schedulers use priority queues. Which is simple, really. All priority queues are O(1) for getting the next item in the queue. It has nothing to do with scheduling, but it is the nature of the priority queue itself... as a data structure.
I would have liked to have seen the article in its whole form. I wonder if it went into more depth about Big-O, et cetera.
explorer.exe == bad design outside of the kernel (Score:5, Informative)
Unfortunately, even the best kernel process scheduler in the known universe would not fix this design flaw in Microsoft Windows, because what you're seeing is not a thread/process scheduling problem.
As you correctly observed, many Windows programs are forced to make "shell calls" -- which doesn't mean the same thing it does in Unix, where you fork a
The reason it bogs down when it's busy is because it is waiting on a single event queue. Mounting/unmounting of media, network lag, other processes sending/receiving messages, etc. all give EXPLORER.EXE more events to wait on. That's why it's a bottleneck, even when there's plenty of CPU to go around.
It's an awful design, and it's one of those things that's fundamentally broken about Windows.
Re:Not a very INSIGHTFUL article. (Score:2)
Big-O notation is not intuitive.
Linux.ars is a generalized Linux publication.
Because Big-O notation is unintuitive, and indeed because the function of a scheduler is not known to most, Ars made an effort to explain it. If you already knew the information it provided, good for you! At least you have intelligence in some areas of your existence.
Because Linux.ars is a generalized publication, they also revi
I use Mac and I grok O(1) schedules, and kernels! (Score:2)
Re:English 101 (Score:3, Funny)
Re:Deep algorithm analysis? (Score:2, Informative)
To achieve O(1) you usually have to do some tricks that will get you quite a large constant.
I don't think that is necessarily true. You can (as they are doing here) trade memory use for speed by enumerating a bucket for every possible option and having a method to decide which bucket to look in. The memory requirements change with the number of options, but the speed of the decision stays at a (low-constant) O(1).
In general, it's a matter of building clever, info-rich lookup keys and organizing the data in
Re:Deep algorithm analysis? (Score:2)
So, even with a damn good O(log n) scheduler, an O(1) scheduler can afford to have a constant factor 7 times worse than the O(log n) one. So I don't think this is much of an issue here, since n is usually pretty large.
Of course, for any desktop application, the difference of a factor of 7 is something like 1e-7 versus 7e-7 load. I would imagine this is only important on a big, busy MP box.
Horse Costume (Score:2)
That sounds like they are filling parts in a large horse costume.
Re:Trade offs? (Score:2)
The Linux O(1) scheduler does this. It implements a queue for each of 140 different priority levels, and then uses an O(1) technique to find the highest-priority queue that still contains a process waiting
Re:What was the O value od the 2.4 scheduler? (Score:3, Insightful)
Without explaining the O() of the 2.4 scheduler, what good is an article explaining that 2.6 has an O(1) one?