Linux Software

Linux 2.2 and 2.4 VM Systems Compared

Derek Glidden writes "I got sick of trying to figure out from other people's reports whether or not the 2.4 kernel VM system was broken or not, so I decided to run my own tests, write them up and post them online. The short conclusion is that the 2.4 VM rocks when compared with 2.2, but there's more to it than just that."
  • by b-side.org ( 533194 ) <bside@b-side.oTOKYOrg minus city> on Friday November 02, 2001 @03:06PM (#2513304) Homepage
    it goes like this -

    the 2.2 VM 'feels' better for single-user use (which I disagree with) but falls down under 'heavy' load (which, since I've pushed 2.2 to load averages above 250, I also disagree with).

    But anyway, that's what he's saying. I found 2.4 to be much nicer in the one userland task that frequently shows off the VM: mp3 decoding under load. 2.4 never, ever skips; 2.2, with or without ESD, skipped frequently.

    YMMGV.
  • Re:Which 2.4 VM ???? (Score:5, Informative)

    by Alan Cox ( 27532 ) on Friday November 02, 2001 @03:06PM (#2513305) Homepage
    As of 2.4.14pre the Andrea/Marcelo VM is definitely doing better on most workloads. Where the Riel one has advantages, the right way forward is going to be to build those onto the Andrea VM.

    The 2.4.14pre VM also seems to be taking the harder brutalisation test sets pretty well - something the Andrea VM didn't do in 2.4.10-13.
  • by Azog ( 20907 ) on Friday November 02, 2001 @03:08PM (#2513313) Homepage
    If anyone out there has been having problems with the 2.4 VM (and there have been some problems), you should give 2.4.14-pre7 a try. Things have been moving fast on this front for a while now, but Linus thinks it's pretty much there now.

    In his words, "In fact, I'd _really_ like to know of any VM loads that show bad behaviour. If you have a pet peeve about the VM, now is the time to speak up. Because otherwise I think I'm done."

    This is an experimental patch to 2.4.13, and you shouldn't run it on an important machine, but the VM by all accounts is much improved.

    Even Alan Cox (who has been maintaining the older Rik Van Riel version of the VM in his -ac patches) agrees that the new VM is faster and simpler, and he plans to switch to it as soon as it is reliable enough to pass his stress testing. (which should be really soon, it seems.)

    (Yes, I spend an hour a day reading the kernel mailing list.)
  • Re:BSoD (Score:1, Informative)

    by Anonymous Coward on Friday November 02, 2001 @03:09PM (#2513319)
    BSODs happen right away. They don't put you through that kind of torture. Also, they give you a nice stack trace for informational purposes.

    If that had been a core dump, it probably could have been helpful, but since they don't even have a kernel debugger for Linux yet, these kinds of occurrences are just brushed off as "random occurrences".
  • by haruharaharu ( 443975 ) on Friday November 02, 2001 @03:19PM (#2513375) Homepage

    Heck, know what would be the best? A pluggable kernel system, where anyone could switch VMs.

    That's been suggested for Linux before, and the general feeling was that it would be so complicated (the memory manager changes touched most files in the kernel) and so hard to test that it would basically be a nightmare.

  • by Anonymous Coward on Friday November 02, 2001 @03:20PM (#2513376)
    I don't think most people really thought the 2.4 VM was a worse performer than 2.2, especially under normal load, and in recent kernels even under high loads.

    However, one thing that was not evaluated in this writeup at all was stability, especially on big boxes (as in SMP and >1GB) and heavy workloads. This is where neither VM really seems to be able to hang in there.

    I admin seven such boxes that all have 2 or 4 CPUs and 2 or 4 GB of RAM, and they get hit pretty hard during heavy jobs. These things run 100% rock solid with 2.2.19; I've achieved uptimes of greater than six months on all boxes simultaneously. Basically, reboots are for kernel upgrades, nothing more.

    With 2.4.x, I'm happy to get a few weeks, and sometimes much less. The machine practically always dies during heavy VM load. It has kept me from upgrading to 2.4 for several months now.

    The real kicker is that when 2.4 is running correctly, my jobs run as much as 35-50% faster than with 2.2, especially on the 4-CPU server, so I really wish the VM were stable enough to allow me to move.

    Anyway, I'm sure it will get there sometime.

    BTW, before people write about how their 2.4 boxes have super long uptimes, let me say that I too have some 2.4-based systems that have been up since 2.4.2 was released, but those machines are either single-CPU, or SMP but with 512MB of RAM. 2.4 seems to run quite well in that case.
  • by Anonymous Coward on Friday November 02, 2001 @03:24PM (#2513394)
    Making a VM subsystem is easy enough. Making a very high-performance one that works well in as many cases as possible is not so easy - most OSes have a myriad of tweakable parameters (including Linux /proc/ files and mysterious NT registry keys, for example) to handle all the different special cases - but it's still a bit of a black art, since bizarre things like which sectors on the HD hold the swapped-out memory can make a big difference (personally I have a separate swap hard drive, but that's because I'm running nasty finite element analysis problems).

    Also, the VM underlies a host of other bits of the OS, and as they change, so the VM has to change to accommodate them - for example, Linux's zero-copy Unix domain sockets, or Linux's VFS layer.

    In short, no, VM design is not 100% solved.

  • by Carnage4Life ( 106069 ) on Friday November 02, 2001 @03:37PM (#2513449) Homepage Journal
    so why does linux have 1 VM? it seems that 2 of them exist, and the BSD's have more... guys, "gimme a hunk" and "page fault" aren't exactly rocket science anymore, particularly with hardware support... the fact that there is room to make a big deal out of this is the problem, not the VMs.

    If Linux were a microkernel I'm pretty sure this would be possible, but from what I've seen of the Linux kernel code and from some discussions on the linux kernel mailing list [zork.net], the virtual memory code is too entrenched in various parts of the kernel to be #ifdef'ed around with any sort of ease.
  • Re:BSoD (Score:2, Informative)

    by sbrown123 ( 229895 ) on Friday November 02, 2001 @03:41PM (#2513470) Homepage
    Actually, it's probably something very simple: EnergySaver. The computer went into sleep mode, which I have seen lock Linux up before.
  • We know (Score:3, Informative)

    by wiredog ( 43288 ) on Friday November 02, 2001 @03:42PM (#2513478) Journal
    The link is in this article. [slashdot.org]
  • 2.4.13 VM (Score:5, Informative)

    by sfe_software ( 220870 ) on Friday November 02, 2001 @03:57PM (#2513567) Homepage
    I can't speak for the differences between the two VM layers in the most recent versions of each, but I went from 2.4.7-2 (RH Roswell Beta stock kernel) to 2.4.13 (+ext3 patch), and I've noticed a serious improvement.

    My notebook has 192 megs of RAM and a 256 meg swap partition. I run Mozilla constantly (which seems to constantly grow in memory usage as the days pass). Prior to the upgrade (2.4.7-2, recompiled without the debugging options RH had on by default), swapping was ungodly slow. Switching between Mozilla and an xterm would literally take a few seconds waiting for the window to draw on the screen. Even switching between tabs in Moz was slow.

    Since going to 2.4.13 with ext3 patch, I've noticed a serious improvement in this behavior. Under the same conditions (between 20 and 50 megs swap usage), switching between windows is quite fast. I don't know if it's faster at swapping per se, or if it's just swapping different things (eg, more intelligently deciding what to swap out), but for me it "seems" much faster for day-to-day usage.

    I haven't yet tested in a server environment... but for desktop usage, 2.4.13 rocks. Can't wait for 2.4.14 to see if any noticeable improvements are added...

    Though it will be a non-issue once I add another 128 megs to this machine, it's nice to see such great VM performance under (relatively) low memory conditions.
  • by Arrgh ( 9406 ) on Friday November 02, 2001 @04:29PM (#2513789) Homepage Journal
    <niggle>Actually, 2^64 bytes is 1.84467E+19 bytes, or approx. 18.4 exabytes</niggle>
  • Re:What a load of BS (Score:3, Informative)

    by sfe_software ( 220870 ) on Friday November 02, 2001 @04:29PM (#2513792) Homepage
    I didn't want to get too detailed, but I always have quite a lot of things running. 100 Moz windows? Not quite, but I typically keep anywhere between 5 and 20 open at a given time...

    And, anyone who uses Mozilla constantly knows that it doesn't seem to free memory, ever... it grows and grows. The "tabs" feature is great (so technically it's just the one "window" open) but unfortunately closing a tab does not free any memory. I rarely restart Moz because I'd have to then re-open all the pages I had going, etc...

    Trust me, run Mozilla for a few days straight, you'll see quite a bit of memory usage (it's at 80 megs right now).

    Then there's LICQ (11 megs for such a tiny lil program), KMail (10 megs), 5 terminals, BitchX, Nautilus (using 12 megs, I suppose just showing the desktop)... the list goes on.

    So yes, I'm typically using the full 192 megs plus a bit of swap after running for a while, and the new kernel has, IMHO, improved performance under these conditions.
  • by Mr. Fred Smoothie ( 302446 ) on Friday November 02, 2001 @04:56PM (#2513995)
    Actually, the real debate on LKML was not whether something drastic needed to be done about the poor performance of the early 2.4 VMs, but *when* that should occur.

    Basically, the people who sided with Linus/Andrea were of the opinion that "things are so bad now [which was between 2.4.5 and 2.4.9] that a complete replacement of the VM even in a 'stable' kernel series is justified", and those who sided with Alan Cox/Ben La Haise/Rik van Riel thought that the existing VM code could be massaged and tweaked enough so that the performance would become acceptable and huge changes could be postponed until 2.5 opened.

    This was complicated by the fact that between 2.4.5 and 2.4.9, the -ac series had accepted patches from Rik which weren't applied in the Linus branch and did in fact seem to be fairly successful in increasing performance through much less intrusive code changes. This was one of the main complaints of the Alan/Ben/Rik contingent: that the problems had already been largely resolved in the -ac tree, and that that approach should have been applied in Linus' tree before jumping to a complete rewrite.

    At this point, a consensus seems to be forming that the Andrea VM is *much* simpler, the changes haven't had much adverse effect on other subsystems, and the performance is just as good or better than the VM in the -ac series.

    The question of whether or not it should have waited until 2.5 is one that will probably never be answered to everyone's satisfaction, but at least will soon be academic.

  • by Anonymous Coward on Friday November 02, 2001 @05:00PM (#2514023)
    Linux Kernel v2.4.13 High Memory Support

    CONFIG_NOHIGHMEM:

    Linux can use up to 64 Gigabytes of physical memory on x86 systems. However, the address space of 32-bit x86 processors is only 4 Gigabytes large. That means that, if you have a large amount of physical memory, not all of it can be permanently mapped by the kernel. The physical memory that's not permanently mapped is called high memory. If more than 4 Gigabytes is used then answer 64GB here. This selection turns Intel PAE (Physical Address Extension) mode on. PAE implements 3-level paging on IA32 processors. PAE is fully supported by Linux, PAE mode is implemented on all recent Intel processors (Pentium Pro and better). NOTE: If you say 64GB here, then the kernel will not boot on CPUs that don't support PAE.
  • Not that simple (Score:2, Informative)

    by Mr. Fred Smoothie ( 302446 ) on Friday November 02, 2001 @05:12PM (#2514101)
    That's interesting. I'm operating on my simplistic, naive notion that a VM is "the hard drive, where you dump pages when you're short on RAM or they get really stale". Thus, in my simple little world, the VM subsystem is affected the most by tweaks to the scheduler that swaps out pages. Is that where the major differences between the two VM schemes lie?

    Actually, the VM is "the subsystem which keeps you from getting short on RAM, by dumping pages to the hard drive when they get stale, while not swapping unnecessarily because of the big impact that disk I/O has on system performance."
  • tcsh time variable (Score:4, Informative)

    by brer_rabbit ( 195413 ) on Friday November 02, 2001 @05:37PM (#2514209) Journal
    I don't know about other shells, but tcsh has some features that provide other useless statistics. You can set a variable called "time" that can provide additional information. From the tcsh man page [edited]:

    time: If set to a number, then the time builtin (q.v.) executes automatically after each command which takes more than that many CPU seconds. If there is a second word, it is used as a format string for the output of the time builtin. The following sequences may be used in the format string:

    %U The time the process spent in user mode in cpu seconds.
    %S The time the process spent in kernel mode in cpu seconds.
    %E The elapsed (wall clock) time in seconds.
    %P The CPU percentage computed as (%U + %S) / %E.
    %W Number of times the process was swapped.
    %X The average amount in (shared) text space used in Kbytes.
    %D The average amount in (unshared) data/stack space used in Kbytes.
    %K The total space used (%X + %D) in Kbytes.
    %M The maximum memory the process had in use at any time in Kbytes.
    %F The number of major page faults (page needed to be brought from disk).
    %R The number of minor page faults.

    In particular, if you could measure the number of swaps/page faults in the different kernels, it would be pretty useful. I've got $time set to:
    # have time report verbose useless statistics
    set time= ( 30 "%Uuser %Skernel %Eelapsed %Wswap %Xtxt %Ddata %Ktotal %Mmax %Fmajpf %Rminpf %I+%Oio" )
  • by Azog ( 20907 ) on Friday November 02, 2001 @06:13PM (#2514375) Homepage
    No, it isn't a "solved" problem. And the Linux VM subsystem is a surprisingly good one.

    Remember that benchmarking Linux against other OS'es back in the 2.2 kernel days showed that Linux was at least in the same ballpark as the best BSD and Microsoft OS'es, and the 2.4 kernels are even faster.

    Of course there are lots of well known algorithms and approaches - take an advanced computer science operating systems course to find out - but it's a really difficult problem and it changes all the time, because hardware and user level software changes all the time. It's a combination of an art and a science. Many, many things have to be balanced against each other, hopefully using self-tuning systems.

    An excellent VM for running one workload (say, a database) might suck horribly when running a different workload (like a huge multiprocess scientific computation).

    Here are some of the things that make VM complicated. Consider how other operating systems deal with these:

    - Overcommit. Many applications allocate far more memory than they ever use. People expect this to work. So almost all VMs allow programs to allocate much more memory than is actually available, even when including swap (a minimal sketch of this follows the list). That makes the next point more tricky:

    - Out Of Memory. What should happen when a system runs out of memory? How do you detect when you are out of memory? If you are going to start killing processes when the system runs out of memory, what process should be chosen to die?

    - Multiprocessors. Lists of memory pages need to be accessed safely by multiple processors at the same time. And this needs to happen quickly, even on systems with 64 or more processors.

    - Portability. The Linux VM runs on everything from 486'es with 8 MB of RAM and 100 MB of swap to 16-processor, 64 GB RAM RISC systems to IBM 390 mainframes. These systems tend to have different hardware support - the details of the hardware TLB's, MMUs, CPU cache layout, CPU cache coherency... it's amazing how portable Linux is.

    - Interaction of the VM with file systems. File systems use a lot of virtual memory, for buffering and caching. These parts of the system need to communicate with each other and work together well to maximize performance. Linux supports a lot of filesystems and this gets complicated. For example, you may want to discard buffered file data while keeping metadata in memory when available memory is low.

    - Swap. When should a system start swapping out? How hard should it try to swap out? What algorithms should be used to determine what pages should be swapped out? When swapping in, how much read-ahead should you do? Read ahead on swap-in might speed things up, but not if you are short on memory and end up discarding other pages...

    - Accounting for memory usage is complicated by (among other things) memory-mapped files, memory shared between multiple processes, memory being used as buffers, and memory "locked" in to be non-swappable.

    - Keeping track of the state of each page of memory - is it dirty (modified)? Anonymous? Buffer? Discardable? Zeroed out for reuse? Shared? Locked? Some combination of the above?

    - Even worse: memory zones. On some multiprocessor systems, each processor may be able to access all the memory, but some (local RAM) may be reachable faster than others. The VM system should keep track of this and try to use faster memory when possible - but how do you balance that when the fast local RAM is getting full?

    - Interactions with networking and other drivers. Sometimes drivers need to allocate memory to do what they do. This can get very tricky when the system is low on memory. What if you need to allocate memory in a filesystem driver in order to write out data to the disk to make space because you are running out of memory? Meanwhile network packets are arriving and you need to allocate memory to store them. Sometimes hardware devices need to have multiple contiguous pages allocated for doing DMA, but if space is tight it can be very hard to find contiguous blocks of free memory.
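
    To make the overcommit point at the top of the list concrete, here is a minimal user-space sketch in plain C (nothing kernel-specific; the 1 GB figure is arbitrary, and the exact behaviour depends on the kernel's overcommit policy). The allocation call typically succeeds immediately; physical pages only get committed as they are touched.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Ask for a large chunk of address space. With overcommit, malloc()
           usually succeeds: the kernel reserves address space, not physical
           pages. */
        size_t huge = (size_t)1 << 30;   /* 1 GB */
        char *p = malloc(huge);
        if (p == NULL) {
            puts("allocation refused up front");
            return 1;
        }
        puts("allocation succeeded; few physical pages are in use yet");

        /* Touching the pages is what forces the kernel to find physical
           memory (or swap) for them, one page fault at a time. This is
           where an overcommitted system can run out of memory long after
           malloc() said yes. */
        memset(p, 0, huge);
        puts("pages are now really allocated");

        free(p);
        return 0;
    }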

    I'm not an expert on VM's either, but I've taken courses on operating system design and I read the kernel mailing list --- it is a hard, hard problem to make a fast, reliable, portable, feature-rich system.

  • Re:Not that simple (Score:2, Informative)

    by slamb ( 119285 ) on Friday November 02, 2001 @06:35PM (#2514467) Homepage

    Actually, the VM is "the subsystem which keeps you from getting short on RAM, by dumping pages to the hard drive when they get stale, while not swapping unnecessarily because of the big impact that disk I/O has on system performance."

    It's not that simple, either ;)

    It does everything you said but also tries to minimize disk I/O by caching parts of the disk in memory. It has to maintain a balance between maximizing the cache and minimizing swap usage. I believe recently they've also talked about doing quite a bit more lookahead on the cache...if you're accessing one disk block/page/whatever, grabbing subsequent ones as well. (I'm not sure if this is the next block of the physical disk or the file, but that's not the point.) That would be an additional complication.

  • by Paul Jakma ( 2677 ) on Friday November 02, 2001 @06:45PM (#2514514) Homepage Journal
    I'll have a stab at this one... not all the details might be correct, but it should be close enough to get the idea..

    VM is virtual memory. Really, in this context it should be VMM, ie Virtual Memory Management.

    VM refers to the fact that on modern processors memory addresses used by processes do not refer to the physical location. Rather the address is a virtual address, and the processor translates it by some means to the physical address.

    Eg, if a process accesses memory at 0xfe12a201, the physical memory accessed might be 0x0000c445. The former address is a 'virtual' address, the latter is physical.

    Typically:

    Processors work with memory in discrete chunks called pages. A page might be 4KB of memory (eg on Intel), or some other value. Each page has a number, a page frame number (PFN), that identifies it. The part of the processor that deals with handling virtual memory is the Memory Management Unit (MMU). The MMU and operating system together maintain a set of tables that describe which pages correspond to which virtual memory addresses. These tables are known as "Page Tables" or "PTs"; each entry in a page table is a "Page Table Entry" or "PTE". A page table is usually held within one or more physical pages, and each process has its own set of page tables. The MMU interprets part of the virtual address as an index:offset into the page tables. By looking up the PTE selected by the index, the MMU can determine which physical frame, and hence which physical memory address, corresponds to a virtual address (and more besides).

    eg:

    The process accesses memory at 0xfe12a201.

    The MMU interprets 0xfe12a as the index and retrieves entry 0xfe12a (the PTE) from the page table, which tells it which PFN the virtual address refers to. It then uses 0x201 as the offset into that page and fetches/operates on the memory located there.

    ie:

    - virtual address -> split into index and offset.
    - index gives you the PTE.
    - the PTE holds the frame number of the physical page (and some other stuff)
    - the offset is the location within the frame
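
    To pin down the "index gives you the PTE" step, here is a rough C sketch of a single-level lookup like the one described above (real MMUs do this in hardware, and the structures here are simplified illustrations, not Linux's actual page table types):

    #include <stdint.h>

    #define PAGE_SHIFT   12                    /* 4KB pages */
    #define PAGE_SIZE    (1u << PAGE_SHIFT)
    #define OFFSET_MASK  (PAGE_SIZE - 1)

    /* A simplified page table entry: frame number plus a couple of the
       status bits mentioned below. */
    typedef struct {
        uint32_t pfn;         /* physical frame number */
        unsigned present:1;   /* is the page in physical memory? */
        unsigned writable:1;  /* may the page be written? */
    } pte_t;

    /* Translate a virtual address via a flat page table, the way the
       "index : offset" example above describes it. */
    uint32_t translate(const pte_t *page_table, uint32_t vaddr)
    {
        uint32_t index  = vaddr >> PAGE_SHIFT;  /* which PTE */
        uint32_t offset = vaddr & OFFSET_MASK;  /* where in the frame */
        const pte_t *pte = &page_table[index];

        if (!pte->present) {
            /* In hardware this raises a page fault; the OS's VMM then
               decides whether to swap the page in or kill the process
               (SEGV), as described below. */
            return 0;  /* placeholder for "fault" */
        }
        return (pte->pfn << PAGE_SHIFT) | offset;
    }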

    So every time (well, nearly every time) a process accesses memory, the MMU translates the virtual address in the above way. To speed things up, the MMU maintains a cache of translations in a unit known as the Translation Lookaside Buffer (TLB), which holds recent translations. If the MMU finds a translation there, it doesn't need to do the full lookup process.

    So where does the operating system, or rather its VMM, come in? Well, an MMU might find, when it goes to do a lookup, that no valid PT or PTE exists. This might indicate the process is trying to access memory that it hasn't been allocated; the MMU would then raise a fault and switch control to the operating system's VMM code, which would probably decide to end the process with a memory access violation, eg SEGV under Unix, and perhaps dump the process's memory to a file to aid debugging (a core dump).

    Also, the PTE holds more than just the frame number. There are various extra bits which the MMU and operating system can use to indicate the status of a page.

    Eg, one bit may indicate whether the page is valid or not. An OS's VMM could use this to make sure that the MMU faults control to the VMM the next time the page is accessed, perhaps to allow the VMM to read the page from disk back into memory (ie swap it in).

    Other bits may indicate permissions, eg whether a page may be read or written. This can facilitate shared libraries by allowing an OS to map the same physical pages into the page tables of several different processes. It also facilitates copy-on-write, for optimising fork().

    The CPU's MMU may maintain an 'accessed' bit and a 'written' bit to indicate whether a page has been accessed or written to since the last time the bit was cleared, so that the operating system can do bookkeeping and make informed decisions about memory activity.

    etc.. etc..

    The VMM's job, beyond interacting closely with the MMU, is to juggle which pages are kept in memory and which are swapped out to disk. If the OS does paged buffering, the VMM may also need to decide which buffer pages need to be written to disk (or which pages to read into buffers). There are many ways it could do this, eg by maintaining lists of how and how often pages are used and making decisions about what to write out or read in based on that.
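
    As an illustration of the kind of bookkeeping involved, here is a minimal sketch of the classic "clock" (second-chance) approximation of LRU, not the algorithm any particular 2.4 VM actually uses: pages carry a referenced bit that the MMU sets, and the VMM sweeps around clearing bits until it finds an unreferenced page to evict.

    #include <stddef.h>

    struct page {
        int referenced;   /* set by the MMU when the page is touched */
        int dirty;        /* set when the page has been written */
    };

    /* Pick a victim page with the clock (second-chance) algorithm.
       Pages whose referenced bit is set get a second chance: the bit is
       cleared and the hand moves on. */
    size_t clock_pick_victim(struct page *pages, size_t npages, size_t *hand)
    {
        for (;;) {
            struct page *p = &pages[*hand];
            size_t victim = *hand;

            *hand = (*hand + 1) % npages;   /* advance the clock hand */

            if (!p->referenced)
                return victim;              /* cold page: evict it */

            p->referenced = 0;              /* give it a second chance */
        }
    }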

    It is in these intricate details that the various 2.4 VMMs differ.

    NB: the details above are very architecture specific. Different processors will have different page table layouts, different PTE status bits, etc. Eg, on Intel (classic 32-bit paging) the virtual address is actually split as

    directory index : page table index : offset
    10 bits : 10 bits : 12 bits

    The directory index is an index into a directory of page tables, which saves on the amount of memory you need to hold page tables for a sparsely used address space. The upshot is that the fine details of how paging works are processor specific.
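
    For the Intel case, a hypothetical sketch of that split, using the example address from earlier:

    #include <stdint.h>

    /* Classic (non-PAE) x86 paging splits a 32-bit virtual address into a
       10-bit page directory index, a 10-bit page table index and a 12-bit
       offset within the 4KB page. For 0xfe12a201 that gives directory
       index 0x3f8, table index 0x12a and offset 0x201. */
    static inline uint32_t pde_index(uint32_t vaddr)   { return vaddr >> 22; }
    static inline uint32_t pte_index(uint32_t vaddr)   { return (vaddr >> 12) & 0x3ff; }
    static inline uint32_t page_offset(uint32_t vaddr) { return vaddr & 0xfff; }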

    More NB's:

    Page tables are process specific; switching between processes usually requires loading the (set of) page tables of the new process. It also requires clearing out/invalidating the existing TLB entries. This all takes time.

    Intel have an extension to their paged addressing, PAE, which allows for 36-bit physical addresses. It does this by adding a third level of translation (a small page directory pointer table) and widening each PTE to 64 bits.

    Finally... there is plenty of reference material on the web, so research it for yourself, because I'm probably wrong in a lot of places. Ah well... :)
  • by Jordy ( 440 ) <{jordan} {at} {snocap.com}> on Friday November 02, 2001 @06:57PM (#2514575) Homepage
    VM load and system load are two very different things. You can have 250 processes blocked on a floppy disk read and run your load to 250; but try having a bunch of processes and the kernel compete for the last block of memory, especially networked apps where your network card driver all of a sudden needs contiguous blocks of memory in a heavily fragmented system, and watch the difference between 2.2 and 2.4.

    One more thing to note is the VM != the scheduler. The scheduler is what hands out CPU time slices to programs and ensures your mp3 decoder doesn't skip if it's been using a lot of CPU for some period of time. The VM is what manages memory allocations and decides what to page out and page in to and from disk.

    Really, there should be very little difference between VMs unless you are in a low memory condition. Now there is some difference when you consider cached disk pages, but if you are just running a mp3 decoder, I don't think you are constantly re-executing it over and over and even if you were, as long as you aren't in a low memory situation, both VMs should do basically the same thing.
  • by tuxlove ( 316502 ) on Friday November 02, 2001 @07:09PM (#2514633)
    He notes in his commentary that the 2.2 kernel "felt faster" or something to that effect, while still performing much worse in actual numbers. This is probably the manifestation of a well-known effect in the world of performance: responsiveness and throughput are often mutually exclusive.

    In other words, given fixed parameters, it's usually not the case that you can improve both responsiveness and throughput at once. If you don't change memory, CPU speed or I/O bandwidth, and your code is devoid of excess baggage which effectively reduces one of the above, it is almost a given that the two are a tradeoff. I've personally experienced this numerous times in my own performance work, and have read research by others that corroborates it.

    Here are some really interesting fundamental examples. One company I worked at lived and died by disk performance benchmarks, in particular the Neal Nelson benchmark. This test ran multiple concurrent processes, each of which read/wrote its own file. The files were prebuilt to be as contiguous on disk as possible so that sequential I/O operations wouldn't cause disk seeks. By the nature of the test, though, seeking would happen a lot because you had N processes each reading/writing a different contiguous file. So, you lost the benefit of the contiguousness. Until, that is, we came up with a way of scheduling disk I/Os which, given a choice of many pending I/Os in a queue, favored starting I/Os which were close to where the disk head happened to be. This wasn't your father's elevator sort! The disk head would hover in one spot for extended periods, even going backwards if necessary to avoid long seeks. It was a bit more sophisticated than that, but those are the basics.
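
    For the curious, a minimal sketch of the selection idea described above (hypothetical types; the real scheduler was more sophisticated than this): given a queue of pending I/Os, pick whichever one is closest to the current head position, even if that means seeking backwards.

    #include <stdlib.h>

    struct io_request {
        long sector;   /* where on disk this request wants to go */
        /* ... buffer, length, completion callback, etc ... */
    };

    /* Shortest-seek-first: scan the pending queue and return the index of
       the request closest to the current head position. Great for
       throughput, terrible for the latency of far-away requests. */
    int pick_next_io(const struct io_request *queue, int nreq, long head_pos)
    {
        int best = -1;
        long best_dist = 0;

        for (int i = 0; i < nreq; i++) {
            long dist = labs(queue[i].sector - head_pos);
            if (best < 0 || dist < best_dist) {
                best = i;
                best_dist = dist;
            }
        }
        return best;   /* -1 if the queue is empty */
    }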

    The effect was, if a process started a series of sequential I/O operations, such as reading a file from beginning to end, no other process could get much of anything through until it was done. So what did this do to performance? Well throughput shot through the roof because disk seeks were nonexistent. The test performed beautifully, as it only measured throughput, and we consistently won the day. However, I/O latency for the processes that had to wait was extremely high, sometimes on the order of minutes.

    Needless to say, these "enhancements" were only useful for benchmarking, or perhaps for a filesystem on which the only thing running were batch processes of some kind. It would feel slow as molasses to actual human users, verging on unusable if anyone started pounding the disk. You can't wait 60 seconds for your editor to crank up a one-page file (well, okay, we didn't use MS office in those days :). On paper it was fast as hell, in practice it seemed very slow.

    One paper I read on the subject of process scheduling postulated that by increasing the max time slice of a process you could improve performance. The idea was that you would context switch less, would improve the benefits of the CPU cache, and so on. They increased the time slice to something above 5 seconds and ran some tests. Of course, the throughput improved by some nontrivial amount. Predictably, though, the system became unusable by actual human users for the same reason as in my disk test example.

    The other extreme would be absolute responsiveness, in which you spend all your time making people happy but not getting any real work done. An example of this would be "thrashing", where the kernel spends most of its time context switching and not actually running any one process for an appreciable amount of time.

    The sweet spot for the real world is somewhere in between, perhaps a little closer to the throughput side of the spectrum. It sounds like this may be the direction they've gone with the 2.4 kernel, though I'm sure they've done a lot of optimizing and rearchitecting to improve performance overall.
  • by Anonymous Coward on Friday November 02, 2001 @09:25PM (#2515083)
    Actually, that's exactly 16 EB.
