Linux Software

Debate on Linux Virtual Memory Handling 330

xturnip sent us a good piece running over at Byte about Linux's VM. Somewhat more technical than the stuff we usually see online, this one talks about different VM systems, and the egos in the kernel. It's worth a read.
  • His favorite? (Score:4, Interesting)

    by LinuxGeek8 ( 184023 ) on Tuesday October 30, 2001 @09:41AM (#2496946) Homepage
    He seems to favor the Andrea VM.
    That's fine by me, but he might want to take note of the fact that Linus often didn't accept Rik's patches, and that 2.4.9 still effectively had the VM of 2.4.5. The -ac tree was more up to date.
    So for a good comparison you'll need to compare the Linus and -ac trees.

    • Which is what he's talking about.
      • Well, my point was that if you compare 2.4.10 with 2.4.9, you're really comparing 2.4.10 and 2.4.5.

        Even though the kernel had gradually evolved from 2.4.0 to 2.4.9, it was evident that the VM design was more of a liability than an advantage.

        Point is, the kernel did not gradually evolve to 2.4.9, but only to 2.4.5.
        Rik's VM has problems, but in the current -ac tree it is doing quite well. Maybe as well as, or better than, Andrea's VM.

        Anyway, let's hope that the best VM wins, if there is a best VM.
  • by imrdkl ( 302224 ) on Tuesday October 30, 2001 @09:42AM (#2496957) Homepage Journal
    From the article:

    Nobody has yet dared to speak of a Linux source fork, but this is dangerously close to one.

    Is this truly dangerous? If so, why? Why not let the two VMs compete and let the users decide?

    Better to split than stagnate.

    • Essentially that is already going on. The Andrea VM is in Linus's tree now and Rik's VM is still in Alan Cox's tree. So by choosing the official Linus kernel or the -ac kernel you can choose which VM subsystem you would rather use.
    • Better to split than stagnate.


      True, look at the success of the "Gnome vs. KDE" split.

      • A better analogy would be to look at the success of Emacs vs. XEmacs. IIRC, that was a true fork.
      • by battjt ( 9342 ) on Tuesday October 30, 2001 @10:29AM (#2497186) Homepage
        Look at the success of EGCS and GCC. That was a successful split and merge. It led to a better GCC in the end, while supporting both stable and advanced versions of gcc in the interim.

        Joe
    • Why not let the two VMs compete and let the users decide?

      The problem is the duplication of effort and decreased manpower for each VM. Not only that, but any project that works closely with the VM has to test under twice as many conditions, and may require different code for each. Talk about a maintenance problem.

      It's certainly good to have competition to bring out the best in each system, but it would be horribly inefficient to keep it going in the long run.

      Regarding the users choosing - the users don't have the opportunity to choose only on the basis of the VM. It's not like they can apply a "VM patch" to the stock kernel to try out the other one; rather, they have to apply a fairly large -ac patch that changes a lot of unrelated things.

      • The problem is the duplication of effort and decreased manpower for each VM. Not only that, but any project that works closely with the VM has to test under twice as many conditions, and may require different code for each. Talk about a maintenance problem.

        And this would somehow not be a problem with a fork? Considering that Linux vs. *BSD already divides the pool of potentially alignable geeks, and considering that both the Linux and *BSD families continue to grow, innovate, and expand, I think the problem is overrated.

        Organizations align on common goals and pursuits, by definition. If there were two or more unalignable goals in the VM, then either a fork or an unforked competition would be in order, and would have the same issues of reduced effort and increased maintenance chores.

        Personally, as a non-kernel developer, I think the different VM issues are probably overblown at the moment, and that the best approaches will forge ahead with some significant consensus in the mid-term. Until then, it's worth the experimentation it takes to decide what the best approaches are.

      • Microsoft (Score:2, Insightful)

        by fcd ( 89027 )
        It's certainly good to have competition to bring out the best in each system, but it would be horribly inefficient to keep it going in the long run.

        Isn't that basically Microsoft's argument as to why it's OK for them to be a monopoly? That competition is not efficient in the software industry?

      • 'cause I have no intuition at all about these things:

        Roughly how much of the code in the source tree we call the kernel has to know about the virtual memory implementation? I would have thought that even the most low-level driver would interact with the VM system at the API level, so it wouldn't matter which VM implementation manages the pages: all the driver cares about is that this page is locked in physical RAM, while those pages can be moved as the VM sees fit.

        eh?
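
        (For a userspace flavor of that contract, here's a minimal C sketch using POSIX mlock() - an illustration of the API-level view described above, not what actual driver code looks like; drivers pin pages through internal kernel interfaces instead.)

          /* Minimal sketch of the "lock these pages in RAM" contract,
           * seen from userspace via POSIX mlock(). The caller asks the
           * VM to pin a range, without caring which VM is underneath. */
          #include <stdio.h>
          #include <stdlib.h>
          #include <sys/mman.h>

          int main(void)
          {
              size_t len = 16 * 4096;            /* 16 pages */
              char *buf = malloc(len);
              if (buf == NULL)
                  return 1;

              /* After this call the pages stay resident and cannot be
               * swapped out (needs root or a high RLIMIT_MEMLOCK). */
              if (mlock(buf, len) != 0) {
                  perror("mlock");
                  return 1;
              }

              /* ... work on guaranteed-resident memory ... */

              munlock(buf, len);
              free(buf);
              return 0;
          }
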
    • If they are to truly compete, then we should be allowed to choose between the Andrea VM code and the Rik VM code when we compile our beloved kernels.

      However, a kernel fork would not necessarily be a bad thing, as long as the forking doesn't break the ability to run binaries. I'd hate to have to recompile my entire system just to switch between VMs.
    • Why not let the two VMs compete and let the users decide?

      There was an interesting thread on this a while back, rooted at this comment [slashdot.org]. Unfortunately, the article's old enough that it's only available as a static page, and the oh-so-wonderful Slashdot code that generated the page seems to've done so with the comments in basically random order, so it's almost impossible to follow the thread. Maybe I'll try to recreate its original structure and put the result on my website.

      In brief, I think it's a great idea. Competition is good; let individuals and teams compete on the basis of the quality of their work, and bless the "winner" as part of the official tree. The "loser" is always free to try again in the next round. The only problem is that this all should have occurred in the 2.3.xx and/or 2.5.xx series; 2.4.xx should not be changing horses in midstream.

    • Suppose Linux did fork...

      If Linus's version is called Linux, what is Alan's version called?
  • He alludes to some FreeBSD vs. Linux benchmarks at the end of the piece. Anyone got any links?
    • The old benchmark is here [byte.com], but as the poster above noted, the new benchmark is forthcoming.

      Although it will be comparing a moving target (Linux 2.4.x) to a moving target (FreeBSD 4.x), the results will be interesting. AFAIK, there weren't any major changes (I mean like VM changes :) in FreeBSD, so comparing the old and new benchmarks would give a good indication of how much Arcangeli's VM improves things.

      • Hmm, recently OpenBSD and FreeBSD (I'm not sure about NetBSD, though) have added improved dirpref code (created by an OpenBSD developer).

        When data is written with the new algorithm, subsequent reads and writes are on average faster (being conservative). People are seeing 6x improvements for certain tasks as well!

        So while there weren't any major changes to the VM in FreeBSD, AFAIK, if the benchmark involves using any files on the disk, then it'll most likely be sped up...!

        Here's a link to the discussion on the FreeBSD-stable mailing list... [geocrawler.com]

        and another link... [geocrawler.com]

  • by rakarnik ( 180132 ) on Tuesday October 30, 2001 @09:56AM (#2497020) Homepage Journal

    Moshe Bar seems to indicate that Alan Cox is creating some kind of fork of the Linux kernel. Actually, -ac kernels are always different from Linus kernels to some extent, since they include slightly more experimental code (e.g. ext3), or code that Linus has not had a chance to review yet. This way, the experimental code gets more testing before going into official Linus kernels. You can read more about -ac kernels [kernelnewbies.org] at KernelNewbies.Org.

    As anyone following LKML knows, Alan thinks that drastic VM changes should be reserved for 2.5, and so continues to keep Rik's VM going. This actually helps quite a bit as both VMs get tested and there have been several comparative tests conducted leading to improvements in both VMs. Competition in this case is certainly helping Linux.

    Oh, and for all you fork conspiracy theorists, here's another fact: Andrea Arcangeli also puts out his own kernel releases, called -aa. I don't think any of these are considered forks; everyone understands that this way patches get more testing, and "crosstalk" between the different flavors is a given.

    Much ado about nothing, IMHO...

    -Rahul

    • by rakarnik ( 180132 ) on Tuesday October 30, 2001 @10:02AM (#2497053) Homepage Journal
      See this posting to LKML:

      Alan's talking about switching VMs in -ac kernels [indiana.edu]
    • The difference between the -aa kernel and the -ac kernel is that Alan Cox is widely recognized as the number two Linux guy, but he's promoting something completely opposed to Linus' decision.

      If anyone could start a fork, it's Alan. However, remember that forks aren't necessarily bad. And there seem to be strong arguments in favour of both VMs...
    • by bwt ( 68845 ) on Tuesday October 30, 2001 @10:51AM (#2497308)
      I don't think any of these are considered forks; everyone understands that this way patches get more testing, and "crosstalk" between the different flavors is a given.

      Well, I disagree -- they are ALL forks. Any time you create a patch you are forking. The open source development model relies on perpetual fork and merge to accomplish its development. Most projects are forked this way into a development and a stable branch. I call this a "constructive fork". The AC kernels are perpetually different, but importantly, they generally stay about the same "distance" away, and "crosstalk", as you call it, keeps it that way.

      As the "distance" increases, tension increases, and if it isn't resolved it will divide the development camp. If the crosstalk stops, and the idea of eventual merge is abandoned, you have a "true" fork. Developers have to pick sides, and the split can become permanent.

      I think the AC kernels have always been the former kind of constructive fork. If he never adopts the new VM, then his kernel will begin to diverge, since developing for two VMs is hard. In this way, a small perturbation can become a full-blown deviation that divides developer resources. I really doubt that the VM issue will divide the Linux kernel team permanently. As AC's kernel gets farther from the main line, the tension on everyone will increase. Eventually, I predict, the team will force one solution, but there is no guarantee.
    • I believe that Alan Cox prefers GNOME to KDE. Therefore I have a cunning plan for an Alan Cox Trap. I will set up a machine with a specially patched kernel to use Rik's VM when GNOME runs, and Andrea's VM when running KDE. Mwhahaha....
  • The article seems to come out in favour of the new VM code. It makes it sound like it works much more effectively. So, why does Alan Cox continue with the old VM code? There must be some reason why he thinks it's better, or why go through the effort of continually patching the old code into the newer kernel?
    • The article seems to come out in favour of the new VM code. It makes it sound like it works much more effectively. So, why does Alan Cox continue with the old VM code? There must be some reason why he thinks it's better, or why go through the effort of continually patching the old code into the newer kernel?


      Basically because neither of them is good in all conditions. Each of them is better than the other in some situations - e.g., big systems, little systems, or whatever. While I am on the kernel mailing list, I haven't been following the discussions closely enough to say any more than that, but that's the gist of it. Also, for a while, Alan continuing to run the Rik VM gave people a way to run a later kernel without being lab rats for the new VM, which really hadn't had much testing in 2.4.10/11.

      I think this article overrates the AA VM by a large margin. It can't really be said to have solved the Linux VM woes, which is what the article implies.



      I have now used both of the .13 kernels and personally found the -ac VM to be better for my needs. On the other hand, since I bought 768MB of RAM today, my needs have just changed.

    • I think it is mainly to do with stability and the proven ability of the old VM code. Basically, the new VM code was a complete rewrite from scratch and was incorporated into the main kernel straight away. The problem therein is that the code will not have been as thoroughly tested and proven as the old VM. It may well be that the new VM is rock solid, but it hasn't been in use for as long as the old VM to prove it. What Alan is doing is sticking with the old VM, as it is pretty much proven to work well and not fall over.

      It's not exactly trivial to rewrite an entire VM, so there are bound to be problems with it. These problems come out through testing. I would have thought such a major rewrite would have been put in a development kernel first rather than into a "stable" kernel tree. That way, developers can test it first and iron out any problems, rather than everyone upgrading to the new VM and _then_ a major problem being found.

      The new VM may be brilliant and fast, but Alan has a point in sticking with the old code. Major rewrites belong in development trees until fully ready for a stable release.
  • swap space? (Score:3, Interesting)

    by archen ( 447353 ) on Tuesday October 30, 2001 @10:02AM (#2497057)
    From the article - " All earlier 2.4 kernels (since 2.3.12) needed at least the same amount of RAM in swap and then more to give you additional virtual memory. This meant that on an 8-GB server, you needed to put aside almost a full 9-GB disk just to be able to swap"

    Is this accurate? For just about everything I've always gone with 512MB of swap, regardless of whether I had more or less RAM (not that I'm technically proficient or anything). This would also be a shortcoming of Linux, since it would make upgrading RAM a pain in the ass if you needed to allocate more swap space somewhere else each time. Well, I'm all for the newer VM. Simple is good.
    • In the traditional VM formulation (pre-Linux), every bit of VM would have a place in swap, and RAM would just keep a fast copy of the data. This greatly simplified the implementation, of course, because all of the data had a location on disk it could keep for its entire lifetime. Linux didn't do this: data would just be put on disk somewhere free, and could lose its place while in RAM. This meant that your total VM would be disk+RAM, not just disk.

      As an optimization, and because hard drive space was cheap, the first 2.4 VM used the traditional scheme: if a page hadn't been modified since it was last swapped out, and its old copy was still on disk, it wouldn't have to be written out again.

      In any case, it's probably worthwhile to have at least 1.5 times as much swap as RAM; if you have just a little bit of swap on a high-memory system, it's unlikely to save you from running out of memory, and will instead cause the machine to swap heavily before running out of memory anyway. You don't need to upgrade memory and swap at the same time, but you might as well upgrade swap first, or just turn it off. (That is, if you're getting more memory so you'll have more space, buy swap first. If you're getting more memory so it will be faster, replace swap with RAM.)

      These days, hard drives are cheap, and there are old hard drives lying around of reasonable sizes; just use a whole recently-replaced hard drive as swap. This avoids contention with filesystems and is easy to replace.
  • Against the Truth (Score:2, Insightful)

    by Anonymous Coward
    Moshe Bar argues two points I vehemently disagree with:

    (1) Alan made a mistake in not switching to Andrea's VM. Alan is trying to maintain a stable kernel. Switching out large chunks of the VM is the last thing to do to achieve that goal. Alan will switch in due time.

    (2) The preemptible kernel is unfit for certain scenarios. Everyone I know loves the preemptible kernel. It gets good reports on lkml and the kernel news sites - Hell, it even got good comments here!

    I realize this is an editorial, and I understand everyone has an opinion, but if it isn't true it isn't true. An opinion can't contradict fact.

    Tim
    • I'll skip point 1. I agree with your assessment and as others have pointed out the switch will be made.

      On point 2, however, I just don't agree with you. Moshe does a more than adequate job of explaining his stance on this issue. Between pointing out the costs of making the kernel fully preemptible, citing his experiences with using it on personal machines (good) and servers (not so good), and noting that the preemptible kernel breaks Mosix and LIDS, I think he's got a right to his opinion. It's based on at least as much fact as stating that everyone loves the preemptible kernel.

  • ok, here's the thing (Score:5, Interesting)

    by Velex ( 120469 ) on Tuesday October 30, 2001 @10:26AM (#2497165) Journal

    I don't care if you want to swear by the Linus kernel, but it gets killed by I/O. I mean, come on, I'm using 2.4.12, and I can't rip a CD and play an MP3. Under the AC series, I can rip CDs, play MP3s, watch DivX movies, surf the web, untar a file, and have a compile job going at the same time. Even for more usual setups, like viewing a video without doing anything else, the Linus kernel drops frames left and right, whereas the AC series laughs at it. Don't tell me I need to use mplayer with SDL, because I already do.

    Because I treat my Linux box as though it were a Windows box (one of the reasons I switched over to Linux for everything is that the widgets in GTK are prettier than the widgets in Windows -- it's nice to have people ask me how to get their desktops to look like mine and tell them they have to install Linux) and I expect it to run at least as well as a Windows machine, I must use the AC series. While I'm sure that the Linus kernel has its applications, it is simply unacceptable as a replacement for the Windows kernel.

    Mod me flamebait or troll if you want, but I speak the truth. I have a Thunderbird 750 with 224 MB of RAM, and I find it simply unacceptable when I can't run Quake or view movies under Linux because of the Linus kernel. When MP3s skip because I'm moving some data around, it tells me that something is wrong with the Linus kernel. I'm glad I had a friend who introduced me to the AC series, or I would have given up on Linux. Plain and simple, politics aside: the end user doesn't care that he's being loyal to Linus the Great, he just cares that he can view that movie. If Windows outperforms Linux in multimedia, he'll use Windows.

    • by choward ( 307122 ) on Tuesday October 30, 2001 @01:28PM (#2498354) Homepage
      I use the stock Linus kernel on a _very_ similar setup (Duron 700, 384MB RAM) and I don't have the problem you mention. One thing I've noticed is that with the Linus kernel, DMA is _never_ turned on by default; you must enable it explicitly with hdparm at startup. Once you do that, skipping MP3s are a thing of the past.

      Running the hdparm tests,
      w/out DMA: 4.01MB/sec
      with DMA: 34.96MB/sec

      Quite a change.

      Craig Howard
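
      For the curious, here's a rough C sketch of what "hdparm -d1 /dev/hda" does under the hood. I'm assuming the HDIO_SET_DMA ioctl from <linux/hdreg.h> is the interface involved; in practice, just put the hdparm call in a boot script.

        /* Rough sketch of enabling DMA on an IDE drive, as "hdparm -d1"
         * does - the HDIO_SET_DMA ioctl here is an assumption on my
         * part. Run as root. */
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/hdreg.h>

        int main(void)
        {
            int fd = open("/dev/hda", O_RDONLY | O_NONBLOCK);
            if (fd < 0) {
                perror("open /dev/hda");
                return 1;
            }
            /* third argument: 1 = use DMA for this drive, 0 = don't */
            if (ioctl(fd, HDIO_SET_DMA, 1) != 0)
                perror("HDIO_SET_DMA");
            close(fd);
            return 0;
        }
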
      • Hmmm - an AMD, eh? Would you by chance be using a VIA chipset?

        I'll have to admit that I don't keep up on the LKML like I should, but I've been afraid to use DMA on 2.4 because of the filesystem corruption it was causing with VIA chipsets early on. [I have a PIII with a VIA Apollo Pro 133A chipset.] Wasn't that the reason Linus decided to default to "DMA disabled"?

        Did that ever get resolved, or should I still be wary of DMA?

    • by derF024 ( 36585 )
      I don't know what you're doing to that poor machine, but I have a 366 MHz laptop with 128 megs of RAM and I can do all of those things under Linus' 2.4.12 just fine. I play video under avifile instead of mplayer, but I never lose a frame, even across 10 Mbit Ethernet.
  • I have a large ext3 partition to store all my data and a 256MB swap partition. I also have 384MB of RAM. Occasionally I'll hear the hard drive grinding away like it's using the swap. I check, and it is using the swap, but my real RAM isn't full yet. It's actually far from full, like 100MB free. I know if I don't have a swap partition it won't use it, but then I'll run out of memory sometimes, like when I have a huge pile of applications open at once. It really needs some work. I don't care about forks or anything, just make it work better.
  • Does it support malloc correctly now (returning NULL when out of memory)?
    • IMHO this is useless: it wastes a lot of memory and makes programs fail sooner (pages that are never written to, or are immediately freed due to exec, would take up space). More importantly, this solves nothing, as memory does not necessarily run out when *your* process calls malloc; it runs out when *some* process calls malloc. Maybe in the old days, when system calls did not require memory to be allocated, this scheme would have worked, but not anymore. I have no idea what a real solution is, but I kind of suspect nobody does...
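
      A tiny C demo of the point (the 2 GB figure is arbitrary, and exact behavior depends on kernel version and overcommit settings): malloc can "succeed" with no memory behind it, and the real failure arrives later, when pages are touched - possibly in a different process.

        /* Why checking malloc's return doesn't save you under
         * overcommit: the allocation "succeeds" up front, and the real
         * failure is deferred until the pages are first written - at
         * which point the OOM killer may shoot this or another process.
         * Pick a size bigger than RAM + swap on the box at hand. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            size_t huge = (size_t)1 << 31;   /* 2 GB */
            char *p = malloc(huge);

            if (p == NULL) {
                /* the "correct" up-front failure the poster asks about */
                fprintf(stderr, "malloc failed up front\n");
                return 1;
            }
            printf("malloc succeeded; no pages actually allocated yet\n");

            /* With overcommit, failure happens here instead, on first
             * touch, when the VM has to find real pages. */
            memset(p, 1, huge);

            printf("survived touching every page\n");
            free(p);
            return 0;
        }
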
  • Compound errors (Score:5, Interesting)

    by Salamander ( 33735 ) <`jeff' `at' `pl.atyp.us'> on Tuesday October 30, 2001 @11:05AM (#2497387) Homepage Journal

    IMO both Rik's code (RVM) and Andrea's (AVM) were accepted prematurely, and Linus's ADD is the root of the problem here. Everyone thought the 2.2 VM was broken, so he jumped on RVM when it really hadn't received adequate testing with various workloads. Then, when that didn't work out, he did something even worse by jumping on AVM in the middle of a "stable" kernel series when it was totally undocumented and even less thoroughly tested than RVM. That's just bad software engineering, regardless of the quality of Rik's or Andrea's work.

    Ideally, an "old-fashioned" alternative to RVM would have been maintained throughout the 2.3 process, as a fallback in case RVM turned out not to be ready for 2.4 - which was in fact the case. But this wasn't done, there was no alternative, and so RVM became the basis for 2.4. Once that decision was made it should not have been unmade by replacing RVM with AVM. Andrea's work should have been in the 2.5 tree, which should have been opened a long time ago to deal with precisely this sort of situation. 2.4 is not the last Linux kernel that will ever exist. We don't need to make it perfect. It would be far better to admit its imperfections, band-aid them as best we can, and try to get a head start on creating something better for 2.6. What we have instead is error on top of error, "not ready" replaced with "even less ready".

    To clarify, I have nothing but the highest regard for both Rik's and Andrea's work. Obviously they have different ideas and attitudes. Rik has drawn on many sources in his design, resulting in a system that is both very advanced and very complicated. The process of reining in the complexity is still incomplete, but I still have hope that some day Rik will be able to come up with something that's really awesome, and he has always documented his ideas thoroughly. Andrea, by contrast, is much more pragmatic; he wants something that works now even if it's somewhat more limited in scope (e.g. by being almost impossible to reconcile with NUMA). The dark side of that "pragmatism" is that Andrea has skimped on non-code activities such as documenting or explaining the basic ideas on which his system is based. Nonetheless, both have done great work and should continue to do great work...in the 2.5 tree.

      Where is the moderation fairy? Why does she only give me dust when there are lame stories to moderate? Please sprinkle thy dust into my hands so that I may bless this post.

      JOhn
    • Re:Compound errors (Score:4, Informative)

      by puetzk ( 98046 ) on Tuesday October 30, 2001 @01:36PM (#2498395) Homepage
      FWIW, I think that Andrea's setup is modeled after the 2.2 VM (which he did a fair amount of tuning work on). So this is really more of a pragmatic revive-the-old-approach than it might initially seem.

      We all know this simplistic setup had scalability problems (like much of 2.2), but at least it worked right. Hopefully, given some more time, Rik can really get his to go, since it seems more sophisticated/scalable long-term.
      • FWIW, I think that Andrea's setup is modeled after the 2.2 VM

        Yes, mostly. Except for the classzone stuff.

        We all know this simplistic setup had scalability problems (like much of 2.2), but at least it worked right. Hopefully, given some more time, Rik can really get his to go, since it seems more sophisticated/scalable long-term.

        Absolutely. I'd love to see Rik and Andrea (and others?) competing on purely technical merit, in the 2.5 tree. I think that would be great for everyone.


    • Hey, Linus f***ed up by accepting the original RVM for 2.4. And now he was between a rock and a hard place. "Marketing" considerations meant that 2.4 needed to be "non-experimental": that the client base could go to the 2.4 version and use it with little concern that their server would crash. That would not have been the case with RVM versions up until the last month.

      So what was Linus to do? Keep dragging out 2.4 until RVM could fulfill minimum "marketing" requirements of stability? How long is that going to take? Do you want to wait and let M$ marketers talk about how amateurish Linux is -- that professionals don't use Linux's current "stable" release, but a version that hasn't improved in three years?

      So Linus decided to commit A REALLY BAD PRACTICE, and changed to a less-tested VM over the initial 2.4 VM. It's another f**kup if AVM is buggier than RVM. (But Linus had reason to believe it wouldn't be, even with relatively limited testing.)

      Is it still a f***up even if AVM turns out to be more stable than RVM? If so, are you saying it's preferable for new Linux development to be shut down for another six months to a year? And for ZDNet to opine on how the "stable" 2.4 kernel is DEMONSTRABLY unreliable? I'll take a "manager" who makes mistakes and makes decisions based on product survival over a manager who religiously follows an engineering practices manual.
      • Re:Compound errors (Score:3, Insightful)

        by Salamander ( 33735 )
        are you saying it's preferable for new Linux development to be shut down for another six months to a year?

        No, there have been quite enough delays associated with 2.3/2.4 already. More than enough. And there will continue to be delays until the processes get ironed out.

        What would have been preferable, IMO, would have been if more resources had been devoted to fixing and tuning the VM we already had (RVM, for good or ill). Linus could have put his foot down. He could have said "There will be no 2.4 VM except for RVM. The price for admission to the next round of VM redesign is that you help us fix RVM." People - notably Andrea - would have listened, and contributed more constructively. They know that Linus's good will is like currency. But Linus didn't say that. Alan Cox pretty much has, and kudos to him for having the courage to do so. What Linus did was take a bad situation and act in a way that nine times out of ten would make it worse. Maybe he'll get away with it this time because AVM in its current state is more robust than RVM in its current state, but that would actually be a bad thing because it will only reinforce the bad decision-making and we'll get burned next time instead of this time.

        And for ZDNet to opine on how the "stable" 2.4 kernel is DEMONSTRABLY unreliable?

        First off, are good reviews from places like ZDNet the goal of Linux development? Second, do you think it's better for the stable 2.4 kernel to be subtly, unpredictably unreliable? Better the devil you know, and all that.

        Most importantly, what if Linus's gamble - and that's what it was - hadn't succeeded? What would the ZDnet reviews be like then? What kind of ammo would that provide for everyone who wanted to claim that open-source development processes weren't all they're cracked up to be? Yeah, it looks (so far, knock wood) like we've been lucky this time, but I don't think relying on luck is a good thing.

        I'll take a "manager" that makes mistakes and makes decisions based on product survival over a manager that religiously follows an engineering practices manual.

        The two aren't as diametrically opposed as you make them out to be. Good engineering practices are good because they help increase either the speed or the reliability with which product can be delivered. Slavish adherence to any dogma is a bad thing, but so is the belief that everything you're doing is OK just because you managed to win one game of chicken. My point is that this scenario is going to be repeated. I'd rather encourage responsible driving than watch what happens when Linus plays one game of chicken too many and brings everyone else along for the ride.

  • Upon returning home the other week after meeting with Andrea, I went to my lab and searched for the disk images of the server comparison I ran back in January of this year (of FreeBSD 4.1.1 versus Linux 2.4.0). I took the Compaq ML500 server I have been reviewing (2x 1-GHz CPUs, 2-GB RAM) and upgraded both the FreeBSD disk image to 4.4-Stable and the Linux version to 2.4.12.

    Good, this would be an interesting benchmark.

    Then, I changed the memory down to 192-MB RAM so as to stress the VM system more.

    OK, this is fair, but you should also run with the same memory configuration you originally ran with.

    I also upgraded to the latest stable versions of Sendmail (8.12.1) and MySQL (version 3.23.42). Finally, I compiled everything with the latest version of gcc, 3.0.2, and tuned the two instances to the best of my knowledge (softupdates and increased maxusers for FreeBSD, and untouched default values for Linux).

    NO!!!! Why would you do this? Don't you want to know how the earlier Linux/FreeBSD kernels compare to the later ones? Now, instead of modifying one variable, you've modified 3,846 variables. It's going to be hard to tell whether any improvements in FreeBSD/Linux are due to an updated kernel, compiler, MySQL, etc. Go back to your original setup and change only the kernel, since I believe that's what you want to benchmark.

  • Good article, but what can I, the end user, do to get around some memory management and scheduling issues? I don't know nearly enough to do any kernel tweaking and tailoring for my processor architecture and hardware configuration. Is it unwise to run a desktop system without swap space? I was thinking of giving my system's performance a boost by leveraging the low RAM prices and eliminating the swap partition. Other /.'ers and I have experienced swap space usage during times when some physical RAM was free. Is the kernel thinking, "Well, these pages are *so* old and inactive that I'll just stick them in swap, regardless of the current system state"? It sounds like paragraphs 16 and 17 attempt to explain this, but it flies over my head. Since you can buy 1GB of ECC PC133 RAM for about $150, I was thinking of buying 3GB of RAM, investing in some type of power backup, and running my entire system in RAM. Can you do this?

    Despite this major issue, a Linux-based system is still more stable and in most cases faster than Windows 2000. Also, as the article mentions, take into account that Linux runs on many different types of processors. Linux on SPARC is good, and 21264 Alpha performance is mind-blowing. Keep up the good work.

  • Why VM is bad (Score:4, Interesting)

    by Animats ( 122034 ) on Tuesday October 30, 2001 @12:36PM (#2497982) Homepage
    Virtual memory is way overrated and should probably be phased out, both on servers and desktops.

    In Peter Denning's classic paper, The Working Set Model of Program Behavior [nec.com], Denning concluded that paged virtual memory was, at best, good for an effective 2X increase in memory size. When he wrote that paper in 1968, memory cost about a million dollars a megabyte, so a 2X increase was worth the headaches of a VM system. Today, with memory at a few hundred dollars a gigabyte, it looks less attractive. It's not that expensive to double the size of RAM today. It can be cheaper than adding a fast disk drive just for paging. Uses less power, too.

    Disk as backing store gets worse as RAM gets faster. When Denning wrote that paper, the fastest backing devices (drums) rotated at around 10,000 RPM, for a 6,000-microsecond access time, and core memory cycle times were around 4 us. So main memory was 1,500 times faster than backing store. Today, RAM cycle times have dropped to around 0.020 us, but disks still top out around 10,000 RPM, making main memory 300,000 times faster than backing store. Thus, the relative cost of a page fault has increased by a factor of 200. This makes VM far less attractive today than it used to be. It's not getting any better, either.
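
    Spelling out the arithmetic behind that factor of 200, using the numbers above:

      1968:  6,000 us (drum access) / 4 us (core cycle)    =   1,500x
      today: 6,000 us (disk access) / 0.020 us (RAM cycle) = 300,000x

      increase in the relative cost of a page fault: 300,000 / 1,500 = 200x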

    The price of having virtual memory is terrible performance once paging between active processes starts. That's called "thrashing". On a server which is processing short transactions, you're much better off throttling at the transaction launch point (as, for example, where CGI programs launch) than going into thrashing. This requires some coordination between applications and memory allocation, but where most of the memory is used by Apache and its child processes, that's a viable option.

    The main value of VM today is getting rid of dead code at run-time. A basic problem with shared libraries is that you load in the whole library, needed or not, when you need any function from it. This wastes memory, but after a while, the VM system will notice the unused pages and quietly release them. On a larger scale, the same problem is seen with dormant applications, a problem which has gotten totally out of hand in the Windows world, where far too much unwanted stuff launches at startup. VM ejects them from memory. That's what VM is really used for today.

    So if you're actually page-faulting, VM is hurting, not helping.

    I'd argue that it's time to go back to a swapping model - all of an app has to be in before it runs. That's where UNIX started; virtual memory didn't come in until 4.1BSD. But in support of this, apps need more information about the current memory situation. And they should be able to designate parts of their space as pageable, at least at the shared object/DLL level. Only a few apps (web servers, window managers) need much memory awareness, so that's feasible. Throttling needs to occur at a smart place, just before allocating substantial resources, such as CGI process launch or connection opening. By the time the VM system becomes involved, it's too late; resources are already overcommitted.
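
    (Userspace already has a crude version of these knobs; the following C sketch approximates the idea with mlockall() and madvise() - real calls, but only a rough stand-in for the coordination being proposed, not an implementation of it.)

      /* Approximate "swapping model with app-designated pageable
       * regions" using existing calls: pin everything by default,
       * then explicitly mark one region as fair game for the VM. */
      #include <stdio.h>
      #include <sys/mman.h>

      int main(void)
      {
          /* Behave like a swapped-in-whole app: pin all present and
           * future pages (needs root or RLIMIT_MEMLOCK headroom). */
          if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
              perror("mlockall");

          /* Carve out one explicitly pageable region, e.g. a cold
           * cache, and release our lock on just that range. */
          size_t len = 1 << 20;                /* 1 MB */
          char *cache = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (cache == MAP_FAILED) {
              perror("mmap");
              return 1;
          }
          if (munlock(cache, len) != 0)
              perror("munlock");

          /* When the cache goes cold, tell the VM to drop the pages
           * outright rather than write them to swap. */
          madvise(cache, len, MADV_DONTNEED);

          munmap(cache, len);
          return 0;
      }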

    The big win from this is repeatable latency at the memory level. With all the interest in reducing kernel latency at the CPU level, it's time to address it at the memory level too.

    QNX, the real-time OS, is worth looking at in this regard.

    • but an unused shared library page will be dropped, not swapped (as it will be clean). In fact, it will likely never be read into memory at all, just mapped.

      So it seems that you use the two concepts of swapping and mapping as if they were one, when they are not. It is perfectly feasible to have a system with VM but no swap. This system will of course die at unpredictable times because of [temporary] overcommit that you could have survived with a swap file.

      I can't see how you could have a system with swap but no VM, though. Arguably, this is what overlays and bank switching do for you.
    • Re:Why VM is bad (Score:4, Informative)

      by DaveWood ( 101146 ) on Tuesday October 30, 2001 @02:19PM (#2498679) Homepage
      Alright, I'll bite. What you say is interesting, and I believe your comments regarding the changing relative costs of traditional VM paging algorithms make sense. The problem is that I suppose I don't understand the alternative you are proposing. I am certain this is due to my own ignorance; please be tolerant of my questions, and don't let my inquisitiveness be mistaken for criticism.

      You say, "The price of having virtual memory is terrible performance once paging between active processes starts." Assuming the VM algorithm is working correctly (big assumption lately), this means basically that you are trying to run more than your memory can handle, and have reached a load-shearing point with respect to RAM. From this I surmise that we might be talking about a "smarter" VM system that would shear better, perhaps by identifying the condition, and perhaps by better communication with higher levels - in other words, a different/better application-level interface to the VM system.

      And, indeed you say, "On a server which is processing short transactions, you're much better off throttling at the transaction launch point [than thrashing]... This requires some coordination between applications and memory allocation." So I think I understand so far.

      Then you say: "A basic problem with shared libraries is that you load in the whole library, needed or not, when you need any function from it." This is where I perhaps display my ignorance of the kernel, but that's not what I have understood was going on. My impression of things was that an application was loaded into memory by mapping its data on the disk into "virtual" memory, and that the VM subsystem arbitrated between real and virtual memory by retrieving from the disk only what blocks were "necessary" (i.e. being referenced by the executing code), and that this process naturally extended to libraries, and especially shared libraries (which need only exist in "real" memory in one location, despite being mapped into multiple "virtual" memory environments). Then again, perhaps it is a minor point - if the whole SO image is loaded and then unused pieces are unloaded or vice versa, it seems less important than the contention problem already on my mind...

      You say "VM ejects [unused bits of libraries and applications] from memory. That's what VM is really used for today." Absolutely! But regardless of the relative differences, isn't this process of migrating data between different "tiers" of data storage in the computer (each with a different latency, throughput, and cost/availability) always going to be necessary? While I can certainly see a major advantage in creating/improving ways for the application to communicate with the memory management system, is there really some fundamental alternative to the block-based VM "guesswork" that takes place in absence of directives set at compile time?

      You say: "So if you're actually page-faulting, VM is hurting, not helping." I am wondering if the VM is either hurting or helping per se, since the real problem is that you don't have enough RAM even for the "active" blocks you want to run. Of course, the quality of your VM will determine how close you can get to "perfect" utilization of your RAM.

      Then you say, "I'd argue that it's time to go back to a swapping model - all of an app has to be in before it runs." This is where you lose me, I suspect because I do not understand what you are really proposing. You go on to say "in support of this, apps need more information about the current memory situation. And they should be able to designate parts of their space as pageable, at least at the shared object/DLL level. Only a few apps (web servers, window managers) need much memory awareness, so that's feasible.Throttling needs to occur at a smart place, just before allocating substantial resources, such as CGI process launch or connection opening. By the time the VM system becomes involved, it's too late; resources are already overcommitted."

      At first it sounds as though you are saying that you want to eliminate swap altogether. I do not doubt that for some situations this is preferable - you want to have consistent performance and a sharp failure rather than the long thrash in the case where you use up your resources (and you mention QNX). However for general-purpose computing, I'm not so sure this is a good idea, even with RAM as cheap as it is. Depending on what you're trying to do, the slight loss in predictability and overall performance is vastly preferable to sharp failures for many, I would even say, "most" applications, even on the server.

      But moving on, it seems you are saying that what you dislike about the VM is that data is broken into arbitrary blocks - and so we should rely on application programmers to designate what it would be a good idea to swap out in case of memory contention ("designat[ing] parts of their space as pageable"). The problem I see with this is that you are relying on the programmer to do something that, if they do not do it, their program will appear to run anyway.

      This is therefore automatically classified as a frivolous expense by commercial software developers, and even OS people working for the love of the game may be tempted into the same pitfall. This is superficially similar to the argument between malloc/free proponents and garbage collector advocates. Giving the programmer another "lower-level" thing to worry about gives them an opportunity to optimize it, but in practice we often find that on balance we get more mistakes and the quality of the user experience suffers.

      The compiler probably could be coaxed to do it for you. But the various tradeoffs between compile time "pre-blocking" and runtime blocking might leave compile-time computations, whether in the compiler or even in the developer's head, looking inferior to what a good VM system can do while observing actual behavior in real-time.

      Your point about throttling occurring "at a smart place" is not lost - obviously many applications could benefit from more transparency from the memory management system in managing their affairs - Apache users really don't want to have to guess how many processes/concurrent users should be allowed; they want Apache to determine it for them based on what the system can handle. But most application programmers are not going to do this extra work or do it right, and a VM seems like what you need as a "default behavior," even if its benefits (and its audience - those who have enough RAM that they never need fear swap) are lessening over time.
    • Re:Why VM is bad (Score:5, Insightful)

      by RelliK ( 4466 ) on Tuesday October 30, 2001 @02:46PM (#2498869)
      huh? what? This is the most uninformed garbage I have ever read. I don't have time to refute all of the nonsense, so I'll just take on the biggies.

      The price of having virtual memory is terrible performance once paging between active processes starts.

      When that happens, you are running a lot more processes than can fit into memory. Without VM you would not be able to do that at all.

      A basic problem with shared libraries is that you load in the whole library, needed or not, when you need any function from it.

      False. Any decent VM does demand paging. Only the pages that are needed are loaded from the executable. The parts of the program that are never executed are never loaded from disk, notwithstanding read-ahead optimization. A shared library is just an extension of the executable, so the same rules apply. Further, a shared library can be used by multiple processes and only *one* copy of it is loaded into memory.
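
      To make the demand-paging point concrete, here's a small C illustration (the library path is made up): mapping a file is nearly free, and disk reads happen lazily, per page, only for the pages actually touched (modulo read-ahead).

        /* Demand paging in miniature: mmap makes the whole file
         * addressable at once, but no I/O happens until a page is
         * first touched. */
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/mman.h>
        #include <sys/stat.h>

        int main(void)
        {
            int fd = open("/usr/lib/libbig.so", O_RDONLY); /* hypothetical */
            if (fd < 0) { perror("open"); return 1; }

            struct stat st;
            if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

            /* "Load" the entire library in one cheap call - no I/O yet. */
            char *lib = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (lib == MAP_FAILED) { perror("mmap"); return 1; }

            /* Touching one byte faults in roughly one page (plus any
             * read-ahead); untouched pages are never read at all. */
            printf("first byte: %d\n", lib[0]);

            munmap(lib, st.st_size);
            close(fd);
            return 0;
        }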

      I'd argue that it's time to go back to a swapping model - all of an app has to be in before it runs.

      That would be absolutely stupid. It would slow down the system tremendously. See above about demand paging.

      Without VM, you would need to increase the memory requirements by a factor of N, where N is the number of processes running concurrently. Further, the startup time of each process would always be slower, since all of the code would have to be read into memory. With VM, part of it is already there (shared libraries), and the code is loaded on demand.

      In short, this is the biggest pile of uninformed garbage. You *really* need to take an OS course before you can talk about OS design.

      • Answers to the above (Score:3, Interesting)

        by Animats ( 122034 )
        False. Any decent VM does demand paging. Only the pages that are needed are loaded from the executable.

        If you implement a VM that way, launching a program takes a very long time. You could, in theory, start out with nothing in memory and page-fault the program in. This requires one disk access per active memory page until enough is loaded for the program to run. The very first virtual memory system, for the Burroughs 5500, worked that way. It worked OK for batch programs, in an era when batch programs ran for minutes or hours, but was terrible for interactive work.

        Most operating systems today load most or all of a program at startup, let the app run for a while, then release the unreferenced pages. Deciding how much to load at startup is an interesting question. The BSD UNIX guess was the first N bytes of the executable, where N is a system tuning parameter. (What, exactly, does Linux do about this?) This is a mediocre guess, but an easy one to make. It's OK for long-running programs, but terrible for short-lived ones. Short-lived programs don't run long enough for the least-recently-used page info to become useful. If paging occurs in this situation, the pages removed are ill-chosen, since the LRU info isn't useful until the program has run for a while.

        Much of the memory-demanding work servers do looks like short-lived programs. CGI programs and Java servlets are short-lived programs. So they're a bad case for a VM environment. If memory gets tight enough that short-lived programs get paged out, thrashing is almost inevitable.

        You don't want to page out at all on a server, except (maybe) under transient overload. As soon as paging activity starts, it's time to throttle back the amount of server concurrency until paging stops. This requires coordination between OS and application of a kind not usually seen in the UNIX world, though mainframe transaction systems have had it for decades, all the way back to CICS.

        Desktop systems have a different set of issues, but they don't look like classic time-sharing systems either. My main point here is that in the last decade, the memory usage behavior of most programs has changed considerably, but we're still using virtual memory concepts that were developed in the 1960s and mature by 1980.

        And remember, even when everything works right, you get the effect of at best 2X the memory.

        Here's a basic tutorial on VM, with emphasis on Linux. [surriel.com]

    • I don't agree with your point about removing virtual memory from the operating system, but I am surprised that more people using Linux in a server role don't just buy enough RAM to ensure that only a minimal amount of swap is actually required.

      Desktop use is a different matter - many people who use Linux on the desktop don't have a lot of money to spend on extra hardware (including RAM), and Linux should be able to run decently on as wide a range of hardware as possible (including cheap, old machines).

      But for server use, if you can afford the price differential of SCSI over IDE (which you should, for any server supporting any serious load) you should also be able to afford enough RAM to deal with most loading situations.

    • Re:Why VM is bad (Score:2, Insightful)

      by Johnno74 ( 252399 )
      No, sorry, but I disagree.

      A clever VM system is more important than ever, when you combine it with an effective disk cache.

      Yes, RAM is cheaper and faster than ever, but stopping your OS from using a swap file/partition is gonna stop your OS from using your RAM efficiently.

      Your machine should be allocating as much memory as possible to a disk cache, even if this reduces the available memory to the extent that active processes start paging, because swap file paging is optimised by the disk cache too.

      It is usually better to swap out pages and keep your cache large than to keep rarely used pages in memory at the expense of cache, because even if you need those pages they will quite likely be in the cache.

      Even if you have a shedload of memory (especially so!) you will get better use of your memory if you use some of it as a disk cache than if you don't page rarely-used pages to disk.
  • by TheGratefulNet ( 143330 ) on Tuesday October 30, 2001 @12:46PM (#2498045)
    As a 5+ year Linux vet, I'm horrified at this turn of events. I've always counted on Linux to be rock stable, yet the last few months have been anything BUT stable.

    I really hate to say this, but I'm wondering if jumping ship to FreeBSD (etc.) makes sense. I've been a major Linux supporter for quite a long time, but I know that the *BSD guys have had their act together (good SMP, good networking under load, etc.) for a long time.

    Would it be all that crazy to adopt the VM system from the 'establishment' (BSD)? The Linux codebase frequently DOES borrow from BSD. Why is the VM system all that different?

    • I really hate to say this, but I'm wondering if jumping ship to FreeBSD (etc.) makes sense. I've been a major Linux supporter for quite a long time, but I know that the *BSD guys have had their act together (good SMP, good networking under load, etc.) for a long time.

      *BSD has poorer SMP than Linux. Networking is arguable (some benchmarks show Linux is faster, some don't).

      *BSD has also had more than its own fair share of infighting. Do people forget so quickly why we have *BSD instead of 386BSD?

      Go use *BSD if you want to, but do so for the right reasons, not because of rumor and myth.

      • BSD has poorer SMP than Linux

        Not to start a flame war, but things like NetBSD have had to be VERY smart about SMP (and multiprocessing in general) for a long time, simply because they deal with many more hardware types than we (Linux) do.

        FreeBSD is known by most to be more stable under high network load. Hell, it's got some 15 years or so on Linux; it has to be more mature simply due to all the time-tested code.

        And don't even get me started on NFS. NFS on *BSD has a hope of working. I'd not try NFS on Linux unless I like hangs, stale handles, and general unpleasantness.

      • I've noticed a recent trend towards trashing FreeBSD's SMP because of "the giant spinlock." What people don't realize is that one large spinlock can be a viable method of locking for the purposes of threading (that is, multiprocessing). It would seem that someone who has a moderate clue about threading and writing SMP-capable operating systems has commented on this, and feels it's bogus, and one or more of the general breed of "BSD is ubersux" trolls has gotten a hold of this and thinks it's the ultimate death knell for FreeBSD/smp. Obviously, you don't really know much about locking at all. It should at least be pointed out that no matter how many locks you have, it is more important to keep the system OUT of a locked state as much as possible, and FreeBSD does this well enough. It's not as if the system is constantly locked and able to use only one CPU. Most processing occurs in userland, far away from kernel locks, so it doesn't tend to matter all that much. Now, granted, using one spinlock isn't necessarily the best way to do things, at least not in an OS. However, it's not the worst either. Combined with the fact that it allowed fairly rapid updating and deployment of FreeBSD/SMP, I think the choice to use that 'giant spinlock' was valid. It allowed SMP code that by all accounts worked better at least than the 2.0 Linux kernel's (if not 2.2 as well) to be deployed until a better solution could be created.

        A better solution will be deployed in FreeBSD 5.0 with the introduction of SMPng. I do not doubt that the 2.4 Linux kernel does a better job at SMP than FreeBSD (release/stable) does, but I think it's worth noting that Linux's SMP has been now five or six years in the making to get to this point, and that the Linux and FreeBSD development and advancement models are significantly different. Where Linux takes gradual steps, FreeBSD (and BSDs in general) tend to take large leaps. That's just a difference in implementation timing. Furthermore, it's perfectly reasonable to expect two open-source systems to leapfrog each other in terms of capability as ideas and code move from one to the other, and it's really not something to gloat over. What one does better today, the other will do better tomorrow. It doesn't really matter. To those of you babbling on and on about 'the giant spinlock', you might want to go do some research into the theory, and practice, of implementing locks in threaded systems. Until then, shut up, please.

  • 2.4.12 has bugs that prevent certain compile options. 2.4.13 is out and it works really sweetly.

    Oh, and just because there are two different VMs (Alan's vs. Linus') does not mean that there is an official fork. Besides, there are already companies like Red Hat and SuSE that release their own patches to the Linux kernel, making their kernels unpatchable against the main tree.

  • by Sara Chan ( 138144 ) on Tuesday October 30, 2001 @01:16PM (#2498266)
    The discussion so far has focussed mainly on Rik's and Andrea's VMs. For the 2.4.x series, that's fair. For 2.5, though, what about considering the AIX VM?


    IBM has said that they will open source any part of AIX that we would like. The AIX VM works well under high stress. Obviously it could not just be put as-is into Linux, but there must be a lot of good ideas/algorithms in it that could--arguably should--be moved to Linux. Why isn't anyone looking at doing this?

    • by Anonymous Coward
      The AIX VM is bizarre and different - it is almost entirely unlike any other UNIX VM out there. It is hideously ugly; MP support was added as an afterthought and looks like it was written by a Pascal programmer.

      It also relies heavily on a segment-register-based architecture, i.e. Power/PowerPC (each segment describing a 256MB chunk of virtual address space). You start getting into lots of fun when hitting/crossing segment boundaries.

      I have some doubts about how well this maps to/performs on non-segment-based architectures. IBM's inability/unwillingness to put an AIX product out on the Itanium after some heavy investment *may* be related.
    • More so than AIX, people have been reading the BSD VM code for a long time. It seems to be regarded very highly, and its design has been stable for quite some time.

      So why doesn't Linux just copy BSD?

      The code here seems rather incidental, it's the design that is more important. But why not copy a good design? Or do one (or both) of the contending VMs do so?

  • This is a link to the kernel-traffic discussion with details and basic benchmarks: here! [zork.net].

  • Core design issues. (Score:2, Interesting)

    by bored ( 40072 )

    Part of the problem with the design and redesign of the Linux VM is an insistence on sticking with a few core design points that make it 100x harder to write. For instance, virtual memory overcommit spawns a whole bunch of ugly problems that must be solved in order to create a stable and fast system. If the core development team spent some time looking at past OS research, they would completely change their design criteria and a bunch of these problems would go away.


    Another perfect example is the OOM killer. If the VMM could properly balance the workload (and it didn't overcommit), then there wouldn't be a need for code to select the 'correct' process to kill. Since the VM cannot balance correctly, the kernel developers spend massive amounts of time trying to write an OOM killer that functions correctly in the case where the VMM is wedged. This time would be better spent fixing the VMM so it never got into these states.

  • They mention LIDS (Linux Intrusion Detection System).

    My question is:
    does LIDS actually do *any* intrusion detection, or does it just prevent modification of certain files?
