Tuning Linux VM swapping

Lank writes "Kernel developers started discussing the pros and cons of swapping to disk on the Linux Kernel mailing list. KernelTrap has coverage of the story on their homepage. Andrew Morton comments, 'My point is that decreasing the tendency of the kernel to swap stuff out is wrong. You really don't want hundreds of megabytes of BloatyApp's untouched memory floating about in the machine. Get it out on the disk, use the memory for something useful.' Personally, I just try to keep my memory usage below the physical memory in my machine, but I guess that's not always possible..."
This discussion has been archived. No new comments can be posted.

  • Ob. /. joke (Score:2, Funny)

    by Anonymous Coward
    First swap!
  • God no... (Score:5, Interesting)

    by 0123456 ( 636235 ) on Friday April 30, 2004 @09:32AM (#9017815)
    "You really don't want hundreds of megabytes of BloatyApp's untouched memory floating about in the machine. Get it out on the disk, use the memory for something useful."

    I absolutely despise the way that XP swaps out applications in order to make the disk cache larger. I have 1GB of RAM on my machine precisely so I don't have to wait two minutes for it to swap my web browser back in after it's swapped out... yet if I copy a 2GB file from one drive to another, the stupid operating system will swap out all the applications it can just to make the cache larger.

    Please, please, don't take Linux down the same braindead route as Microsoft has done for XP. It's utterly insane to swap out my browser so that a 2GB file can be copied two seconds faster when I then have to wait two minutes for the browser to swap back in. Or at least provide some kind of '#define STOP_VM_SWAPPING_STUPIDITY' so that I can disable it.
    • I agree. My box has 768 megs of RAM specifically to minimize swapping. If BloatyApp occupies it all and I start up another app to do something else, I'm prepared to tolerate the swap delay, but if BloatyApp is all I'm using and I start a file copy in a shell, I'd rather get back to what I was doing in BloatyApp than have the copy finish an immeasurable fraction sooner.
    • Re:God no... (Score:2, Interesting)

      by Anonymous Coward
      Mozilla is the only application I know of that demonstrates that behavior under XP, so I'd suggest using another browser if it bothers you that much.

      I use XP extensively and it is very aggressive at swapping stuff out. However, I've never had the problem with applications other than Mozilla.
      • Re:God no... (Score:4, Interesting)

        by 1000StonedMonkeys ( 593519 ) on Friday April 30, 2004 @09:54AM (#9018035)
        Java also has the same problem. Almost makes you think they do it on purpose.
      • Re:God no... (Score:4, Interesting)

        by ckaminski ( 82854 ) <slashdot-nospam@ ... m ['r.c' in gap]> on Friday April 30, 2004 @10:23AM (#9018340) Homepage
        Mozilla has the problem specifically because its memory footprint gets so large with all those tabs. If you don't use process-separated IExplore processes, you get the same problem with IE when its footprint gets up around 70+ MB.

        The only way to stop this madness on XP is to turn off the swapfile. I'd REALLY hate to see Linux go down this route. Big bloaty applications need to stay IN MEMORY unless there is memory pressure being exerted on the system. That is the only time swapping should occur.

        • by trezor ( 555230 ) on Friday April 30, 2004 @10:37AM (#9018501) Homepage

          Whatever swapping scheme is used in Windows, I do not know, and I don't care what it's called either.

          What I can't stand is the fact that I've got >300MB of free physical memory, and 20MB of the kernel is still swapped out. The result? Do this, do that (any minor thing) and you have to wait for it to swap back in.

          In the end, I have never ever seen a Windows system without a partially swapped kernel, even with tons of free RAM available.

          This is just plain stupid, or is there some sort of "smart" explanation for this?

          I, for one, would hate having to turn off virtual memory just to keep the system kernel loaded at all times... And GOD BE DAMNED if Linux makes the same stupid design decision.

          • In the end, I have never ever seen a Windows-system without a partially swapped kernel, even with tons of free RAM available. This is just plain stupid, or is there some sort of "smart" explanation for this?

            When you have a bunch of lazy, slacker, multi-megabyte services running in the background, waiting for that once-in-a-blue-moon event that requires their help (yes, I'm talking about YOU spoolsv.exe, you 3.98MB hog!), you might as well shove them into the swap file. Windows can end up with an unGODLY

    • Re:God no... (Score:5, Informative)

      by petabyte ( 238821 ) on Friday April 30, 2004 @09:45AM (#9017952)
      Actually, you can change it on the fly with /proc/sys/vm/swappiness. Increasing the number will increase the aggressiveness of the swapout. Mr. Morton runs with his set at 100 (the max); 0, I believe, would turn swapping off.

      My kernel has autoswappiness enabled so it figures out the number on its own. I'm running at 64 ATM on a 256 Meg system (ram donations accepted) :).
    • Re:God no... (Score:4, Informative)

      by kinema ( 630983 ) on Friday April 30, 2004 @09:46AM (#9017959)
      All you need to do is: "echo 0 > /proc/sys/vm/swappiness" and the VM will do its best to keep from swapping pages to disk.
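Taken together, the two comments above describe a runtime knob rather than a compile-time option. A minimal sketch of inspecting and tuning it (assuming a 2.6-era kernel that exposes /proc/sys/vm/swappiness, and root for the write):

```shell
# Show the current swappiness (0-100 on 2.6-era kernels; higher values
# make the kernel swap out idle pages more eagerly in favor of cache)
cat /proc/sys/vm/swappiness 2>/dev/null || echo "swappiness knob not present"

# Lowering it requires root; 0 asks the VM to avoid swapping until real
# memory pressure forces it, 100 swaps as aggressively as possible.
if [ -w /proc/sys/vm/swappiness ]; then
    echo 0 > /proc/sys/vm/swappiness
fi
```

The change lasts only until reboot; a sysctl.conf entry would be needed to make it stick.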
    • Re:God no... (Score:5, Interesting)

      by Shakrai ( 717556 ) on Friday April 30, 2004 @09:48AM (#9017980) Journal

      Please, please, don't take Linux down the same braindead route as Microsoft has done for XP. It's utterly insane to swap out my browser so that a 2GB file can be copied two seconds faster when I then have to wait two minutes for the browser to swap back in

      Does it really make it faster anyway? Unless parts of that 2GB file were already in the cache, how is the cache going to make the transfer any faster?

      As a side note I haven't noticed Linux swapping much out in favor of the cache. My home grown samba/sql/dhcp/nat/intranet server has 768 megs of memory. As of today (43 day uptime -- Linux 2.4.25) there is only 2,528k in SWAP. 8,444k of free memory, 191,952k used for buffers, 296,004 used for cache and the rest for applications.

      I wouldn't mind seeing Linux swap out programs that aren't touched in several days/weeks (like the 12 agetty processes on my monitor less machine -- yes I know I could disable them if I wanted) but I definitely don't want to see it swapping out that browser I used 5 minutes ago in favor of increasing the disk cache size. Now if I launch Quake that's a different story.

      As far as the other posts about rules of thumb for swap size go -- I stopped using the 1:1 or 2:1 ratio a long time ago. I have a 256meg swap partition on my 768meg Linux box. That's pretty much as big as I go with swap spaces. Are you seriously going to set up a 768meg (or worse, 2x) swap space? A) You'll never use it. B) If you do use it, your machine will barely be usable.

      As far as XP's stupidity goes look under My Computer -> Properties -> Advanced -> Performance Settings -> Advanced and make sure both options (processor scheduling and memory usage) are set to "Programs" and not "background services" or "System cache". That may (or may not -- it is Windows after all) help you a little. On the flipside of the coin I discovered that I needed to reverse the memory option on my Windows 2000 Terminal Server to prevent stupid HP print drivers from sucking up 100% of the CPU and 90% of the physical memory.

    • Re:God no... (Score:2, Informative)

      by Elm Tree ( 17570 )
      If I recall correctly it's runtime tunable. So those of us power users with 1+ gig can tune swapping down, and desktop distros can tune it up. That's my favorite part about Linux, I can just
      echo 0 > /proc/sys/vm/swappiness
      and I have instant control over the performance of my machine. In fact... I could even write wrappers for specific programs so that they can tune the system's swappiness to better suit them. E.g. programs that use huge amounts of memory, less swappy; programs with repetitive disk access, more
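The wrapper idea suggested above might look something like this hypothetical script (the name and the value 10 are illustrative; it assumes root and a kernel exposing the swappiness knob):

```shell
#!/bin/sh
# low-swap-wrapper: hypothetical per-program tuning, as suggested above.
# Drops swappiness while a memory-hungry program runs, then restores it.
knob=/proc/sys/vm/swappiness
old=$(cat "$knob")
echo 10 > "$knob"      # be less eager to swap while the app runs
"$@"                   # run the wrapped program with its arguments
status=$?
echo "$old" > "$knob"  # restore the previous setting
exit $status
```

Invoked as, say, `low-swap-wrapper bloatyapp`; a batch job with repetitive disk access could write a higher value instead.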
    • i hear you.

      i have the same problem with Retrospect backup software. one run and XP swaps out everything, system, applications, Explorer, etc., so i have to wait >5 minutes before i can use the system again. This is with 1 GB of physical RAM installed.

      the solution would be to limit the disk cache to a reasonable size. i can see that servers would want all RAM for caching, but desktops? probably not. there should be a limiting percentage, like 10% of RAM. 100M is plenty of disk cache for my use...

      be su
    • That's odd because the NT swapping strategy shouldn't do that (unless they've changed it for XP?).

      NT is supposed to maintain *small* disk caches to avoid the situation you're talking about, whereas Linux has always had a less conservative policy of using pretty much all available RAM for disk cache and pushing things out when needed.

      I would actually be pretty surprised if that was the case... the os SHOULDN'T kick programs out for disk cache except under extreme situations. For all the shit we give MS

      • Re:God no... (Score:3, Interesting)

        by ckaminski ( 82854 )
        Windows 2000 and XP both give preference to the cache, no matter what your system preferences are. I've had a network copy/backup going while trying to run Word, and the damned OS consumed 300 of 500MB of memory for the disk cache. It's a problem I've been trying to remedy for a long time now. Supposedly there's a registry setting for the cache, and a size limiter, but I've not been able to get it to work...
    • Or at least provide some kind of '#define STOP_VM_SWAPPING_STUPIDITY' so that I can disable it.

      like /proc/sys/vm/swappiness
    • Re:God no... (Score:5, Informative)

      by The Spoonman ( 634311 ) on Friday April 30, 2004 @10:51AM (#9018664) Homepage
      Right-click My Computer -> Advanced -> Performance -> Advanced -> Memory Usage. Set to Programs. Now, click Change under Virtual Memory. Set your cache size small. For 1G of RAM, you prolly don't need a biggun. I usually set it to 100M for Initial and Max and then up it based on how often the machine swaps.

      Then, make the following changes to the registry:

      HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown, set to 1. I don't shut my machine down very often, but occasionally XP will increase the size of the pagefile if it absolutely needs to depending on circumstances. This forces it back to the size you want it when you restart.

      HKLM\System\CurrentControlSet\Control\FileSystem\NtfsDisable8dot3NameCreation, set to 1 ONLY IF YOU USE NO 16-BIT APPS ON YOUR MACHINE. Speeds up writes.

      HKLM\System\CurrentControlSet\Control\FileSystem\NtfsDisableLastAccessUpdate, set to 1 if you don't care when files are accessed. This is rarely needed, and the setting speeds up writes.

      HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Memory Management\IoPageLockLimit. Little more complex:

      Set to 4096 if you've got more than 32M RAM

      Set to 8192 if you've got more than 64M RAM

      Set to 16384 if you've got more than 128M RAM

      Set to 32768 if you've got more than 160M RAM

      Set to 65536 if you've got more than 256M RAM

      Set to 131072 if you've got more than 512M RAM

      This changes the maximum number of bytes that can be locked for I/O operations. The default is 512 KB. While the above are the recommendations, I've found stepping down one level to provide the best performance for my needs; YMMV. (For example, I have 256M, but I set my IO limit to 32768.)

      HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management\DisablePagingExecutive, Set to 1 to disable paging of the kernel.

      There, that wasn't so hard, was it? For those who want to flame that statement, keep in mind that the information above is easier to find than some of the tuning suggestions I've heard for Linux. I've used Linux for 10 years, and only today heard about /proc/sys/vm/swappiness. Oh, and all of the above apply from at least NT4 on.
      • Re:God no... (Score:3, Insightful)

        by Darth Daver ( 193621 )
        I have been using Windows for 12 years (and Linux for 10), and this is the first time I have heard of the obscure registry hacks you just listed. Besides, I thought Windows users argue they should not have to find, learn or research anything at all. It should just work, right?

        When I just searched for '/proc linux vm swap' in Google, /proc/sys/vm/swappiness was in the fourth hit from the top. There, that wasn't so hard, was it?

        I can tell you one thing. I would rather poke around the /proc filesystem th
        • Re:God no... (Score:4, Interesting)

          by The Spoonman ( 634311 ) on Friday April 30, 2004 @05:42PM (#9023089) Homepage
          I have been using Windows for 12 years (and Linux for 10), and this is the first time I have heard of the obscure registry hacks you just listed.

          The above hacks aren't for users, they're for administrators and geeks. The average user will boot their machine, do what they have to do, and shut it back down. Those of us who aren't mere users like to leave our machines on for months at a time, and these tweaks help with that. If you were doing tech support, then you'd know them. If you ARE doing tech support and don't know them, please consider another field. These are the basics... IT's already filled up with enough paper MCSEs who can't spell NT unless it's in the 6-week course.

          When I just searched for '/proc linux vm swap' in Google, /proc/sys/vm/swappiness was in the fourth hit from the top. There, that wasn't so hard, was it?

          No, when you know EXACTLY what you're looking for, it never is. Now, search for +linux +performance +tweaks, and tell me if it shows up. Didn't, did it? Now, search for +windows +performance +tweaks. How many of those pages DIDN'T list the tweaks I just gave? Not many.

          I can tell you one thing. I would rather poke around the /proc filesystem than wander through the Windows registry any day.

          Because the difference is...? One's a collection of key-value pairs organized in a virtual filesystem analogy and another is a collection of key-value pairs organized on a filesystem? Or, is it because MS puts a warning that if you don't know what you're doing, editing the registry can fuck your system, but the Linux developers fail to give you the same warning?

          By the way, if you are not shutting your XP system down often, you must not be rebooting for the security patches, and that can be a problem for everyone.

          Could be, but I keep my machines fairly secure to begin with, and few of the security patches issued by MS affect well locked-down machines. They're more for users' PCs, like yours. Also, the last few security updates I've done haven't required a reboot. Unlike the latest kernel updates...

          claiming to release within hours versus the weeks they claim FOSS takes

          Or, years. How long was that latest flaw in the kernel sources that took down the Debian servers? Years? I thought the "many eyes" theory said something like that wouldn't reach production, as there are so many people reviewing the code. I'll give you a clue: just 'cause the code's available doesn't mean many people outside the development team are looking at it. Most are doing ./configure && make && su && make install and trusting it'll all be okay. It must be, right?
    • Re:God no... (Score:3, Insightful)

      by Fjord ( 99230 )
      The thing I hate about 2000 is that it seems to hold on to this disk cache RAM for dear life. Eventually, if you hibernate/unhibernate, or just don't turn off the computer, you swap for everything. It's a real pain in my ass as well. I've disabled virtual memory on my Windows machine at home (at work, I run too much to do so), just because it's not really needed and doing so makes it much faster.
  • Memory access vs. disk access I mean?

    Back when P90s were the norm, was RAM access about as fast as disk access is today?
    • by Moderation abuser ( 184013 ) on Friday April 30, 2004 @09:38AM (#9017891)
      Well, disk access speed, say 5ms. RAM access speed 10ns so RAM is approx half a million times faster than disk.

      • Hard drives have seek time and maximum bandwidth. Memory has latency and maximum bandwidth. PC100 SDRAM (for example) has about 10ns latency. As you say, hard drives have much longer seek times than SDRAM has latency; usually between 9 and 20 ms. Hard drives typically transfer between 10 and 30 MB/sec; PC100 SDRAM which is 64 bits (8 bytes) wide has 100 MHz x 8 Bytes = 800 MB/s transfer (peak theoretical.) According to SiSoftware Sandra 2004 Pro, a Via KT133 chipset machine with a fairly fast AMD processor
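The figures in the two comments above can be checked with a little integer arithmetic (same ballpark numbers: ~5 ms seek, ~10 ns RAM latency, a 64-bit PC100 bus):

```shell
# Disk seek (~5 ms) vs RAM latency (~10 ns), both expressed in nanoseconds
disk_seek_ns=5000000
ram_latency_ns=10
echo $(( disk_seek_ns / ram_latency_ns ))   # 500000 -- about half a million

# Peak theoretical PC100 SDRAM bandwidth: 100 MHz bus * 8-byte (64-bit) words
echo $(( 100 * 8 ))                         # 800 MB/s
```

So the "half a million times faster" claim holds for latency; the bandwidth gap (800 MB/s vs 10-30 MB/s) is far smaller, which is why sequential disk I/O hurts much less than random paging.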
  • by DikSeaCup ( 767041 ) on Friday April 30, 2004 @09:33AM (#9017827) Homepage
    I had this conversation with a fellow sysadmin, about the time that RAM was fairly cheap and we had a budget.

    She had just procured a new Sun machine with 2 GB of RAM. Mind you, disk space hadn't grown all that significantly and you could still get machines with 9 GB drives.

    The original practice was to make swap 2x RAM. So when the student she had setting up the machine came to her and said, "What do I make swap?" she responded, "Twice the RAM."

    He said, "Are you sure? That's like almost half the boot drive."

    She thought about it for a second and said, "Oh, yeah. I guess just make it the same as the RAM."

    So this raises the questions: What do you make your swap now? When does your rule of thumb change? And remember when you could run a "fast" Linux box on a P100 with 64MB of RAM and 128MB of swap?

    • About 2 years ago I discussed this issue with an OS guru. He was of the mindset that you should always have Swap space = 10xmemory.

      I find that Linux just isn't that good at paging. I never use a significant portion of my 2GB swap partition, and memory contention is still high sometimes. Hmm... Maybe I do need to adjust the swapability number.
    • I have 1.5Gb of ram in my main box, and I set my swap size to 2Gb on all my drives. (hard limit so it won't grow)

      I then run a defrag program that moves the swap file to the inner tracks of the HD's.
    • My normal practice is to use fixed values: if RAM = 512MB then swap = 128MB. I only have 512MB DDR in my Linux box with 128MB of swap. The ONLY time I get into swap space is when I run WineX games, and my swap has still never gone above 60% full. Although with the new info from this article, I'm going to mess with my swappiness file and see what benefit tuning can give me, though units from 0 to 100 are kinda vague values. I think the 2x RAM rule is completely outdated.
    • We do understand that paging is different than swapping, and that Solaris has changed the memory allocators and algorithms multiple times across releases right?

      That said, you might want to look into a recent Solaris Internals book or course, and also look into the history of things like priority_paging and page coloring ..
    • The reason the original practice in Sun shops was to have swap be twice the RAM is that SunOS4 swaps real ram to swap space on a 1:1 basis such that the first n bytes (where n is the number of bytes of physical memory) correspond to the first n bytes of the swap file.

      When SunOS5 rolled around, this was no longer necessary, and your swap is additive, so you only need as much swap as, well, you actually need.

      On my linux firewall system with 256MB real RAM, I have 512MB swap space. On my Windows system with 1GB real RAM, I have 768MB of swap space. This number is actually a hold-over from when I only had 512MB of RAM, I could probably decrease it to just about nothing now.

      Amusingly enough my system has ~480MB of real RAM free, and is using 701MB of my paging file. Go windows! Like I need 480MB free all the time. Still, it is nice not to have to swap something out if I start a big application - but Windows is awful about returning from swap.

      Some other more or less useless data points: My Indy (running gentoo) with 128MB has 256MB swap, which has been enough. I probably could have gotten away with 128MB but believe it or not my primary concern is whether I'll be able to compile some of the biggest C++ programs without the larger amount of swap. Certainly 128MB will not do it, even when you are booted from the gentoo installer CD and there's nothing much running.

      • Some other more or less useless data points: My Indy (running gentoo) with 128MB has 256MB swap, which has been enough. I probably could have gotten away with 128MB but believe it or not my primary concern is whether I'll be able to compile some of the biggest C++ programs without the larger amount of swap. Certainly 128MB will not do it, even when you are booted from the gentoo installer CD and there's nothing much running. Just an interesting tidbit here. Using knoppixDistcc on a box it rarely goes abov
        • I have a vm(ware) with gentoo in it which I use for testing of stuff I am afraid might summon satan all over my system, and I use it for distcc. It has 256MB real plus 256MB swap, since I use it for other purposes.

          I am planning to add more Indys to my stack and cluster 'em. A friend of mine has some R4600PC indys he's not using, and plans to give me a few of 'em, soon as I make a four hour drive to go pick them up.

    • by 13Echo ( 209846 ) on Friday April 30, 2004 @10:09AM (#9018171) Homepage Journal
      I don't normally make my swaps more than 512 MB on my Linux machines. In fact, when I had 1024 MB of RAM in my last machine, it only ever touched the swap once (when I was compiling Mozilla). The machine was so responsive with 1024 MB of RAM, it virtually never needed the swap.

      Now that I have a newer machine, and RAM prices have increased (had to replace SDRAM with DDR), I only have 512 MB in my home machine. It seems to be nearly as responsive, practically never needing to touch the swap. I've only ever seen it use a few MB of the swapfile. When partitioning my Linux drives, I almost always have more than one drive in the machine. HDA1 normally gets the root partition. HDB1 is normally my swap, at the front 512 MB of the drive, followed by home on HDB2. This system makes everything snappy.

      Even on my work machine, which is only a P3 450 with 256 MB of RAM, things operate quite well under Gnome 2. I have two drives in that machine as well, and the swap is on a separate drive from the root partition. Programs can load from one drive while simultaneously swapping (if necessary) to the second drive. Even with Gnome 2 running, in addition to my browser and several other apps, only a few KB of swap space is in use.

      I can't see most desktop Linux users needing more than 512 MB of swapfile space, assuming that they have at least 256 MB of RAM. The general rule of thumb, though, is to put the swap partition at the front of the drive for the best performance, in the event that it does need to get used.

      I've really been impressed with Linux's memory management, even in the 2.2/2.4 series kernels. I've heard that 2.6 even makes some improvements as well. When I used Windows 2000, on the other hand, it INSISTED on using the swap even with a gig of RAM, even after I tweaked it for the best performance. I even used a RAID0 array, and Linux is still faster and more efficient at managing memory WITHOUT the RAID array. I was surprised that the array wasn't even really needed on Linux for fantastic disk access speeds with my 3 year old 7200 RPM drives.

      Of course, the rules will be different for server application. More swap is probably a necessary thing. It's possible, however, that users of Linux (on the desktop) may not even need a swapfile with more than 512 MB of RAM.
    • In the last two years I have had a lot of conversations about this with people, mostly because I often run Sun boxes with > 4GB of RAM. I have heard a lot of varying opinions ranging from 1.5x RAM to just don't bother with swap if you have a lot of RAM. I usually just deal with it by dedicating an entire 36 gig disk to swap in servers and use a much smaller swap partition on workstations.
    • by TheLink ( 130905 ) on Friday April 30, 2004 @12:18PM (#9019618) Journal
      Well, here's my thoughts on swap.

      First you should worry about how your O/S does "memory overcommit".

      Many O/Ses overcommit memory. How they handle the case when it turns out there really isn't any memory left (including swap) is what you'd want to know. Some O/Ses (and versions of O/Ses) effectively kill -9 random processes till there's enough RAM to run. Some applications intentionally allocate large amounts of memory and usually don't ever use it, so they won't work if you have overcommit turned off (and not enough RAM+swap).

      If you have tons of swap just to work around your O/S's poor handling of memory overcommit, you may end up in a death spiral of swapping. Running processes page by page off your HDD isn't fun to watch (it's so '50s, or was that '60s :) ).
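The overcommit behaviour described above is itself tunable on 2.4/2.6-era Linux, via a knob next to swappiness (a read-only sketch; the mode meanings assume the kernel defaults of that era):

```shell
# Inspect the overcommit policy:
#   0 = heuristic overcommit (the default), 1 = always grant allocations,
#   2 = strict accounting against RAM + swap (no overcommit)
cat /proc/sys/vm/overcommit_memory 2>/dev/null || echo "knob not present"
```

Mode 2 is the one that makes "allocate huge, touch little" applications fail up front instead of inviting the random-kill behaviour mentioned above.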

      My HDD transfers at max 40-50MB/sec, random seek transfer maybe about 11MB/sec.

      At worst case how long does it take to swap out and swap in the largest process you'd ever have, given the speed of the HDD? Can you wait that long? Can the app wait that long? Will the machine be dead for practical purposes?

      So if you can wait 20 secs, maybe 512MB is ok, assuming the pig process only uses half or so of your swap (plus whatever physical RAM you have).

      But with a small swap, you may run out of mem and hit the memory overcommit scenario.

      I'd still keep swap -- just so that when my machine runs low on mem it starts slowing down, rather than slamming full speed into a hard wall.
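The worst-case wait described above is just process size over disk throughput. With the numbers quoted (a 512 MB pig, and ~25 MB/s as a middle ground between the ~11 MB/s random and ~50 MB/s sequential figures):

```shell
process_mb=512          # the largest process you'd ever have to page back in
disk_mb_per_sec=25      # between the quoted random and sequential rates
echo "$(( process_mb / disk_mb_per_sec )) seconds to page it all back in"
# prints "20 seconds to page it all back in"
```

Which is exactly the order of the 20-second wait the comment uses to size swap.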
    • "And remember when you could run a "fast" linux box on a P100 with 64MB of RAM and 128MB of swap?"

      Yes, that would be the day before I "upgraded" from Red Hat Linux 6.2 to RHL 9. (P200 64MB, swap partition on a separate HD) I use fvwm and I don't expect mozilla to be fast, but it really sucks when it takes several seconds to get the menu to pop up on an xterm. I have a pathetic fantasy that I will upgrade to a 2.6 kernel and my system will work as well as it did 5 years ago (and that I will get an X ser

  • Problem (Score:5, Interesting)

    by FreeLinux ( 555387 ) on Friday April 30, 2004 @09:34AM (#9017836)
    Personally, I just try to keep my memory usage below the physical memory in my machine, but I guess that's not always possible..."

    No it isn't possible. With today's RAM prices I almost always have more physical RAM than the system requires. But, due to aggressive VM swapping, there are still hundreds of megs swapped out to disk when there is no need at all. This means that those applications, when their time does finally come, are slow because they must be retrieved from disk first. It's really annoying sometimes. Yet, even with excess RAM, turning off swap is disastrous.

    • Re:Problem (Score:5, Interesting)

      by Trifthen ( 40989 ) on Friday April 30, 2004 @09:43AM (#9017935) Homepage

      No, turning off swap is not disastrous. We've turned it off on our production web server cluster that routinely serves 60Mb sustained traffic. We've turned it off because we have 2GB of ram in these machines, and Linux insisted on preferring buffers and cache over our running applications. Fuck that, we said. With over 1GB Of buffers and cache, we had RAM to spare; bye-bye swap.

      Yet, even with excess RAM turning off swap is disastrous.

      I find that swap partitions in Linux and FreeBSD are just a nuisance once you've got enough RAM for your apps. Swap files are preferable because you can change the size and number of the files after installation. Swap partitions are just wasting valuable space on your HDD.

      I have 1Gb of RAM on my laptop and Linux, FreeBSD, Windows 98 SE and Windows XP all run fine without any swap partitions or files on my quadruple boot.

      The virtual memory alg

    • Turning off swap is only disastrous on Windows (and older versions of Unix), which, last I checked (I have not checked very recently), would let you turn off the paging files entirely but then would not boot without them, whether you have enough memory to contain everything or not.
    • I thought the point was to swap things out when more RAM was needed than available. If big-app is using all the memory and I start something else, big-app goes to disk. Why swap if not needed? That would be like windows programs that load at startup - everything loads at startup so it will be faster when you want to use it, but that causes my startup time to be like 2 minutes...

      Don't swap until it's necessary seems the right thing to do. If IO isn't busy, you could send older data to disk, but you'd need a

    • Comment removed based on user account deletion
  • by nuggz ( 69912 )
    Personally, I just try to keep my memory usage below the physical memory in my machine, but I guess that's not always possible..."

    No, it isn't really. Unless you don't use your computer.
    In some cases it makes sense to use your physical memory as disk cache rather than for unused applications.
    Swap out that sshd, and give the database server more memory. Swap out that screensaver and email client, give quake more.
  • So, what... I want my apps paged out to disk so that I can wait for them to be loaded back in when I switch over from Mozilla to Open Office?

  • The big issue (Score:5, Interesting)

    by MrIrwin ( 761231 ) on Friday April 30, 2004 @09:36AM (#9017858) Journal
    The main cause of memory usage on Linux is the use of many different shared libraries, not bloated apps.

    I think developers could do more at the library level. For example... dare I suggest using common sub-libraries within libraries; that is, people like KDE and GTK get their heads together and ask "are there functions we include in our libraries that could just as well be linked to an underlying library?"

    • Re:The big issue (Score:3, Informative)

      by turgid ( 580780 )
      dare I suggest using common sub-libraries within libraries; that is, people like KDE and GTK get their heads together and ask "are there functions we include in our libraries that could just as well be linked to an underlying library?"

      Well, you see, KDE is written in C++. GTK is C. C++ stuff does not play well across different versions of the same compiler, let alone different compilers or even different languages.

      In theory you're only "supposed" to use either GNOME or KDE and therefore only have one set of libr

    • Re:The big issue (Score:3, Informative)

      In fact, one big problem is the way that the loader performs relocations on C++ libraries. Google it. It's why KDE apps take a few seconds to load (and it used to be even worse). IIRC the main problem is that many objects (functions, variables, etc.) need to be copied into the address space of each application using them, so the sharing never happens in practice.
  • by Pharmboy ( 216950 ) on Friday April 30, 2004 @09:37AM (#9017861) Journal
    Personally, I just try to keep my memory usage below the physical memory in my machine, but I guess that's not always possible..."

    I keep my memory usage much below the total RAM on the servers, but in real life, the machine still swaps. This is because even though the machine NEVER needs more RAM than is available at any given time, over a period of days it will use more than the available RAM. It caches out the old data that was used 12 hours ago.

    Unless you reboot every day (as in a client machine) you will use swap on just about any machine. Using swap is not bad. Using swap for a currently running application is not so good. This isn't a bug, it's a feature. Reading data back from swap after it has been paged out is still faster than reading new data from the drives, especially if it's a network drive.
    • Using swap is not bad.

      No it isn't, but constantly swapping a lot of things in and out is, and you'll notice a considerable slowdown of your machine.

      And that's when you need to consider buying more ram.

  • by YetAnotherName ( 168064 ) on Friday April 30, 2004 @09:37AM (#9017862) Homepage
    You really don't want hundreds of megabytes of BloatyApp's untouched memory floating about in the machine...

    Why not? BloatyApp, if it's that bloaty, is probably an object-oriented program with template instantiation (or is by Micro$oft); these programs are notoriously huge, but also have notoriously poor locality of reference. The user will get better perceived response if you can keep more of BloatyApp resident.

    If there's space in memory, I don't see the point of pre-emptively ejecting as many LRU pages of BloatyApp as possible. (Of course, I haven't RTFA, but this is /. so you're not supposed to!)
  • by Trifthen ( 40989 ) on Friday April 30, 2004 @09:37AM (#9017866) Homepage

    Ah yes. It's all the fault of bloaty apps. Apps like database daemons and high-traffic httpd daemons. We've turned swapping off on our servers because we were sick of seeing almost a GB of cache/buffer memory while it was swapping 500MB of shit to disk. Want a bloaty app? How about the Linux kernel? I love the thing, but Jesus Tapdancing Christ, it would rather swap our starting DB process to disk than free up the fucking buffers and cache. Is there something wrong with wanting it to give precedence to not swapping?

    • by Anonymous Coward
      Yeah, there is nothing I love more than coming back to an idle X console session on a box I haven't touched in a while and watching it grind itself into oblivion because everything has been paged out.
    • You could always just tune the cache down to bugger all. It's one of the kernel parameters.

    • We've turned swapping off on our servers because we were sick of seeing almost a GB of cache/buffer memory, while it was swapping 500MB of shit to disk.

      Your server apparently believed that it was accessing that cache and buffer more often than that half gig of random pages. Do you have real reason to believe that it was wrong, or does that just "seem" bad?

      In other words, do you have actual numbers to demonstrate that your kernel was making poor decisions, or are you only fairly sure that it was?

  • by redelm ( 54142 ) on Friday April 30, 2004 @09:38AM (#9017893) Homepage
    Ever since I've had a 32 MB machine (1997), I've not bothered to even set up a swap partition. On the rare occasions when I need swap, I'll create a swapfile. Sure it's slower, but swap is already hugely slow.

    With read-only & demand code-page loading and copy-on-write even bloatware really doesn't eat memory. And bloatware has to be frequently restarted to recover the memory it leaks.

    Sure, there are some jobs that need swap -- lots of seldom-used memory pages.

    But not mine. I prefer to save myself the complexity and performance headaches.

  • VM you say? (Score:3, Insightful)

    by freeze128 ( 544774 ) on Friday April 30, 2004 @09:40AM (#9017902)
    At what point does VM stop meaning Virtual Machine and start meaning Virtual Memory?

    Or is it just the Virtual "M"?
    • VM meant virtual memory a long time before it meant virtual machine.

      It's the change in meaning of UML that I can't get my head around these days...
  • Other reasons (Score:5, Interesting)

    by Halo1 ( 136547 ) on Friday April 30, 2004 @09:45AM (#9017948)
    Another reason to gradually and pro-actively swap things out, is that when another program later needs a lot of memory, your system doesn't come to a grinding halt because suddenly a lot of stuff has to be swapped out at once (followed by zeroing all that memory, since you don't want to have one program leaking data to another).

    At least, that's the rationale I've read behind OS X's strategy of swapping things out long before all physical memory is used (and of keeping a pool of zeroed memory pages ready to fulfill most requests). Note that this does not require superfluous swap-ins if your reuse strategy is balanced properly, as the fact that something is swapped out doesn't mean that the memory which contained that data will be cleared/reused immediately (i.e., if it's needed again shortly afterwards, that page can be reactivated without having to go to disk).

    Under most desktop OS'es, programs can even give some hints to the system regarding their usage of a memory region using e.g. the madvise() system call.
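The madvise() hint mentioned above looks roughly like this from userspace. A minimal sketch (Python 3.8+, which exposes madvise only on platforms that support it; the helper name is my own invention):

```python
import mmap

def hint_dont_need(buf: mmap.mmap) -> bool:
    """Advise the kernel that buf's pages are not needed soon, making
    them cheap eviction candidates. Returns True only if the hint was
    actually issued (madvise is platform-dependent)."""
    if hasattr(buf, "madvise") and hasattr(mmap, "MADV_DONTNEED"):
        buf.madvise(mmap.MADV_DONTNEED)
        return True
    return False

# Anonymous mapping standing in for BloatyApp's idle working set.
region = mmap.mmap(-1, 4096)
region[:5] = b"hello"
hinted = hint_dont_need(region)
```

Other hints such as MADV_SEQUENTIAL or MADV_WILLNEED work the same way; they are advisory, so the kernel is free to ignore them.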
    • My vote.... (Score:4, Informative)

      by tsmithnj ( 738472 ) on Friday April 30, 2004 @10:00AM (#9018081)
      is to do something like AIX does, where I can use "vmtune" to customize the percentages of memory I devote (hard or soft limit) to filesystem pages or computational pages. This way I can tune for my Bloatware, tune for file copying a la XP, or tune for my DBMS, whatever suits me.... The developers could take it one step further and provide a simple, understandable (as opposed to AIX's) interface for configuration......
    • Re:Other reasons (Score:3, Insightful)

      by Just Some Guy ( 3352 )
      At least, that's the rationale I've read behind OS X's strategy of swapping things out long before all physical memory is used (and of keeping a pool of zeroed memory pages ready to fulfill most requests).

      That's what FreeBSD's been doing for years, and for a long time kernel hackers spoke in awe of the much-vaunted FreeBSD VMM. Now that Linux has implemented a similar strategy, everyone's freaking out like it's some new ego trip that no one's ever tried before.

      The "new" system is what other OSes have been doing for years.

  • Under the performance tab you can use the slider to tune the machine for 'foreground' or 'background' apps.
  • echo 0 > /proc/sys/vm/swappiness
  • by Anonymous Coward
    In the good old days, "chmod +t prog" told the kernel to leave prog in the swap partition even after it had exited. It was a way of making humongous programs like vi :-) more responsive on startup.

    In modern Unices (including Linux) last I heard, the sticky bit is ignored since everything is simply demand paged.

    Couldn't the sticky bit be revived with some similar meaning? As in, "don't be too keen on paging these out"?

  • by buserror ( 115301 ) * on Friday April 30, 2004 @10:05AM (#9018131)
    I don't mind the kernel swapping out "old" stuff to grow a huge disk cache. Really, that's OK; it makes things faster for disk-hungry processes, all right.

    However, what I mind is the fact that the pages that are swapped out STAY there!
    Why not age the disk cache the same way the RAM pages are aged? On an idle machine, the disk cache would gradually decay and be replaced by the pages coming back from swap, and the machine would be all responsive again.

    It means that if the user leaves for lunch and a cron wants to eat all the disk, with some luck, when the user gets back, his machine is as responsive as it was when he left.

    I have a laptop with 192MB of RAM, and I always hate it when 2/3 of the RAM is "free" while it takes 10 seconds for the kmail window to move to the front. Even if the machine has been idle for hours.

    I even regularly do a "swapoff -a; swapon -a" to claim the cache back!
      I have a laptop with 192MB of RAM, I always hate when 2/3 of the RAM is "free" while it takes 10 seconds for the kmail window to move to the front. Even if the machine has been idle for hours.

      I know what you mean, but in this case it seems like your machine is making a reasonable guess: you haven't used kmail in hours, so the odds of you wanting to resume using it at any particular instant are pretty low. On the other hand, reading from a drive is quite a bit faster than writing, so the penalty for incorrectly swapping out old pages when the system is idle is significantly less than the penalty for failing to swap out old pages before users launch giant processes that want to allocate a lot of RAM very quickly.

    Many people don't realize how much smarter modern page caches have become at speeding up their systems. Linux, MacOS X, Win2K+, etc. all boast aggressive page caches that make loading applications from disk more efficient.

    Without a swap file, the kernel has no place to stick memory segments that are rarely used. They stay in resident memory la-la land until the process is terminated. Those segments add up over time and erode the memory available to the page cache.

    Page caches are wonderful. When you load an application (like Firefox [mozilla.org]), you're not just getting the web browser. You're firing up a large chain of shared objects/DLLs that support the widgets, I/O, and components of the application. All of these components must be read into memory anyhow for program operation, so the kernel tends to just leave it in there for future use (the page cache).

    When you shut down Firefox, you also release those libraries (provided nothing else is using them), and they remove themselves from memory. If you then load another application (like Thunderbird [mozilla.org]) that uses the same libraries, the kernel will not have to go to disk to fetch them. It will instead opt for the page cache contents.

    Turning off the swap file in the historic era of VM infancy was the best way to remove the hard drive bottleneck from the system. The operating systems of yesteryear did not have good page cache schemes that took advantage of all that unused memory. It is a little different now.

    Applications are so modularized that they are broken up into billions of smaller libraries so that code can be shared. This increases memory efficiency by keeping a shared library resident for multiple processes. These libraries are frequently accessed, more often than many people realize. Getting THOSE into memory is better than making sure my 500+ Linux applications stay resident.
    $ cat /proc/meminfo
            total:      used:      free:   shared:  buffers:   cached:
    Mem:  1055653888 1036296192  19357696        0  70488064 892309504
    Swap:  542367744  235892736 306475008
    Notice that on a web server with 1GB of RAM the Linux kernel is still putting things out to swap. These processes that stay asleep for long periods of time do not need to waste the memory that the page cache is currently using (892309504 bytes, or about 851MB). What would be stored in that 851MB of memory? The database that drives the website (instead of having to seek the disk). The entire web page hierarchy used to display pages on the site. All the scripts used to generate dynamic content on the site (etc. etc.)

    Now, if we subtracted from the page cache the amount of memory that was stored in the swap file, we would have over 200MB less that we could keep cached in memory. That could be an entire database that the kernel would then waste needless CPU cycles to fetch from disk.

    The only advantage to turning off the swap file on these modern machines would be for a machine that runs only a select few applications and doesn't have a lot of processes doing things in the background.
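To make the arithmetic concrete, here is a small sketch (plain Python; the function and field names are my own) that converts the byte counts quoted in the meminfo dump above into MB and shows how much page cache would be lost if the swapped-out pages had to stay resident:

```python
def meminfo_summary(cached: int, swap_used: int) -> dict:
    """Summarize the trade-off: page cache size, swap usage, and the
    cache that would remain if swapped pages stayed in RAM instead.
    All inputs are in bytes, as printed by /proc/meminfo above."""
    mb = 1024 * 1024
    return {
        "page_cache_mb": round(cached / mb, 1),
        "swap_used_mb": round(swap_used / mb, 1),
        "cache_without_swap_mb": round((cached - swap_used) / mb, 1),
    }

# Figures from the meminfo dump above:
summary = meminfo_summary(cached=892309504, swap_used=235892736)
```

With these numbers the cache is about 851MB, about 225MB is swapped out, and disabling swap would shrink the cache to roughly 626MB.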
    • > Without a swap file, the kernel has no place to
      > stick memory segments that are rarely used.

      Anyone who runs Mozilla on Windows 2000 knows that if you minimize Mozilla for half a day, despite having 768 MB of RAM and not using more than 300-400 MB of it at any given time, bringing Mozilla back to the foreground takes anywhere from 2-6 seconds (depending on the speed of your disk), which is just idiotic on a 2 GHz home machine with that much RAM.

      There is no reason what-so-ever that the OS should be s
  • by dorfsmay ( 566262 ) on Friday April 30, 2004 @10:06AM (#9018143) Homepage
    Hopefully they'll use something modern like ARC [python.org], which tries to keep in cache stuff that has been read at least twice, NOT LRU!!

    AIX uses LRU today, so when you do a backup, the system tries to keep all the filesystems in cache (well, that's what was read last!!), and will happily swap your apps out to disk in order to do so (with the default tuning parameters).

    I fondly remember the days when I was running Linux with no swap, none whatsoever...

  • by stuffduff ( 681819 ) on Friday April 30, 2004 @10:07AM (#9018153) Journal
    Programmers have put a lot of time and effort into the VM swapping algorithm; mostly with the intention of being prepared to have a lot of memory ready and waiting for the next thing it will be asked to do. Unfortunately that's not so much of an issue with cheap ram and disk storage and faster and faster front side buses. What we really need is more intelligent swapping, which can only come about when the VM gets a set of API hooks (would make for a great 'shared object') that would enable the system administrator (and maybe someday the end user) to assist an intelligent VM manager to establish priorities and consistently respect those priorities.

    Unfortunately the current crop of best guess VM managers end up denying the end user the experience of their computer's peak performance. Coupled with the horrible state of application bloat, modern 'state of the art' hardware and software combine to give us less and less in terms of overall performance. Software developers throw more code at the cpu to add functionality with little or no concern for performance. And hardware manufacturers add more and more 'special instructions' and 'pipelining' which the majority of software is completely unable to access. If anything it's more like a bunch of dysfunctional co-dependents than an industry that is cogent as to what really needs to be going on. If the folks dealing with processors and the application software could take a page from the gamers (look at the high levels of integration between game engines and video cards) for example, and more effort put into consolidating functionality in dlls and shared libraries; we would be amazed at how truly fast these machines could perform.

  • I don't know if Linux does this at all, but it seems that one useful VM strategy would be to copy to disk rather than swap to disk. That way you can continue to run BloatyApp without swapping it back in, but interactivity is still good since you can reuse memory immediately without needing to swap stuff out at the point the demand occurs. Of course it'd need to be tunable and/or smart (no point copying highly volatile areas of memory, for a start).
  • Not amused (Score:4, Interesting)

    by MrLaminar ( 774857 ) <laminar.linuxmail@org> on Friday April 30, 2004 @10:11AM (#9018195)
    Actually, I haven't been very impressed by the whole swapping thing under Linux lately. I'm running 2.4.22 with a 400MB swapfile.

    Some apps _can_ make the system unresponsive enough to ignore keystrokes, which is *very* annoying. At other times, xmms will stop playing while the disk goes crazy... Switching from emacs to Firefox after 10 minutes usually takes an extra 5 seconds to redraw the window and load all the stuff again.

    Running GNOME2 on this laptop is also quite noisy on the disk. It swaps all the time...
  • 'My point is that decreasing the tendency of the kernel to swap stuff out is wrong. You really don't want hundreds of megabytes of BloatyApp's untouched memory floating about in the machine. Get it out on the disk, use the memory for something useful.

    This point is useful, but only if free RAM is at a premium. For the most part, on servers, there will be sufficient RAM to support the on-board applications, and the amount of free RAM remaining will be able to handle the variable load of a standard workday.

  • It seems that the original thread was not about swapping in or out, but about the amount of cache that is used by the kernel.

    I just have the same problem. I have 2G RAM, and I run my KDE desktop, some standard server programs and some UML instances.

    When I create UML instances (e.g. an 8GB image), my memory gets full and is not easily reclaimed.

    I agree with the philosophy of the buffers and the cache, to speed up IO operations for recently accessed files, but I do not agree with the time that they are i

  • Swapping back in. (Score:4, Insightful)

    by AlecC ( 512609 ) <aleccawley@gmail.com> on Friday April 30, 2004 @10:42AM (#9018548)
    I feel that there should be some tunable propensity for applications to swap back in. Generally speaking, disk cache is most effective over a pretty short timescale - seconds or a few minutes. It is very effective with a multi-pass compiler to cache the output of one pass so it can be read in by the next. But this sort of thing has a relatively narrow window.

    So what you want to do is:
    • Apps which haven't been used for a time get swapped out.
    • Cached blocks decay with time, decaying faster if the system idles a lot (presumably the big jobs have stopped), slower if the system is very busy (more likely there is something to re-use cache)
    • As cache blocks decay out, BloatyApp is gradually sucked back in. In GUI environments, the window manager flags the pressure to return as proportional to (say) the number of pixels of visible screen it occupies. Of course, having swapped out once, if it never restarts, you can throw it out a second time if you need your cache back.

    So if the guy goes to lunch leaving a big make running, it gradually pushes the big apps out while it runs. But if the big make completes, the apps start crawling slowly back in. If it hasn't finished when he comes back from lunch, he probably wants it to carry on running the make: since the CPU is at 100% load, he is probably not surprised it is sluggish.
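The idle-dependent decay described above can be sketched as a toy ranking function (plain Python; the decay weights and names are invented for illustration, not taken from any real VM):

```python
def evict_order(pages, idle):
    """Rank pages most-evictable-first under the scheme above: cached
    file blocks age faster when the system is idle, so an idle machine
    gradually trades disk cache back for application pages.

    pages: list of (name, kind, age_ticks), kind in {"cache", "app"}.
    """
    cache_decay = 3.0 if idle else 1.0   # hypothetical decay weights
    def effective_age(page):
        _, kind, age = page
        return age * (cache_decay if kind == "cache" else 1.0)
    return [name for name, _, _ in
            sorted(pages, key=effective_age, reverse=True)]

pages = [("bloaty_app", "app", 10),
         ("make_output", "cache", 5),
         ("db_index", "cache", 2)]
```

While busy, the oldest app page goes first; once idle, cache blocks decay three times as fast and are reclaimed ahead of it, which is the "apps crawl back in over lunch" behaviour.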
  • by chrysalis ( 50680 ) on Friday April 30, 2004 @11:03AM (#9018784) Homepage
    http://00f.net/item/14/
    describes why swapping is _good_.

  • by squarooticus ( 5092 ) on Friday April 30, 2004 @11:37AM (#9019122) Homepage
    Best performance improvement I ever got with the 2.4 series kernels was shutting off swap. My machine immediately became more responsive. From that point forward, I wouldn't come back to the machine after an hour away and encounter a jerky X mouse cursor because, the instant I turned off the screensaver, the kernel had to page all 128MB of my applications back into my 512MB of RAM because it decided buffer cache was more important than code.

    The 2.4 VM changes causing this behavior were awful, and it's too bad that I have to sacrifice a large (disk-based) physical address space, but I'm not going to put up with my applications being paged out when I have 4x as much RAM as code I'm running. Just allowing the system admin to put a limit on the size of the buffer cache would probably solve most of my problems, but instead I have to turn off swap. Too bad.
  • Keep two copies (Score:3, Insightful)

    by kasperd ( 592156 ) on Friday April 30, 2004 @11:42AM (#9019184) Homepage Journal
    Swapping out data before you need the free RAM would be a great idea if you kept two copies. One copy on disk and one copy in RAM. In fact it would be fine if the system swapped out 90% or more of the process memory this way. There will now be three different cases to think about.
    1. The process needs to read the page - no problem, one copy is in RAM just read it, and keep both copies.
    2. The process needs to write the page - no problem, you can modify the copy in RAM and discard the copy on disk. Notice that discarding the copy on disk doesn't require any disk access, as the list of swap allocations will typically be in RAM (it is much smaller than the swapspace).
    3. You actually need memory - no problem, discard a not recently used RAM page, you still have a copy on disk.
    The only problem is that you need to make the page read-only, so you can trap the write and discard the on-disk copy. In other words, don't do this for pages that are frequently changed. But usually you don't have many pages that are frequently changed, and you certainly don't want to swap out those you have. And should you occasionally happen to swap one out, it is not really a major problem. It will cost you a pagefault, but no disk I/O, and a pagefault is cheap compared to a disk I/O. A system that behaves as I have described here would use a lot more swap space than Linux typically does, but it should still be faster. I wonder why this isn't done more often; it is not as if the idea hasn't been known for years.
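The three cases above can be sketched as a toy model (plain Python; all names are invented, and real kernels track this per page-table entry rather than per object):

```python
class Page:
    """A page that may have both a RAM copy and a clean swap copy."""
    def __init__(self, data):
        self.ram = data       # in-RAM copy (None once reclaimed)
        self.clean = False    # True if an identical copy is on swap

def preclean(page, swap, slot):
    """Proactively write the page out while keeping the RAM copy."""
    swap[slot] = page.ram
    page.clean = True

def read(page):
    """Case 1: the RAM copy is authoritative; no disk I/O needed."""
    return page.ram

def write(page, data, swap, slot):
    """Case 2: modify RAM and discard the now-stale disk copy.
    Dropping the swap slot is a table update, not a disk access."""
    page.ram = data
    if page.clean:
        swap.pop(slot, None)
        page.clean = False

def reclaim(page):
    """Case 3: under memory pressure, drop the RAM copy for free,
    but only if a clean disk copy exists."""
    if page.clean:
        page.ram = None
        return True
    return False
```

The write path is where the read-only trap comes in: the page must be write-protected so the kernel gets a (cheap) fault and can invalidate the swap copy.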

    Another problem that many have noticed, and that isn't easy to deal with, is heavy disk access causing the cache to grow and stuff getting swapped out. Yes, even some Linux versions suffer from this problem. A Red Hat 9 system I had running for months was really slow in the morning, because all the programs had been swapped out while cron jobs were running during the night. But you never know when it is a good idea to swap the stuff out and when it is not. When the disk access is going on, the process page might not have been used for hours, but still you might want it kept in RAM. File pages that have been accessed just once shouldn't be kept in cache for a long time, but of course you shouldn't remove them unless the memory is needed for something else. Removing the pages too early is also bad, because you wouldn't notice that this was really a page that was going to be accessed frequently. Some people are fanatical and don't want process pages to ever get swapped out to make room for cache. That isn't a good idea either. You can really have process pages that may not be needed even once; do you want such a page kept in RAM for months just in case? And notice that disabling swap is not going to solve the problem: you still have to think about memory-mapped files, which in many ways must be treated like anonymous mappings.
  • by kmankmankman2001 ( 567212 ) on Friday April 30, 2004 @11:52AM (#9019298)
    The universal IT answer of "it depends" applies here as well. Yes, having Mr. Bloaty App glob onto scads of memory that are then not referenced for long periods can have a negative impact on other apps if the system becomes memory-constrained. And yes, if the memory manager swaps a bunch of unreferenced memory out to disk and Mr. User has to wait a long time for Mr. Bloaty App to become responsive because it was his memory that got swapped out, that's a problem too.

    The ideal is to be able to address this (haha, bad pun) at the application level and not simply at a global level. This has been the standard on the mainframe (MVS, OS/390, z/OS) operating systems for a long time, where there is a very sophisticated virtual memory manager. If there are, say, 100 apps and 2 of them are very sensitive to response time, most of them aren't, and 10 are just dead dogs you couldn't care less about, how nice is it to be able to actually tell the system that? The 2 "loved ones" then receive preferential storage treatment at the expense of the other, "less loved ones", and the dead dogs are always first in the pecking order of who to steal storage from. The memory manager then acts to maintain the responsiveness of the applications (the reason we run OSes in the first place) to meet the needs and expectations of the users (the reason we run the apps).

    Without that ability, arguing over "more swappy" vs. "less swappy" when it's only applied at a global, default level is not especially productive, except within the context of attempting to establish where the best general-use default happy setting is - for the general-use default system we all use (is that you? I know it's not me).
  • by ChaosDiscord ( 4913 ) on Friday April 30, 2004 @11:57AM (#9019374) Homepage Journal
    Personally, I just try to keep my memory usage below the physical memory in my machine, but I guess that's not always possible...

    I've seen a number of posts echoing this point, overlooking one of the key reasons for swapping. It's not just because you're out of memory for applications; it's because sometimes there are better things to be doing with your memory. Mainstream operating systems use otherwise unused memory to cache disk access, dramatically speeding things up. If you've got a process that hasn't run for a while, it may actually be more efficient to swap it to disk. This frees up memory to cache data that may be being hit quite frequently. inetd hasn't been needed for a while? Swap it out so that your disk cache is larger, benefiting your heavily used web server.

    To be fair, when to make that trade off is very tricky and will never work perfectly 100% of the time. Inevitably you'll occasionally be burned by a bad decision. But there are real benefits. The real question is not how to turn it off, the question is how to improve it and perhaps how to allow users to tune it for their needs.

  • adaptive algorithms (Score:3, Interesting)

    by mugnyte ( 203225 ) * on Friday April 30, 2004 @01:13PM (#9020263) Journal
    Can someone please describe any adaptive algorithms that could be used. Specifically, I'm thinking of:

    - dirty-marking unreferenced pages when swapped out. If these mem pages are not used after the swap-out, there's no need to swap them in again. I'm pretty sure this already occurs.

    - for processes making high swap demands, increase their weighted priority for pages, with a windowed average over swaps. That way my database process could hog memory under load while my less-used apps may swap because they're used less often. Could be tailored differently for code versus data segments.

    - page-image comparisons to avoid holding duplicate code-segment pages in memory. This plays with the concept of shared libs a bit, but could avoid duplicate pages, especially if this information is saved in a precalc'd hash table that is stored.

    just ideas.
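The page-image comparison idea is close to what Linux later shipped as KSM (kernel samepage merging). A toy userspace sketch of the hashing step (plain Python; names invented, and a real implementation would verify candidates with a full memcmp and mark the shared page copy-on-write):

```python
import hashlib

def merge_identical_pages(pages):
    """Collapse byte-identical pages onto one shared copy, keyed by a
    content hash. Returns the deduplicated page list and the number of
    unique pages actually kept resident."""
    by_hash = {}
    merged = []
    for data in pages:
        digest = hashlib.sha256(data).digest()
        # First page with this content becomes the shared copy.
        merged.append(by_hash.setdefault(digest, data))
    return merged, len(by_hash)

# Two zero pages and one distinct page: only two copies stay resident.
pages = [b"\x00" * 4096, b"A" * 4096, b"\x00" * 4096]
merged, unique = merge_identical_pages(pages)
```

Zero-filled pages (common across processes) are the classic win here, which is exactly the duplicate-code-segment case the parent describes.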
  • by Peaker ( 72084 ) <gnupeaker @ y a h oo.com> on Friday April 30, 2004 @01:27PM (#9020442) Homepage
    It is a difficult dilemma, but that's because an overly complicated scheme is used.

    There is a simpler and more powerful scheme that unifies swapping and disk caches, while allowing applications to persist between reboots, all with better performance than current systems!

    EROS implements [eros-os.org] such a system. Generally it is referred to as "Orthogonal persistence", and functionally it behaves as though the computer is "always on", and returns to the exact state it was in after a reboot. The thing is, with orthogonal persistence, the structure on the disk is not a file system, but just the application data.

    Since applications no longer work with the disk explicitly (open/read/write) but only with one type of memory (persistent memory), the OS manages all of the disk I/O, and it allows it to eliminate almost completely the largest delay in disk-work - the seek time in all writes. Since all application memory is just mapped to disk transparently, all RAM is just considered a "disk cache", and the kernel does not have to make nasty tradeoffs between disk caches (of explicit open/read/write calls) and virtual memory.

    Of course there is still a problem if large work-areas of unimportant applications "swap out" smaller areas of important applications. I suggest solving that by prioritizing pages to the memory manager. In a system like *nix it is not a problem. In more secure systems however (EROS, for instance), it may create additional covert channels between applications so it was avoided.
  • by avij ( 105924 ) * on Friday April 30, 2004 @02:23PM (#9021029) Homepage
    My approach has been to start all the needed services and then run this small perl script (which I named memhog.pl) to create a process that hogs quite a bit of memory:

    #!/usr/bin/perl -w
    use strict;

    # Allocate roughly 1.3GB (10 bytes x 131*1024*1024) so that idle
    # pages of other processes get pushed out to swap.
    my $a = "xxxxxxxxxx" x (131 * 1024 * 1024);


    This is just a quick hack, you may want to adjust the size to suit your memory size. The server from where this script was copied has 2GB of memory. Essentially I want to page out all the stuff that doesn't get used after starting the server and the related server processes. Of course, given enough time the server would swap out those pages anyway, but this method just does it quicker. After the script has been run, the server will gradually swap in those pages it really needs. OK, doing this may be pointless but I don't care ;)
  • by shaitand ( 626655 ) * on Friday April 30, 2004 @06:44PM (#9023669) Journal
    Sorry but this is not a complex equation and I think these guys are getting wrapped up in too many details and missing the big picture.

    The hard drive is really, really, really, really fscking slow. In comparison, RAM is really, really, really fast. As a result, you want to interact with the hard drive as little as you possibly can, and interact with RAM instead as much as you possibly can (the only thing which beats that is interacting with only the CPU registers and avoiding RAM and the hard drive altogether).

    As is, Linux doesn't even begin touching the disk until there is only enough RAM left to turn on VM. Now, this has a negative impact when that limit is reached, because there is overhead in turning it on... but this impact is negligible and tweakable, since you can wait and see if you're hitting the limit, add more memory, then reevaluate until you simply aren't swapping. This is a good thing.

    One of the worst things Windows does is swap constantly. In fact, beyond a certain point (read: enough RAM to run an XP desktop), the system swaps MORE if you have more RAM. You boot the system with all unneeded services turned off, no startup processes, and all the eyecandy turned off. And you've got 4GB of RAM in the system; guess what, it's already using VM.

    Maybe VM management itself could be tweaked more, but it certainly shouldn't be used unless it absolutely has to (and if you don't have enough ram and it has to all the time then it's not like you suffer that performance hit more than once).

    The only exception to this I've found is a Linux desktop running KDE or GNOME with about 256MB of RAM; at that point the numbers seem to work out just about right (or wrong, I should say) and the system is constantly turning VM on and off, encountering the performance hit again and again and again, with pretty much every operation you perform.
  • Sticky Bit!!! (Score:4, Interesting)

    by cgleba ( 521624 ) on Saturday May 01, 2004 @01:13AM (#9025889)
    Here's a solution to the whole debate -- make the sticky bit have meaning under Linux like it does on other UNIXen: if the sticky bit is set on the executable, do not swap it; if it is not set, the executable is free to be swapped. This solves the entire debate (for instance, if you don't want the 'interactive' mozilla process swapped, set the sticky bit on the executable).
