
Is Swap Necessary?

Posted by michael
from the got-swap? dept.
johnnyb writes "Kernel Trap has a great conversation on swap, whether it's necessary, why swapless systems might seem faster, and an overall discussion of swap issues in modern computing. This is often an issue for system administrators, and this is a great set of posts about the issue."
  • IMHO (Score:3, Interesting)

    by rd4tech (711615) * on Saturday May 29, 2004 @01:05AM (#9283378)
    One can have 1GB of RAM for a fairly cheap price.
    I really doubt that the majority of new desktop PCs need to swap to the HD at all.

    The unused/used portions argument from the article isn't quite true. You don't have to swap out every unused bit;
    if you have enough RAM, leave everything there. It's R-A-M: don't access the parts you don't need.
    If something isn't in RAM, read it from the drive;
    don't waste time copying it out to swap when it's mostly on disk in the first place.

    I'm willing to bet that people who need performance don't often run 10 applications at the same time. If they do, they
    surely know what they're doing.

    IMHO the average user should get enough RAM and no swap, let the OS optimize things a bit.
    • by robslimo (587196) on Saturday May 29, 2004 @01:17AM (#9283428) Homepage Journal
      But today's production, heavily loaded systems will still need the ability to swap to/from disk.

      Already, there are systems that minimize that need, set-top boxes, embedded systems in general. But each of those is seriously modified (kernel-wise, mostly) to achieve the responsiveness, the frugality of resource treatment that a general purpose desktop computer can't expect to enjoy.

      That doesn't mean that developers should stay in the same rut, assuming that the hardware that confined system design in the '60s, '70s... '00s will perpetually impose similar constraints.

      IMO, desktops still need to swap... for now. But let's not paint ourselves into a performance corner.
      • by Goldberg's Pants (139800) on Saturday May 29, 2004 @01:29AM (#9283478) Journal
        If you've just got a box sitting around not doing much, in other words not serving pages, SQL or whatever, you can run with minimal RAM. My laptop has 24 megs of RAM. I did have a 100 meg swap partition, but I needed the space for a particularly huge DOS game I wanted installed, so I nuked it and converted it to a DOS partition. I booted Linux, checked the RAM usage, and most of the RAM was used.

        However, when I ran a program, the amount of used ram DROPPED.

        Of course, in an environment where the system gets hammered, it's all very well talking about how cheap RAM is, but so is hard disk space. Is it really worth not setting up a bunch of swap space? What if a rogue process munches its way through the RAM while you're away? Would it not be better to have swap space so the system can keep running, albeit not very well, than to have it just die on you?

        I don't know, I ain't a sys admin, but performance issues aside, I don't see why you should risk it. I'd rather have swap partitions on a hardcore system than not.
        • by Proud like a god (656928) on Saturday May 29, 2004 @08:37AM (#9284455) Homepage
          Surely if your system runs out of RAM it shouldn't die? The runaway process, sure, but the OS should be able to reclaim some RAM from that and manage to carry on, no?
          • by krewemaynard (665044) <krewemaynard@@@gmail...com> on Saturday May 29, 2004 @12:18PM (#9285159)

            "...the OS should be able to reclaim some RAM from that and manage to carry on, no?"

            yeah, pretty much. i may be openly admitting my ignorance here, but i have a gentoo box with 256 MB DDR RAM. i set up a swap partition, and set up the entry in /etc/fstab, but when i ran the box, it never touched the swap. me: "great, this RAM ownz! and the 2.6 kernel ownz at memory management! it never uses swap!"

            then, stuff started dying on me during times of heavy system load. like, i'd be in Firefox and running emerge at the same time, and firefox would croak on me. or VMWare wouldn't boot Knoppix. or OpenOffice would die. all this time i had something else going on on top of whatever memory-intensive program was dying. me: "wtf, mate?"

            it finally occurred to me that the box still wasn't swapping, and that might be a bad thing. so i tried to run "swapon /dev/hdb2" (my swap partition) and got errors. weird. then i realized what happened...

            ...i had forgotten to officially make the swap partition using mkswap. i'd been running this partition scheme for about 2 months, and never realized it until about 2 days ago.

            all that to say, yes. the system DOES reclaim memory. by killing other stuff :)

            i'm a stoopid monkey sometimes :-D

            • The system only does that when it runs out of unused buffers though. This means that at the time a program has to be killed all of your ram is officially used by processes.

              I think the OP is probably right. I have 512 megs of ram and using my full suite of apps (let's say something like firefox, thunderbird, a seti process, xmms, irssi in an rxvt and maybe some other xterms etc) I'm using about 200 megs of ram (not counting unused pages). So there was no real need for me to have swap since it never, ever ge
      • Swap sucks. :) (Score:5, Insightful)

        by MikeFM (12491) on Saturday May 29, 2004 @02:08AM (#9283660) Homepage Journal
        I've built many servers, embedded systems, and even desktop systems that don't use any swap at all. On many more I greatly limit the amount of swap. The overall responsiveness is much better if you don't use swap, and I find system stability to be better. Really it doesn't matter what the systems are used for or how many apps are being run; it's just how much memory you're going to use compared to the amount of physical memory you can afford. You can run out of memory just as easily using swap as you can while limited to physical memory; the main difference is that recovery from the situation is much worse when you're using swap. Quite often the system starts to churn and then grinds to a halt. Without swap those tasks just die and everything else keeps running. Setting memory limits on tasks is a good way of ensuring which tasks are killed first, but I'd like to see better control of this given to the admin.
        • Re:Swap sucks. :) (Score:3, Insightful)

          by oolon (43347)
          There are good reasons for swap. For example, when a program forks, you need spare RAM for the complete process space; this space normally comes from swap, before being wiped out when a new command is execed. Another good thing to do with swap space is to mount /tmp on tmpfs: that way, if you have lots of memory, /tmp will come from memory and not disk, and if you're stuck for space you'll use the swap space.

          James
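The tmpfs-on-/tmp arrangement James describes is a one-line mount. A hypothetical /etc/fstab entry might look like this (the 256m cap is just an example value; tmpfs pages that don't fit in RAM spill over into swap):

```
tmpfs   /tmp   tmpfs   defaults,size=256m   0 0
```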
      • Reasons for swap... (Score:5, Interesting)

        by emil (695) on Saturday May 29, 2004 @06:03AM (#9284172) Homepage

        I don't know if Linux works this way, but...

        1. The mmap() system call, which allows you to treat a file as an area of memory and manipulate it with pointers in C, oftentimes copies (portions of) the file into swap.
        2. Many systems, when you execute a binary obtained over NFS, will cache the binary in swap in hopes of preventing further transfers over the network.

        UNIX kernels have assumed the availability of swap for nearly 35 years. You cannot remove a major architectural feature like this without unintended side effects.

    • Re:IMHO (Score:5, Interesting)

      by Trepalium (109107) on Saturday May 29, 2004 @01:22AM (#9283446)
      The other side of this is that memory that is not being used is wasted. Getting unused memory out of RAM, and into swap, so that memory can be used for real work can improve performance. This isn't just about memory that your applications are using. It's also about memory that is being used as cache for the disks you're using.

      Maybe you have enough memory to run your program, but not enough to keep the directory structures you need in RAM, so you keep needing to read the disk. If there are pages in that program that were only used once during startup, for example, it makes sense to get them out of memory so that it can be used for disk caching instead.

      Now, you have to understand how Linux handles paging, too. Unmodified pages from executables that are running may be discarded by the kernel at any time, because it knows where to get them. They won't be thrown into swap because it's not necessary. On the other hand, if that particular page has been modified (and some are modified as they are loaded by ld.so, for example), then the page must be copied into swap before it's discarded.

    • Re:IMHO (Score:5, Informative)

      by irokitt (663593) <archimandrites-iaur&yahoo,com> on Saturday May 29, 2004 @01:23AM (#9283452)
      Linux has two properties that make swap a good thing (TM).

      The first thing to remember is that, for many Linux users, they have a newer PC running Windows (or a Mac ;) and a less recent PC running a Linux distro. The RAM threshold is realistically around the 128-512 MB range. Those who are dual-booting on a brand new machine can use 1 GB, but the rest of us put up with less than that (I for one want to avoid MBR screwups and the hassle of communicating with NTFS, so I don't dual boot. I had a nasty GRUB incident, so I'm probably paranoid).

      Finally, every Linux user that has compiled a kernel knows that it can really tax a system. Gentoo users also know how strenuous an XFree86 or KDE/GNOME compile can be. Being able to work in another terminal while compiling is one of the most beautiful things about *nix, and to do that on anything with less than 512 MB or 1 GB of RAM you want to have some swap.

      And finally, while RAM is very cheap, so are hard drives, and how hard is it to squeeze a swap partition out of a hard drive? Can it really hurt that much to let the system use it?

      As for Windows, swap is absolutely required for a lot of the games out there. I've heard that Unreal-engine-based games in particular make heavy use of the swap file.
      • Re:IMHO (Score:4, Informative)

        by zoloto (586738) on Saturday May 29, 2004 @01:46AM (#9283572)
        My take on swap vs. no swap:
        I have no data, links or proof to back it up, only the system responsiveness under two different setups.

        Setup A) 300MHz Intel, 256MB RAM, 6GB HDD, 512MB swap.
        Taking the swap out of that setup, the system crawled to a halt, especially when using X. Re-enabling it, it ran just fine.

        Setup B) 2GHz Intel, 1GB RAM, 40GB HDD, 2GB swap.
        Taking out the swap on that machine, the system ran fine, even running Half-Life: Counter-Strike via WineX by TransGaming. Re-enabling the swap, I noticed a little performance increase, but I couldn't measure it because I didn't know how.

        Just my take on it. Swap is generally a good thing in older machines, while in newer systems it isn't a critical thing to have. HOWEVER, I did not run many tests with and without swap, just basic use, and Setup B's HL:CS test was only to make sure!

        YMMV, KBD.

        -zoloto
        • Re:IMHO (Score:3, Interesting)

          by Amorpheus_MMS (653095)
          Setup B) 2GHz Intel, 1gb ram, 40gb hdd, swap 2gb.
          Taking out the swap in that machine and the system ran fine. Even running Half-Life: Counter-Strike via WineX by transgaming.


          Do try that with Far Cry; I'm curious whether you'll notice a difference there. That game recommends 1GB of RAM, and it certainly counts on unused memory being swapped out.

          Personally, I think games are the one reason why swap is still very useful. You either run your programs on your desktop, or a game - not both. Getting enough RAM to hold everythi
      • Re: IMHO (Score:3, Insightful)

        by Black Parrot (19622)

        > Linux has two properties that make swap a good thing (TM).

        A third: Linux is a powerful and stable tool that makes it possible to run a dozen virtual desktops and stay logged on for a year at a time. So if you're a power user who leaves scores of applications open indefinitely as part of your ongoing work, kick some of them out to swap and leave them there until you get back on that project.

        I've added first one and then a second swap file, to quadruple the size of the swap partition I made when I in

      • Re:IMHO (Score:5, Informative)

        by CAIMLAS (41445) on Saturday May 29, 2004 @03:13AM (#9283822) Homepage
        Kernel compiles do -not- tax the system; they utilize it. There are relatively few process threads used during a kernel build, substantial CPU use per process, a fair amount of disk reading, and a large amount of memory being shuffled about and modified.

        It's perfectly reasonable to expect a system to be responsive while compiling a large project. That's not a "taxed" system. A Slashdotted machine, which spawns thousands of Apache processes, is taxed, however.

        Except for your statement that for "anything less than 512M or 1G of RAM you want to have some swap"... you want to have swap all the time, unless there's a power-related (i.e., laptop) reason not to have it. As the article says, it's trivially demonstrated that there is a performance increase from using swap, regardless of how much RAM you have. It is, however, imperative to have swap if the machine doesn't have much RAM; otherwise you'll run into some nasty results after a while. :)

        I'd agree with your last two sentiments. :)
      • Re:IMHO (Score:3, Insightful)

        by PacoTaco (577292)
        Finally, every Linux user that has compiled a kernel knows that it can really tax a system. Gentoo users also know how strenuous a XFree86 or KDE/Gnome compile can be.

        It shouldn't be, unless you have a low memory system and everything (including your swap) is on an older IDE disk that doesn't seek quickly. I often leave large builds running in the background on Windows, BSD and Linux systems with no noticeable impact on system responsiveness.

      • Re:IMHO (Score:3, Interesting)

        by Jeff DeMaagd (2015)
        I'm sick of the speculation. Maybe Linux has some key benefits that make swap useful on a machine that has more memory than it needs to operate. I'd like to see some evidence of whether those techniques actually make any difference or not.

        Is there anyone willing to take two identical machines and run a full Gentoo compile, with and without swap, with 256, 512 and 1024MB RAM installed, and time it? If swap really does make a difference, I think that sort of thing would help tell when swap is or is not u
    • I have 1G of RAM, and no swap. Works great, even with eclipse, maple, firefox, and other stuff open at once. Having 10 things open at once doesn't degrade performance under Linux. That's a feature reserved for Windows...
  • When I was running Linux on my 350 MHz Pentium II with 128MB RAM, you can dang well bet I wouldn't have made it without a swap partition. I probably would have gone back to Windows if swap hadn't existed.
    • by Anonymous Coward
      Not for everyone. I've got 1GB in my machine, and I don't think I've ever come near maxing it out. I've actually turned off the pagefile* in Windows and haven't had any problems other than Photoshop whining every time I start it (even if it never uses more than 100MB of RAM, it still whines if there's no pagefile present).
      I don't use Linux, so I can't say how well it'd work on my machine without swap, but I can't imagine it'd be any worse.

      * For the Windows-ignorant: a pagefile is the Windows equivalent of
  • by Greyfox (87712) on Saturday May 29, 2004 @01:07AM (#9283388) Homepage Journal
    You could make a big ramdisk and swap to that!
    • by whiteranger99x (235024) on Saturday May 29, 2004 @01:12AM (#9283416) Journal
      Yeah, just remember to allocate twice the amount of ram that you have installed! :)
    • by martin-boundary (547041) on Saturday May 29, 2004 @02:14AM (#9283674)
      I think this is the most interesting issue hinted at on the mailing list.

      There are two "theorems" quoted: The first says that no matter what, if you have a size X of RAM used by the OS, and you add a size Y swap disk, you get better OS performance than if you only had X RAM.

      The second "theorem" says: if you have X RAM + Y swap disk, then add Y RAM and use that instead as the swap disk, then you get *faster* performance.

      The naysayers now say that the second statement is misleading. Why? Because with X+Y RAM and Z swap disk, you'd get better performance again.

      I think this betrays an underlying assumption which I'm not sure is true, namely: X+Y RAM managed by the OS any way it likes is always better managed than X RAM managed by the OS any way it likes plus Y RAM reserved for swap operations.

      In fact, let us suppose that the OS memory management is not optimal, i.e., when the OS manages X+Y amount of RAM, it does so suboptimally. Then it is possible that a different memory management scheme, e.g. X RAM used normally + Y RAM used exclusively for swap, may turn out to make better use of the available total RAM.

      So the theoretical question is this: is Linux's memory management sufficiently optimal that with an ordinary set of applications running, it can always make better use of X+Y amount of RAM than if it always reserved Y for swap? Alternatively, under what kind of running application mix is it true that reserving Y amount for swap yields a better memory management algorithm than using X+Y fully?

      • They are not theorems, but conjectures. A theorem and a conjecture are not the same thing. No one to date has posted a proof.
      • I had to read that a couple times before I noticed the problem. You have the second theorem wrong. It should say: "X ram + Y swap is slower than (X + Y) ram with NO swap at all". Then, your question about managing X+Y ram wouldn't make sense because there is nothing to manage: either you run out of memory and apps die or you don't.

        Intelligent memory management only affects performance if you have swap space. Swap space could be defined as storage which is slower than main memory. If all your storage

      • by ceswiedler (165311) * <chris@swiedler.org> on Saturday May 29, 2004 @01:20PM (#9285417)
        Adding RAM always helps. No one ever says that swap is BETTER than RAM. Having X+Y RAM is better than X RAM + Y swap. However, having X+Y RAM plus Z swap is better yet.

        Sure, add more RAM. But swap will always be useful, because there's always some stuff that's better off on disk, because it hasn't been used in forever; until your RAM is larger than your HD, you'll get better mileage out of that RAM if you use it as a cache.
  • by Anonymous Coward on Saturday May 29, 2004 @01:07AM (#9283391)
    All the docs on how much swap to use with Linux are from the days of 386s and 4 megs of RAM!

    I want to know how much swap I should REALLY be using for a system with 1 gig of ram.

    Same for some of the kernel compilation docs. Maybe on a 4 meg system compiling that extra option might cause slowness but on a 500 meg system does an extra 30k in the kernel matter?

    Can we get some docs that aren't from the mid-'90s?
    • Start running a bunch of applications and see what happens with the memory and the swap. The swap hardly gets used at all if you have 1GB of RAM. On the other hand, on my old 486 with 32MB of RAM, swap was the main thing... sometimes several hundred MBs.
    • by MarcQuadra (129430) * on Saturday May 29, 2004 @11:55AM (#9285081)
      Use a swapfile instead of a partition. 2.6 kernels cache the location of the file, so there's no performance hit for swap files compared to swap partitions. I'll give a quick HOWTO:

      1. Decide how much you want (you can change it later; I have 128MB on all my boxes with over 512MB RAM). The example uses 128MB.

      2. dd bs=1M count=128 if=/dev/zero of=/var/swap
      3. mkswap /var/swap
      4. Add this line to /etc/fstab: /var/swap none swap sw 0 0
      5. swapon -a
      6. There is no step six!

      But the best way to know how much swap you need is to peek at top every now and then, or cat /proc/meminfo, and see how much you're using when the system is strained; use twice that amount, and not less than 128MB.
  • by NerveGas (168686) on Saturday May 29, 2004 @01:08AM (#9283394)

    People like to claim that swap can always improve performance, by swapping out unused sections of memory, allowing for more memory to throw at apps or disk cache.

    Well, *most* apps won't just arbitrarily consume memory, so endless amounts of memory won't help. And disk cache gets you greatly diminishing returns.

    One of the machines I use has 3 gigs of memory. It will swap out unused programs, in an attempt to free up more memory. The joke is that it simply can't use all three gigs. After half a year of uptime, there's still over half a gig completely unused, because the apps don't take memory, and there's not that much to put in disk cache.

    Obviously, that's a pathological case. And there are pathological cases at the other extreme. But as memory prices keep dropping over the long run, swap does become less and less useful.

    steve
    • by Anonymous Coward
      One of the machines I use has 3 gigs of memory. It will swap out unused programs, in an attempt to free up more memory. The joke is that it simply can't use all three gigs. After half a year of uptime, there's still over half a gig completely unused, because the apps don't take memory, and there's not that much to put in disk cache.

      Yes, it is swapping because it is trying to free up "low memory", of which you have less than a gig. This is unfortunately an inevitable failure case of Intel's brain damaged (
      • Yes, it is swapping because it is trying to free up "low memory", of which you have less than a gig.

        Actually this sounds likely, but is it a good idea? Alternatively it could do a memcpy of your data from low memory to high memory. So now you have the choice between occupying the CPU to perform the memcpy, or occupying the disk controller to swap it out. But data that you could swap out is process memory, which you'd expect to be allocated from the high memory. So how do you actually reach a situation wh
    • by torinth (216077) on Saturday May 29, 2004 @02:08AM (#9283657) Homepage
      What about desktop users, where there's no clean limit you can set in advance? Managing and disabling swap is great in controlled environments like servers and embedded systems, where the applications being run are limited and pre-determined.

      But on desktop systems, a user may want to use Word, Photoshop, Outlook, Internet Explorer, an anti-virus tool, 30 other system tray tasks and services, etc. Should this user sit there and add up the recommended RAM of each of every application she owns and use that as a guideline for buying? That seems a little over-complicated and wasteful. Most of the time, she won't be running every application, but she really should be able to when she wants to.

      The solution is to introduce a cheap storage tool to extend what's treated (by applications) as RAM--swap.
    • by Alioth (221270) <no@spam> on Saturday May 29, 2004 @05:19AM (#9284099) Journal
      The 2.6 kernel now has a swappiness setting in /proc where you can tell the kernel to avoid swapping (set it to 0) or to swap like mad (set it to 100). Therefore you can tune your system to your specific needs. It'd be nice if they had a similar control for filesystem cache.
      • by joib (70841) on Saturday May 29, 2004 @07:13AM (#9284292)

        It'd be nice if they had a similar control for filesystem cache.


        You're missing the point. That's exactly what the swappiness setting does, indirectly. If you avoid swapping (swappiness = 0), the system has less memory left over for filesystem cache. OTOH, if you set swappiness = 100, the system has a lot more memory to use for file cache.

        The system always tries to use all available memory, and that's a good thing. The question is whether to use extra memory for file cache or for keeping pages in memory.
  • swap rule! (Score:5, Informative)

    by Coneasfast (690509) on Saturday May 29, 2004 @01:08AM (#9283395)
    the rule is that swap should be 1.5x your RAM! ;)

    actually MS followed this rule: in Win2k, the default swap size is set to about 1.5x your RAM. It was 176MB for my 128MB system and 384MB for my 256MB system. Not sure about XP though; someone fill me in.
    (yes, some great minds working at MS)

  • no swap? (Score:4, Interesting)

    by hawkeyeMI (412577) <brock@nospAM.brocktice.com> on Saturday May 29, 2004 @01:09AM (#9283398) Homepage
    I ran Linux without a swap file on 128 MB of memory a couple of years ago. It was an accident; I didn't create a swap partition. I never had a problem (fortunately). Of course, I wasn't doing the heavy-duty stuff I am now (scientific computation).
  • by kidgenius (704962) on Saturday May 29, 2004 @01:09AM (#9283400)
    If you've got 128MB of RAM you have plenty and therefore will have no need for swap space. I mean, isn't 640k enough for everyone?
    • by Coneasfast (690509) on Saturday May 29, 2004 @01:19AM (#9283438)
      isn't 640k enough for everyone?

      people constantly make this joke, but seriously, at the time BG said this, it was probably true.

      if today i say "1 gig ought to be enough for everyone" it is true, but in 10 years you will be laughing at this.

      he never claimed it would 'ALWAYS' be enough (unless there is more to this quote???)
      • BillG and 640K (Score:5, Informative)

        by steveha (103154) on Saturday May 29, 2004 @02:33AM (#9283726) Homepage
        Bill Gates never made the infamous "640K... enough for anyone" comment. Not only have I never seen it documented anywhere, but he was asked about it and replied that he never said it.

        He didn't see the Internet coming -- he thought MSN should be like CompuServe, because that was the top info service (before the Internet became big). And I remember some wild comments he made about the truly amazing, throbbing power of the 286 chip. So he's not an amazing guru with awesome predictive powers. But people keep beating him up about this bogus quote, and I'm tired of it.

        steveha
  • by rsmith-mac (639075) on Saturday May 29, 2004 @01:11AM (#9283410)
    As long as users can eat up more memory than they have available, and as long as hard drive space is cheaper than RAM space, swap will always be necessary.
  • by strredwolf (532) on Saturday May 29, 2004 @01:14AM (#9283419) Homepage Journal
    Swap improving performance... yeah. On slow systems and low memory, every byte freed up helps. But not swapping in the first place is good too.

    I'm now experimenting with replacing various tools with smaller versions, such as dropbear, udhcp, tinylogin, and busybox. I'm also slowly writing an "exec and restart shell afterwards" utility called PivotShell.

    Hardware-wise, I have swap on a CF drive: 32 megs so far, but if I can afford larger CF drives, I'll format 'em as swap and use them.

    Why all of this? With 40 megs it swaps to the HD, and on a laptop any HD access sucks battery power. When you're using XFree86 (or even Kdrive) and Firefox, you're going to swap. Period.
  • by Julian Morrison (5575) on Saturday May 29, 2004 @01:16AM (#9283426)
    Sometimes, when a process goes haywire, it will start munching RAM. If important programs like, say, sshd or X, can't malloc when they need to, they'll die ignominiously. Swap gives you the chance to kill the rogue process before your OS goes kaput. Its slowness can actually help for this.
    • by anshil (302405) on Saturday May 29, 2004 @02:00AM (#9283629) Homepage
      That's not true. If the Linux kernel runs out of memory, it takes the list of all processes, scores them by memory usage, runtime, etc., and then simply kills the process with the highest score. In your case, your RAM-munching app would just be killed by the kernel.

      I know that pretty much for sure, since I modified that part of the kernel once for an embedded system, where we explicitly didn't want it to kill any process but instead to reboot in such a case, since nothing is worse than having a half-functional system with some processes missing...
  • Try this with linux (Score:5, Interesting)

    by arvindn (542080) on Saturday May 29, 2004 @01:23AM (#9283451) Homepage Journal
    Notice how sluggish the system is after doing something disk-intensive like watching a movie. That's because the kernel is caching as much of the movie as possible in memory and swapping your running apps out. And kernel developers think this is a good thing, so it isn't going to change any time soon. IMHO for a desktop system this makes no sense; that's why I run my 1GB RAM machines with zero swap.
    • by EventHorizon (41772) on Saturday May 29, 2004 @01:41AM (#9283539)
      In the average case code and data _do_ tend to be accessed more than once. We would all be complaining a lot more if the kernel NEVER cached... remember the huge performance boost SMARTDRV made in DOS?

      So, frankly, the default kernel behavior is right.

      To fix the movie/updatedb/jumbo cp/etc issues see "man madvise" and check out MADV_DONTNEED. I am hoping applications will start using this syscall sooner, rather than later. The Linux VM can take a hint, and it's pretty easy to give it one.
      • The default kernel behaviour is WRONG. The whole idea of caching is to keep stuff that is likely to be accessed again. How likely is it that you will be watching a 1 GB movie again?

        Of course, the kernel has no idea about watching movies, but it still can distinguish this "unimportant" data from data that does need to be cached. The most important way to distinguish them is how fast the data is needed in the first place.

        When I do a grep on the kernel tree, you

    • by Lumpy (12016)
      IMHO for a desktop system this makes no sense, that's why I run my 1GB RAM machines with zero swap.

      fine for you being a typical home user not doing much with your PC.

      now, with me editing 4GB video clips, rendering a 2GB CG clip, or trying to process a large rotoscoping project in Film-Gimp, 1GB of RAM is consumed 3 minutes after I sit down at that machine.

      I have 2GB of RAM + 4GB of swap, and I can easily fill it all up using either Blender, Film-Gimp or any of the other tools I use.

      and I'm betting that m
  • I just don't get it. (Score:3, Interesting)

    by mcg1969 (237263) on Saturday May 29, 2004 @01:26AM (#9283465)
    Seriously, I don't get it. How in the world can swap ever increase performance?

    Specifically, suppose I have one computer with 1GB of RAM and 1GB of swap, and another computer with 2GB of RAM and no swap. Under what circumstances will the first computer be any faster?

    Now I suppose if the swap is used for other things besides memory space then I could understand it. But then it seems like a simple solution would be to allocate a fraction of RAM for those things. In effect, create a swap partition on a RAM disk :)

    Seriously, I'd appreciate some education here, but make sure you answer my specific scenario above if you reply... thanks
    • by sprag (38460) on Saturday May 29, 2004 @01:40AM (#9283535)
      The potential speed increase isn't seen when comparing a 1GB RAM system vs. a 2GB RAM system. It's seen when comparing a 1GB RAM system without swap to a 1GB RAM system with swap.

      The gist of it is: with swap you can put things that aren't being used (like mingetty, gdm, etc) into swap to free up space for things that are running now. Without swap you have to keep the little-used processes in memory and you don't have as much 'free' space to use for things like caches.

      It's also important to note that the kernel will swap out code segments regardless of whether or not you have a swap partition: they get swapped out to nowhere. When they need to be swapped back in, the executable file itself is read.
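You can see this file-backed property directly on a Linux box (the `/proc` paths here are Linux-specific):

```shell
# Executable mappings (r-xp) in /proc/<pid>/maps are backed by files on disk:
# under memory pressure the kernel can simply drop these clean pages and later
# page them back in from the binary or shared library itself, no swap involved.
grep 'r-xp' /proc/self/maps | head -3
```

Each line printed names the on-disk file (the binary or a shared library) that serves as the "swap" for that code segment.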
  • by Stevyn (691306) on Saturday May 29, 2004 @01:27AM (#9283468)
    This may be slightly off topic...

    Running KDE 3.2.1 now, I notice it takes longer to open apps than it does in windows. Mozilla for example takes literally a few seconds longer to open each window than it did in windows. Another thing windows does is make it faster when you run an app right after you ran it then closed it. Say for example in windows I run mozilla, then close it, then open it. When it opens it the second time, it's almost instant. However in linux, it seems to take the same original amount of time to load it completely. I'm sure it has to do with an entirely different process of loading programs, but apps always seemed to open faster in windows than in linux, in my view.

    Then again, graphics used to be in the NT kernel and that's what made it appear fast, but it led to a lot of problems and crashes, so maybe the longer load time is worth the wait when compared to a reboot.
    • by Black Parrot (19622) on Saturday May 29, 2004 @02:36AM (#9283734)
      This may be slightly off topic...

      Running KDE 3.2.1 now, I notice it takes longer to open apps than it does in windows. Mozilla for example takes literally a few seconds longer to open each window than it did in windows. Another thing windows does is make it faster when you run an app right after you ran it then closed it. Say for example in windows I run mozilla, then close it, then open it. When it opens it the second time, it's almost instant. However in linux, it seems to take the same original amount of time to load it completely. I'm sure it has to do with an entirely different process of loading programs, but apps always seemed to open faster in windows than in linux, in my view.

      Then again, graphics used to be in the NT kernel and that's what made it appear fast, but it led to a lot of problems and crashes, so maybe the longer load time is worth the wait when compared to a reboot.
      Conventional wisdom is that Windows uses lots of hacks to make it "look" faster in the way you describe, without regard to the cost it imposes on other operations. I'm almost certain that XP keeps some applications in memory after you "exit" them. Sometimes I notice that something won't work after running certain big applications, suggesting that sufficient resources haven't been released. Also, sometimes a shutdown complains about an application that won't respond even after you've closed everything. I think they're hoaxoring people to think they got a fast system, when they're really just robbing Peter to pay Paul.

  • gmail (Score:4, Funny)

    by maxbang (598632) on Saturday May 29, 2004 @01:31AM (#9283482) Journal

    I use my gmail account as my swap partition. It's fully searchable and displays helpful advertisements every time I load fifty tabs in Firefox and OpenOffice goes idle. I don't know what I'd do without it. I'd probably be less of an unfunny jackass.

  • by Doppler00 (534739) on Saturday May 29, 2004 @01:32AM (#9283488) Homepage Journal
    Does anyone out there want to run a series of benchmarks with a few standard applications to prove/disprove whether disabling swapping improves performance?

    I'm tired of just hearing anecdotal evidence on this. Everyone has their stories about turning off swap files and improving performance, but in what cases? Are there some users this would harm?
  • well one reason (Score:3, Informative)

    by discogravy (455376) on Saturday May 29, 2004 @01:37AM (#9283521) Homepage
    one reason you'd want swap on a system is to have someplace to dump/savecore information in the case of system crashes. Kind of hard to do with volatile memory.
  • It's a choice... (Score:5, Interesting)

    by Beolach (518512) <{beolach} {at} {juno.com}> on Saturday May 29, 2004 @01:38AM (#9283526) Homepage Journal
    As I RTFA & previous comments here, I was rather surprised at how argumentative people were getting over this. Some people are saying swap is an absolute necessity & a swapless system is a broken system, while others said swap is an obsolete solution to a problem that no longer exists (expensive RAM). This seems odd to me, because as far as I can tell, the decision of whether & how much swap to use is based mostly on two things: specific situations (and thus there is no general answer to 'Is Swap Necessary?'), and opinion. And either way, with the Linux kernel today (and for quite a while now), I can choose for myself whether or not, and how much, swap I want to use. So if I am in a situation that I think requires swap, I can use it, and in a situation that I think would be hurt by having swap, I don't have to use it. So I don't see why there's so much hullabaloo about this: nobody is forcing anyone to do it one way or the other. And if someone else thinks it should be done differently from how I would do it, that's their decision, not mine.
  • by wotevah (620758) on Saturday May 29, 2004 @01:40AM (#9283532) Journal

    Most applications today have unnecessary or rarely used portions of code or data - bloat. These get swapped out first. Also there are various memory leaks here and there, which means the programs sometimes forget to release allocated memory they do not need any longer.

    Look at the size of your X server, or mozilla, or apache, or pretty much anything else and you will see over the course of a few weeks that it has grown beyond reasonable operation demands.

    The memory lost this way is never accessed from there on, but the system cannot release it without the program telling it to, so it does the next best thing and shoves it in the swap. Not a solution since eventually swap gets full, but since the leaks are slow to begin with, at least it prevents them from affecting system performance too early.

    • by spitzak (4019)
      Unfortunately that bloat is also *fragmented*. Even a 4-byte structure that is still in use, buried in a page, will keep that page swapped in. In my experience the only way app pages get swapped out is when the app is idle.
  • Amiga (Score:5, Interesting)

    by Jace of Fuse! (72042) on Saturday May 29, 2004 @01:44AM (#9283566) Homepage
    In the 90's, I ran a 10 line BBS on an Amiga 4000 with 16 megs of Fast ram, 2 megs of Chip ram, and 0k for the swap file. :)

    I know, I know, the Amiga didn't HAVE virtual memory. Well actually it did if you had an 040 and installed a memory management program such as GigaMem, but so few people had a use for such a thing that it was practically unheard of.

    Oh, and before someone jumps in saying that I wasn't able to do anything else, that is totally NOT the case.

    Very often I was doing lots of stuff. The difference is that developers were used to working within memory constraints, and nowadays developers are used to systems growing into the applications.
  • U R N Idiot (Score:5, Informative)

    by Graymalkin (13732) * on Saturday May 29, 2004 @01:49AM (#9283585)
    I haven't seen a case where disabling swap actually increases performance. I have however seen lots of cases where disregard for logic involving swap space caused serious performance problems. The old 1.5x and 2x rules for swap space are outdated and even dangerous in today's systems with ooglebytes of memory.

    With less than 128MB of RAM you practically need 2x your physical memory's worth of swap space. Running a full GUI environment, even a relatively lightweight one, needs quite a bit of system memory. With 64MB of RAM and 128MB of swap space you'll be able to run a light GUI environment but have a crappy filesystem cache. The system will crawl but it won't get constant OOM errors if you're not overzealous with your app usage.

    The 2x RAM rule on a system with 512MB of physical RAM, on the other hand, is excessive. With 1GB of swap space most of it will end up empty unless you're running programs needing huge amounts of allocated memory. With 512MB or more of physical memory on a single-user workstation you're pretty unlikely to run into situations where active pages are swapped out to disk.

    I've seen the runaway process situation crop up on more than one system with excessive amounts of swap space. Since swap is so slow it can be troublesome to kill a process that is using so much memory that it ends up having active pages swapped to disk. The system ends up spending 99% of its time trying to handle the disk IO from the heavy swapping which can make the system totally unresponsive for local and remote users. Because the systems had way more swap space than was logical the offending processes never got OOM errors even though they were using up almost all of the system's resources.

    I've pretty much set 256MB as the upper limit for my systems with 256MB or more of physical memory. That is enough swap space to hold any dirty pages or unused processes but not so much that a runaway process is going to eat up all my disk IO for a couple of hours. Once a system hits the 256MB threshold I toss out the silly 2x RAM rules for something with a little more cognitive thought.
  • "Swappiness" (Score:5, Informative)

    by Compholio (770966) on Saturday May 29, 2004 @01:50AM (#9283592)
    If you've got kernel 2.6 you can change the "swappiness" to fit your needs/desires. People with lots of RAM could experiment by changing the swappiness value to 0 and report back with the results (it'd be easier than installing a system without swap).
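    On 2.6 kernels the knob lives under `/proc/sys/vm`; reading it is safe for any user, while writing it requires root. A minimal look at it:

```shell
# Current value, 0-100: higher means the kernel more readily swaps out
# process pages to grow the file cache; 0 means avoid swapping when possible.
cat /proc/sys/vm/swappiness

# To experiment (as root), either of:
#   sysctl -w vm.swappiness=0
#   echo 0 > /proc/sys/vm/swappiness
```

    The change via `sysctl -w` takes effect immediately but is lost at reboot; persist it with a `vm.swappiness = 0` line in /etc/sysctl.conf.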
  • by Festering Leper (456849) on Saturday May 29, 2004 @02:08AM (#9283662) Homepage
    there's a definite pattern with regard to swap in the windows world.

    for win'9x: use up ram until almost gone then start allocating swap space in anticipation of actually using it. should memory allocation still be increasing then actually use swap space. reverse the order when freeing memory.
    i had 384 megs ram at the time and as long as i used less than about 350 megs total the system wouldn't be in swap.

    for win 2k & xp: (when within physical ram limits) whatever amount of memory is requested, allocate between 60-80% to ram and the rest to the swapfile. even the disk cache partially goes to swap! i didn't believe it at first but all one has to do is look at the numbers in the task manager's memory/cpu window. at first i figured that all i'd need to do is throw in some more ram and the disk thrashing and absolute crawl would go away. i put in a gigabyte of ram (i never allocate more than 700 megs at most and the total system memory usage on bootup is 100 megs). even with the extra ram the problem stayed the same.

    turning off swap gives me consistent fast performance, and since the disk cache isn't swapped (partially) i get 2x the throughput i had with a swapfile on large file copy operations

    machine tested: duron 1.3ghz, 1 gig pc133 ram, 2x 80 gig wd800jb hdd.. os win2000 & winxp running newsbin which allocates disgusting amounts of ram in a large header grab (yeah i could have used a test program but why do that when newsbin is a real-world test for me). the os and applications are on different drives on their own ide chains

    with swapfile enabled (size=1.5x system ram).
    allocation time: unaffected, only the time to perform the task requested
    memory de-allocation time: (by either quitting app or selecting another group) 23 MINUTES of constant disk thrashing

    with swapfile DISabled
    allocation time: unaffected, only the time to perform the task requested
    memory de-allocation time: (by either quitting app or selecting another group) 2 seconds

  • by menscher (597856) <menscher+slashdot.uiuc@edu> on Saturday May 29, 2004 @02:30AM (#9283718) Homepage Journal
    Why not more? Because that's the largest a swap partition can be. Why not less? Because disk is cheap. It has little to do with the amount of ram in the machine either, because it's easy to add more ram, but a bit harder to repartition for more swap.

    Here's a real-life example of why swap is useful. One machine I manage has a gig of ram. At the time of purchase, that seemed quite reasonable. But the users are working on a project that takes 2 gig of ram. So currently it's using a gig of the swap. Yes, that's bad, and I'll be adding a second gig to it in a few days (it's in the mail). But in the meantime, that swap space is really handy. It means the users can get their work done! Think of the first 256M of swap as being for speed. If you're regularly using more than that, then it's time to order more ram. But it's nice to have the spare gig of ram for odd jobs, or while you're waiting to install it.

    I'm no expert, but I think a lot of these arguments could be resolved if people took advantage of the ulimit constraints. If you can limit how much a program can get out of control, then there's no longer a concern for a single user sending the server into swap hell. One of my current projects is to figure out reasonable limits.
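    A sketch of that idea using the shell's built-in front end to setrlimit (the 1 GB figure is an arbitrary example, not a recommendation):

```shell
# Cap the soft virtual-memory limit at 1 GB (ulimit -v counts 1024-byte
# blocks), then print it back to confirm. In a real deployment the second
# command would be an "exec your-server" instead of the readback.
bash -c 'ulimit -S -v 1048576; ulimit -S -v'
```

    With such a cap in place, a runaway process gets allocation failures once it hits the limit instead of dragging the whole machine into swap.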

  • by harikiri (211017) on Saturday May 29, 2004 @02:51AM (#9283780)
    If I recall correctly, Welchia [symantec.com] (the worm) looked for target hosts by ICMP scanning. On several of our cisco routers, the increased traffic resulted in them running out of memory, to such a point where you could not log into them.

    Apparently a new feature (mentioned by a network engineer workmate), is to have the IOS reserve a portion of memory for administrative tasks (like supporting the login process and configuration shell).

    A feature like this, that "reserves" a portion of RAM so that if something really fubars your system, you can still login to fix it - would be great for Linux/BSD.

  • by erikharrison (633719) on Saturday May 29, 2004 @03:21AM (#9283839)
    For the kinds of complaints about Linux swap I've been seeing of late, it would be bogus to call swap the issue, really. People looking to eliminate swap entirely on desktop machines are cutting off the arm for the sake of a finger.

    The issue with swapping in a desktop system is that perception of system responsiveness is almost as important as real performance, and swapping in (actually, it's paging in, but that's semantics) causes high latency. This is especially noticeable when returning to an idle machine. So we want to cut latency.

    People say "the kernel shouldn't swap unless it can't fit everything it needs in system memory." Duh! And it doesn't! It's swapping to increase the size of the file cache, a huge performance win. If the file cache gets too small (say, because this Wal-Mart PC only has 128 megs of RAM, and you've turned off swap, so Moz is eating it all) then you wind up with disk seeks for harddrive intensive applications, causing the same latency as swap.

    What's clear to me from these complaints is that the file cache isn't smart enough. People with lots of RAM want to cut down on all these disk reads - that's why they got gobs of RAM. (Ain't it funny that the same Linux heads who say that Linux makes a little machine fly also say that a desktop has no reason to have less than 512MB or 1GB of RAM). At the same time, smaller machines should still be supported, and even folks with gobs of RAM don't want to eliminate swap, otherwise disk-bound apps suffer the same latency they're trying to eliminate.

    The Linux file cache seems too aggressive for most users. Ext2 loves a file cache like no other filesystem, and this probably influenced the design. If the file cache can be smarter about when to swap to grow itself, and when it should just be content to use up all available system memory, then lots of these latency issues can be fixed in a way which will scale across both hardware and multiple use environments.
  • by PhunkySchtuff (208108) <kai&automatica,com,au> on Saturday May 29, 2004 @04:43AM (#9284032) Homepage
    IMHO Swap is a good idea and here's why.
    I admin Solaris systems, and swap on Solaris is a fine thing indeed.
    You allocate a complete slice of a hard disk for swap, and you can then add and remove swap dynamically while the system is running. Need 1 GB more swap? Create an empty 1GB file, and add it as swap.
    What's more /tmp is mounted on swap. If you, say, have 1GB of swap space and chuck 512MB of stuff in /tmp, you've now got 512MB of swap left. Lots of Unix software dumps stuff in /tmp and, when there is available RAM, /tmp lives in RAM. This makes temp files very fast.
    Plus, the VM subsystem also deals with the file cache so on a Solaris system, you will see the amount of RAM used always around the 100% mark. No point in having RAM there unused, it costs too much. Use it as disk cache.
    In addition, when an application needs to be swapped out to disk, why bother writing to disk something that's already there - the application's code is marked as being paged out to disk and removed from RAM and when it's needed again, the code is fetched from the original binary that the application was loaded from.
    All in all, these kind of modifications to the VM subsystem mean that swap is good to have and can make systems faster with it than without.
    k:.
  • by majid (306017) on Saturday May 29, 2004 @05:20AM (#9284103) Homepage
    A swapless system won't be faster for the same workload, usually the contrary, in fact, since lack of swap denies the system the opportunity to optimize RAM hit ratios. What a swapless system can do is force admission control on new processes in the system, thus enforcing a no-overcommit policy on RAM, and therefore increasing responsiveness at the expense of global throughput.

    Swap thrashing in a desktop environment is usually the sign of a workload that is too high for available memory, e.g. trying to run far too many apps simultaneously. No amount of OS smarts is going to compensate for overbooking RAM with too large a working set. The solution is to increase RAM or not run as many apps simultaneously.

    Swap thrashing in a server environment is usually the sign of improper server configuration. Naive administrators configure too many processes, thinking they will avoid a bottleneck if all server processes are busy, but all they achieve is turning RAM into the bottleneck rather than the server processes themselves. If you have a web server and configure Apache to have too many running processes, these processes will spend their time contending for RAM instead of doing useful work. Too many cooks spoil the broth. A swapless system would prevent excessive Apache processes from starting in the first place, thus alleviating the problem (at the expense of high error rates, which is probably not acceptable), but performance won't be anywhere as good as a system with swap and properly sized Apache process limits.

    Swap is not a panacea. It should not be used to protect against runaway processes (setrlimit is here for that). It is useful in absorbing sporadic spikes in traffic without causing denial of service, and to shunt away uselessly allocated virtual memory (ahem, memory leaks).

    As for the idea of putting swap on a RAMdisk, it is completely brain-dead (unless you have exotic memory arrangements such as NUMA) - the kernel is going to waste a lot of time copying memory from the active region to the ramdisk region and back. A straight swapless system will be preferable.

    There is no hard and fast rule for sizing swap, it depends on your workload, such as the average ratio of RSS to SIZE. The usual rule of thumb is between 1x and 2x main memory.
    • A swapless system won't be faster for the same workload, usually the contrary, in fact, since lack of swap denies the system the opportunity to optimize RAM hit ratios.

      Agreed. This is the real reason to have swap space, so you can run more applications than you have resources for. It also allows running applications to push ones that are not doing much out of the way while they are stalled (waiting for a resource) or otherwise not running (a process like, say, a database server that is sleeping until i

  • Swap Partitions (Score:3, Interesting)

    by HeghmoH (13204) on Saturday May 29, 2004 @07:28AM (#9284324) Homepage Journal
    I haven't touched Linux for several years, although I used to do serious work on it.

    I take it from the tone of the discussion that Linux still uses separate swap partitions? Why? My main machine now runs OS X, which swaps into the filesystem, and that seems to work a lot better. The system can decide what it needs to use, and I don't have to make a decision. I recall that Linux supports swap to the filesystem, but it sounds like nobody actually uses this feature. I can somewhat understand a server using a swap partition, since the needs of a server would be more or less known in advance and I assume it's marginally faster, but I don't see any reason to use one on a desktop machine. Why is everybody still using dedicated swap partitions?
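    For what it's worth, Linux supports both. Checking what's active is a read-only operation; creating a swap file needs root, so those commands are shown commented out:

```shell
# List the swap areas currently in use, partitions and files alike
cat /proc/swaps

# Creating and enabling a 1GB swap *file* instead of a partition (as root):
#   dd if=/dev/zero of=/swapfile bs=1M count=1024
#   mkswap /swapfile
#   swapon /swapfile
```

    The swap-partition habit is largely historical; a swap file works, at a small cost of going through the filesystem layer on every swap I/O.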
    • Re:Swap Partitions (Score:5, Informative)

      by Corydon76 (46817) on Saturday May 29, 2004 @09:08AM (#9284535) Homepage
      Because until relatively recently, Linux used a very slow filesystem by default: ext2fs. This is, for example, why ext2 filesystems are always mounted asynchronously, as attempting to wait for each disk operation to sync would slow down the system dramatically. You'd have to be concerned with swap not going to disk immediately (just moving into disk cache, waiting to be synced to disk, which can take up to 30 seconds!).

      Now with more advanced filesystems that can be mounted synchronously, using a swapfile is less of a problem -- it certainly could be done, but you still get a performance hit by having to manage a filesystem entry, rather than swapping to raw disk.

      BTW, a number of databases use raw disk for exactly this reason -- to avoid the performance hit of managing a filesystem. And yes, it will make a difference to the overall performance of your database.

  • nocache directive (Score:5, Interesting)

    by Stephen Samuel (106962) <samuel AT bcgreen DOT com> on Saturday May 29, 2004 @09:34AM (#9284600) Homepage Journal
    One of the errors I see is that Linux doesn't handle the read-once case very well.

    Once in a while I'll do something like 'grep -r "oops" /big/filetree'. The fact of the matter is that I'm probably only reading any of that data ONCE, and it's not going to all fit in memory anyways, so I don't even gain anything if I run the grep a second time.

    In a situation like that, I'd like to have some sort of 'nocache' directive that says 'Don't waste the cache with this'.

    Something else that might help would be to have some sort of 'minprog' directive which would tell the swapper that a certain amount of space is reserved for 'program' data (i.e. code (including shared libs) and data), -- and that that memory shouldn't be swapped out in favour of something otherwise being read from disk. I think that this might avoid the situation that I sometimes run into of a large program (mozilla/gimp) being unresponsive after I do some other disk-intensive task (like the aforementioned recursive grep).
    Having the OS enforce things like the RSS rlimit hints would also help. (I hadn't previously realized that it didn't.)
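    The kernel does expose this hint as posix_fadvise(POSIX_FADV_DONTNEED), and newer GNU coreutils wrap it in dd's nocache flag, so a read-once scan can be approximated like this (GNU dd assumed; the scratch file is just for demonstration):

```shell
# Read a file once while asking the kernel not to keep its pages in cache
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1k count=64 status=none   # make a scratch file
dd if="$tmp" of=/dev/null iflag=nocache status=none    # read-once, drop cache
rm -f "$tmp"
```

    A recursive grep can't take the hint directly this way, but the same fadvise call is available to any program that knows its reads are one-shot.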

  • My experience (Score:5, Interesting)

    by jmichaelg (148257) on Saturday May 29, 2004 @09:43AM (#9284632) Journal
    My first job as a sysadmin was on a Burroughs 7700. My employer sent me to a week long class on tuning the os to help the company deliver a turnkey app that met some performance specs. Didn't matter what I did to the working set/swap settings - the thing was pig slow. The older guys in the class who had admin experience on IBM 370's were constantly complaining that the Burroughs OS was doing a worse job deciding how to allocate RAM than they could and it was making them look bad because the boxes wouldn't deliver the throughput they had had with supposedly inferior IBM hardware. As you can imagine, it was a very contentious class.

    My boss started worrying that we weren't going to be able to deliver what the company had contracted to deliver. He was the antithesis of a PHB and so he sat down and in a few hours wrote a small driver to emulate the overall task the project had to accomplish. No detail, just broad brush emulation. He was able to demonstrate with a few lines of code that nothing we could do would hit the delivery spec. Burroughs responded by doubling the amount of RAM on the box as well as installing RAM that was twice as fast as what they had initially delivered. The combination enabled us to turn off swapping and deliver a working product.

    Fast forward to 2004 and I'm working on Excel spreadsheets that have 60-70 sheets in a workbook. Saving the book is a bitch - 15-20 second wait after I hit ctrl-S. Every so often, Excel just goes away as it performs a prophylactic background save just in case Excel dies. 15-20 second pauses because the software has become so bloated that saving a 2-3 meg document is an excuse to flog the poor drive into a seek frenzy. The drive, which was about 4 years old, finally gave up the ghost. Its replacement has an 8 meg cache separate from the 512meg Windows manages - that "little" 8 meg chunk of RAM belongs to the hard drive alone. Night and day performance difference. The Excel swap frenzies that were induced by a simple ctrl-s are gone. 3 meg documents save in under a second - just what you'd expect from a drive that has a transfer speed in excess of 60 mbytes/sec.

    My sense is that swap has always been a kludge. It's an attempt to squeeze more data into a machine that has only so much space. The working set graphs look pretty but they seldom describe what is happening day to day. Trading 2 nanosecond response for a 5 millisecond seek is seldom going to be a good trade. Bottom line from that OS class 35 years ago? Keep your working set size less than your physical memory and your machine will remain responsive. Just what the old IBM Geezers were saying in the first place.
