Linux Software

Is Swap Necessary? 581

johnnyb writes "Kernel Trap has a great conversation on swap: whether it's necessary, why swapless systems might seem faster, and an overall discussion of swap issues in modern computing. This is a perennial question for system administrators, and the thread is a great set of posts on the subject."
  • When I was running Linux on my 350 MHz Pentium II with 128MB RAM, you can dang well bet I wouldn't have made it without a swap partition. I probably would have gone back to Windows if swap hadn't existed.
  • by Anonymous Coward on Saturday May 29, 2004 @01:07AM (#9283391)
    All the docs on how much swap to use under Linux are from the days of 386s and 4 megs of RAM!

    I want to know how much swap I should REALLY be using for a system with 1 gig of ram.

    Same for some of the kernel compilation docs. Maybe on a 4 meg system compiling that extra option might cause slowness but on a 500 meg system does an extra 30k in the kernel matter?

    Can we get some docs that aren't from the mid 90s!
  • by NerveGas ( 168686 ) on Saturday May 29, 2004 @01:08AM (#9283394)

    People like to claim that swap can always improve performance, by swapping out unused sections of memory, allowing for more memory to throw at apps or disk cache.

    Well, *most* apps won't just arbitrarily consume memory, so endless amounts of memory won't help. And disk cache gets you greatly diminishing returns.

    One of the machines I use has 3 gigs of memory. It will swap out unused programs, in an attempt to free up more memory. The joke is that it simply can't use all three gigs. After half a year of uptime, there's still over half a gig completely unused, because the apps don't take memory, and there's not that much to put in disk cache.

    Obviously, that's a pathological case. And there are pathological cases at the other extreme. But as memory prices keep dropping over the long run, swap does become less and less useful.

    steve
  • by rd4tech ( 711615 ) * on Saturday May 29, 2004 @01:11AM (#9283407)
    Start running a bunch of applications and see what happens with the memory and the swap. The swap hardly gets used at all if you have 1GB RAM. On the other hand, on my old 486 with 32MB of RAM, swap was the main thing.. sometimes several hundred MBs.
  • by rsmith-mac ( 639075 ) on Saturday May 29, 2004 @01:11AM (#9283410)
    As long as users can eat up more memory than they have available, and as long as hard drive space is cheaper than RAM space, swap will always be necessary.
  • by Coneasfast ( 690509 ) on Saturday May 29, 2004 @01:19AM (#9283438)
    isn't 640k enough for everyone?

    people constantly make this joke, but seriously, at the time BG said this, it was probably true.

    if today I say "1 gig ought to be enough for everyone" it is true, but 10 years from now you will be laughing at this.

    he never claimed it would 'ALWAYS' be enough (unless there is more to this quote???)
  • by Doppler00 ( 534739 ) on Saturday May 29, 2004 @01:32AM (#9283488) Homepage Journal
    Does anyone out there want to run a series of benchmarks with a few standard applications to prove/disprove whether disabling swapping improves performance?

    I'm tired of just hearing anecdotal evidence on this. Everyone has their stories about turning off swap files and improving performance, but in what cases? Are there some users this would harm?
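    One rough way to test it: time a fixed workload whose working set approaches physical RAM, once with swap enabled and once after swapoff -a. The C sketch below is a crude starting point rather than a rigorous benchmark; the default 512 MB buffer and four passes are arbitrary choices.

        /* Crude memory micro-benchmark: touch a large buffer repeatedly
         * and report wall-clock time. Compare runs with swap on vs. off
         * (swapoff -a), sizing the buffer near physical RAM. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/time.h>

        static double now(void)
        {
            struct timeval tv;
            gettimeofday(&tv, NULL);
            return tv.tv_sec + tv.tv_usec / 1e6;
        }

        int main(int argc, char **argv)
        {
            size_t mb = (argc > 1) ? strtoul(argv[1], NULL, 10) : 512;
            size_t size = mb * 1024 * 1024;
            char *buf = malloc(size);

            if (!buf) { perror("malloc"); return 1; }
            double t0 = now();
            for (int pass = 0; pass < 4; pass++)
                memset(buf, pass, size);   /* touch every page, 4 passes */
            printf("%zu MB x 4 passes: %.2f s wall clock\n", mb, now() - t0);
            free(buf);
            return 0;
        }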
  • by wotevah ( 620758 ) on Saturday May 29, 2004 @01:40AM (#9283532) Journal

    Most applications today have unnecessary or rarely used portions of code or data - bloat. These get swapped out first. Also there are various memory leaks here and there, which means the programs sometimes forget to release allocated memory they do not need any longer.

    Look at the size of your X server, or mozilla, or apache, or pretty much anything else and you will see over the course of a few weeks that it has grown beyond reasonable operation demands.

    The memory lost this way is never accessed from there on, but the system cannot release it without the program telling it to, so it does the next best thing and shoves it in the swap. Not a solution since eventually swap gets full, but since the leaks are slow to begin with, at least it prevents them from affecting system performance too early.
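    A contrived sketch of that slow-leak pattern: each iteration overwrites the only pointer to the previous block, so that memory is unreachable, never touched again, and exactly the kind of page the kernel will shove into swap.

        /* Contrived slow leak: the previous allocation becomes unreachable
         * on every iteration, is never touched again, and so ends up as
         * ideal swap-out material. */
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            char *p;
            for (;;) {
                p = malloc(1024 * 1024);     /* old block is now leaked */
                if (!p)
                    break;
                memset(p, 0, 1024 * 1024);   /* dirty the pages once */
                sleep(60);                   /* a slow leak, as above */
            }
            return 0;
        }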

  • by Wog ( 58146 ) on Saturday May 29, 2004 @01:48AM (#9283583)
    I think what he's attempting to solve is the problem of some apps throwing a fit when they can't find a bunch of swap space, regardless of the 4 gigs of RAM installed...
  • by Trepalium ( 109107 ) on Saturday May 29, 2004 @01:57AM (#9283622)
    I think it's simply a case of, 'there's no simple answer'. Even a benchmark would be difficult to do, because it would vary depending on workload. You might be able to handle a particular machine with no swap, but I would find it unusable. I'm not even sure how you would test this. What kind of performance would you test? Latency? I/O throughput? Integer instructions per second? If you turned off cache, your maximum latency and insn per second might increase, but your throughput may decrease.
  • Re: IMHO (Score:3, Insightful)

    by Black Parrot ( 19622 ) on Saturday May 29, 2004 @02:02AM (#9283638)


    > Linux has two properties that make swap a good thing (TM).

    A third: Linux is a powerful and stable tool that makes it possible to run a dozen virtual desktops and stay logged on for a year at a time. So if you're a power user who leaves scores of applications open indefinitely as part of your ongoing work, kick some of them out to swap and leave them there until you get back on that project.

    I've added first one and then a second swap file, to quadruple the size of the swap partition I made when I installed my current system. My next system will have much more memory, but much more swap space as well. I'll just leave more and bigger programs open on more virtual desktops, and run less risk of The GIMP blowing up when I run a complex fu script on a big image.

  • by torinth ( 216077 ) on Saturday May 29, 2004 @02:08AM (#9283657) Homepage
    What about desktop users, for whom there is no easy limit to describe in the first place? Managing and disabling swap is great in controlled environments like servers and embedded systems, where the applications being run are limited and pre-determined.

    But on desktop systems, a user may want to use Word, Photoshop, Outlook, Internet Explorer, an anti-virus tool, 30 other system tray tasks and services, etc. Should this user sit there and add up the recommended RAM of each of every application she owns and use that as a guideline for buying? That seems a little over-complicated and wasteful. Most of the time, she won't be running every application, but she really should be able to when she wants to.

    The solution is to introduce a cheap storage tool to extend what's treated (by applications) as RAM--swap.
  • Swap sucks. :) (Score:5, Insightful)

    by MikeFM ( 12491 ) on Saturday May 29, 2004 @02:08AM (#9283660) Homepage Journal
    I've built many servers, embedded systems, and even desktop systems that don't use any swap at all. On many more I limit the amount of swap greatly. The overall responsiveness is much better if you don't use swap, and I find system stability to be better. Really it doesn't matter what the systems are used for or how many apps are being run.. it's just how much memory you're going to use compared to the amount of physical memory you can afford. You can run out of memory just as easily using swap as you can while limited to physical memory.. the main difference being that recovery from the situation is much worse in the case of using swap. Quite often the system starts to churn and then grinds to a halt. Without swap those tasks just die and everything else keeps running. Setting memory limits on tasks is a good way of ensuring which tasks are killed first, but I'd like to see better control of this given to the admin.
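    The per-task limits the comment asks for exist in rough form already as resource limits. A sketch of capping one command's address space with setrlimit(2) -- the same mechanism as the shell's ulimit -v, with 256 MB as an arbitrary example cap:

        /* Sketch: cap a child command's address space with setrlimit(2),
         * so a runaway allocation fails with ENOMEM instead of dragging
         * the whole box into swap. 256 MB is an arbitrary example cap. */
        #include <stdio.h>
        #include <sys/resource.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            struct rlimit rl = { 256UL << 20, 256UL << 20 };

            if (setrlimit(RLIMIT_AS, &rl) != 0) {
                perror("setrlimit");
                return 1;
            }
            if (argc < 2) {
                fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
                return 1;
            }
            execvp(argv[1], &argv[1]);   /* run the capped command */
            perror("execvp");
            return 1;
        }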
  • by EvanED ( 569694 ) <{evaned} {at} {gmail.com}> on Saturday May 29, 2004 @02:35AM (#9283732)
    I doubt it was even that much a lack of vision. You *have* to put a limit somewhere; you can't make, for instance, addresses an infinite number of bits. Maybe 640K was a shortsighted choice (which actually doesn't make sense to me, as it's not a power of 2, but I don't know enough about the reasoning), but some limit had to be picked. And eventually, yes, people would turn around and laugh at it.
  • by ananke ( 8417 ) on Saturday May 29, 2004 @02:45AM (#9283760)
    "Why not more? Because that's the largest a swap partition can be"

    Just a side note: you can have multiple swap partitions. [not that you need them, but you can have them].
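    And when you do have several, Linux lets you give each swap area a priority; areas of equal priority are used round-robin, which stripes swap I/O across disks. A sketch using the swapon(2) syscall directly -- the device paths here are made-up examples, and the swapon(8) command's -p flag does the same thing from the shell (root required):

        /* Sketch: enable two swap partitions at equal priority so the
         * kernel stripes pages across both. Device paths are made-up
         * examples; requires root. Same effect as: swapon -p 1 <dev> */
        #include <stdio.h>
        #include <sys/swap.h>

        int main(void)
        {
            int prio  = 1;
            int flags = SWAP_FLAG_PREFER |
                        ((prio << SWAP_FLAG_PRIO_SHIFT) & SWAP_FLAG_PRIO_MASK);

            if (swapon("/dev/hda2", flags) != 0)
                perror("swapon /dev/hda2");
            if (swapon("/dev/hdb3", flags) != 0)
                perror("swapon /dev/hdb3");
            return 0;
        }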
  • by harikiri ( 211017 ) on Saturday May 29, 2004 @02:51AM (#9283780)
    If I recall correctly, Welchia [symantec.com] (the worm) looked for target hosts by ICMP scanning. On several of our cisco routers, the increased traffic resulted in them running out of memory, to such a point where you could not log into them.

    Apparently a new feature (mentioned by a network engineer workmate), is to have the IOS reserve a portion of memory for administrative tasks (like supporting the login process and configuration shell).

    A feature like this, that "reserves" a portion of RAM so that if something really fubars your system, you can still login to fix it - would be great for Linux/BSD.
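    Nothing in stock Linux reserves RAM for logins like that, but the closest existing approximation is a privileged process pinning itself in memory with mlockall(2), so a rescue shell or watchdog stays swap-proof while everything else thrashes. A minimal sketch, assuming root:

        /* Sketch: pin this process's current and future pages in RAM via
         * mlockall(2) so it can never be swapped out; a watchdog started
         * this way stays responsive during thrashing. Requires root. */
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/mman.h>

        int main(void)
        {
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                perror("mlockall");
                return 1;
            }
            /* ... the rescue/watchdog tool's real work goes here ... */
            pause();
            return 0;
        }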

  • by Anonymous Coward on Saturday May 29, 2004 @02:53AM (#9283783)
    Just in case you need it.

    Although, I don't know what the big deal is. My OpenBSD server, which has 2 gig swap and 1 gig ram, hasn't actually USED any swap for more than 2 months. The server is used for email & an intranet site, with about 50 concurrent users.

    Of course, OpenBSD is dying, so what do I know...
  • by Anonymous Coward on Saturday May 29, 2004 @03:01AM (#9283797)
    If your system is being used as a desktop, and responsiveness isn't an issue, and you have enough memory for everything you need/want to do -- great. Don't worry about swap. However... for servers, in particular, running without swap is not a good idea. Several reasons:
    1. The system can swap out unused portions of memory (that have been allocated, written to once, and not touched in a long time), and use that memory as a disk cache. Depending on how often those "unused" portions are needed, this can be a big win.
    2. As somebody else pointed out, if a process goes haywire and allocates far too much RAM, swap gives you a bit more breathing space before it becomes a problem.
    3. Final point. Under Solaris, you can configure the kernel so that, if it panics, it dumps the entire contents of RAM to the swap partition. On the next bootup, this memory dump is read, and put into a real file on a real filesystem. This can help track down the cause of problems. But for this to work, the swap partition must be at least as large as the amount of physical RAM you have.
    Why write the contents of RAM to swap? Well, where else can it go? The kernel's just had a panic. You can't trust any significant part of the code (eg: filesystem drivers). You do, however, know where the swap partition is, and it's safe to scribble all over it; none of the apps that were running are going to continue anyway.

    It's all about what you want to do with the system, and making a judgement call on this. Me? I say, disk is cheap; why not have a swap partition?

  • by arvindn ( 542080 ) on Saturday May 29, 2004 @03:11AM (#9283819) Homepage Journal
    Unfortunately, that's very difficult, perhaps impossible.

    The users who are complaining about swap are saying that it decreases desktop responsiveness. Responsiveness is different from performance, and is frequently antithetical to it. It is inherently subjective and therefore hard to quantify.

  • by erikharrison ( 633719 ) on Saturday May 29, 2004 @03:21AM (#9283839)
    For the kinds of complaints about Linux swap I've been seeing of late, it would be bogus to call swap the issue, really. People looking to eliminate swap entirely on desktop machines are cutting off the arm for the sake of a finger.

    The issue with swapping in a desktop system is that perception of system responsiveness is almost as important as real performance, and swapping in (actually, it's paging in, but that's semantics) causes high latency. This is especially noticeable when returning to an idle machine. So we want to cut latency.

    People say "the kernel shouldn't swap unless it can't fit everything it needs in system memory." Duh! And it doesn't! It's swapping to increase the size of the file cache, a huge performance win. If the file cache gets too small (say, because this Wal-Mart PC only has 128 megs of RAM, and you've turned off swap, so Moz is eating it all) then you wind up with disk seeks for harddrive intensive applications, causing the same latency as swap.

    What's clear to me from these complaints is that the file cache isn't smart enough. People with lots of RAM want to cut down on all these disk reads - that's why they got gobs of RAM. (Ain't it funny that the same Linux heads who say that Linux makes a little machine fly also say that a desktop has no reason to have less than 512MB or 1GB of RAM.) At the same time, smaller machines should still be supported, and even folks with gobs of RAM don't want to eliminate swap, otherwise disk-bound apps suffer the same latency they're trying to eliminate.

    The Linux file cache seems too aggressive for most users. Ext2 loves a file cache like no other filesystem, and this probably influenced the design. If the file cache can be smarter about when to swap to grow itself, and when it should just be content to use up all available system memory, then lots of these latency issues can be fixed in a way which will scale across both hardware and multiple use environments.
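    For what it's worth, 2.6 kernels already expose one knob for this trade-off: /proc/sys/vm/swappiness (0-100), which biases reclaim between shrinking the file cache and swapping out anonymous pages. Whether that counts as "smart enough" is exactly the open question; below is a minimal C sketch of reading and lowering it (echo or sysctl would normally do the job):

        /* Sketch: read vm.swappiness and lower it so the kernel prefers
         * shrinking the file cache over swapping application pages out.
         * Equivalent to: echo 20 > /proc/sys/vm/swappiness
         * (2.6 kernels, needs root). */
        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("/proc/sys/vm/swappiness", "r+");
            int current;

            if (!f) { perror("fopen"); return 1; }
            if (fscanf(f, "%d", &current) == 1)
                printf("swappiness was %d\n", current);
            rewind(f);
            fprintf(f, "20\n");    /* 0..100; lower = swap less eagerly */
            fclose(f);
            return 0;
        }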
  • by kasperd ( 592156 ) on Saturday May 29, 2004 @03:34AM (#9283866) Homepage Journal
    Alternatively, under what kind of running application mix is it true that reserving Y amount for swap yields a better memory management algorithm than using X+Y fully?

    I don't think you will find an application where that is the case. But maybe if the Y amount you allocate for swap happened to be slower than the rest of the RAM, it could improve performance. If all of your RAM has the same speed, the VM really has to be f***ed up not to give you better performance when it can manage all of your RAM.
  • by cpghost ( 719344 ) on Saturday May 29, 2004 @03:39AM (#9283880) Homepage

    Please don't touch the sticky bit semantics. They are still used on other Unix-like systems (though rarely), and having a different meaning in Linux is just asking for trouble.

    A better way would be to use other file attributes. On FreeBSD you can use chflags(1) to set flags like arch, opaque, nodump, sappnd, schg, sunlnk, uappnd, uchg, uunlnk. It is IMHO always better to add more flags in a specific filesystem implementation, than to break backward compatibility without very good reason.

  • Re:IMHO (Score:3, Insightful)

    by PacoTaco ( 577292 ) on Saturday May 29, 2004 @03:45AM (#9283891)
    Finally, every Linux user that has compiled a kernel knows that it can really tax a system. Gentoo users also know how strenuous a XFree86 or KDE/Gnome compile can be.

    It shouldn't be, unless you have a low memory system and everything (including your swap) is on an older IDE disk that doesn't seek quickly. I often leave large builds running in the background on Windows, BSD and Linux systems with no noticeable impact on system responsiveness.

  • by Eskarel ( 565631 ) on Saturday May 29, 2004 @04:30AM (#9284004)
    Well, that's really not a totally fair comparison. True, you may use far more resource-intensive apps on your Linux machine, but unless you're running some variation of Wine (and even to a certain extent then) it's not likely many of those apps are games.

    A heavily used server is really not comparable to a game even if it seems like it uses more resources simply because it's far more likely to be well written than your average game. I've seen some seriously nasty memory leaks in popular games that I'd never see in something which was better designed be it for Windows or Linux.

    If you were comparing running the same app on windows vs linux then perhaps you could criticize the memory manager(which is honestly probably not as good), but you're not.

  • Re:IMHO (Score:1, Insightful)

    by Anonymous Coward on Saturday May 29, 2004 @04:32AM (#9284008)
    I agree that systems with constraints on RAM need to have space for disk caching, etc... but there is a problem with how this is all implemented on the two dominant OS options (Wxxx and Linux). Both systems are tuned to assume RAM is inadequate, even though this is no longer common in newer systems. For example, on my primary dev system the OS swaps out 30M of data to swap, then allocates 750M for disk cache, and a quick run through the cache buffers with a debugging tool shows that the entire swap file is in the disk cache, along with 400M of unused disk cache, and about 300M of other data, 95% of which is marked stale (meaning the system has to reread it on next access anyway). This leaves me in the strange position of having the OS swapping much of itself out to slow disk, then using RAM to cache the entire swap file to improve performance. I don't have any choice in the matter. Swap is fine, but only if we can put something resembling an intelligent algorithm in there to ensure that swap is only used if, and when, it's really needed.
  • by majid ( 306017 ) on Saturday May 29, 2004 @05:20AM (#9284103) Homepage
    A swapless system won't be faster for the same workload, usually the contrary, in fact, since lack of swap denies the system the opportunity to optimize RAM hit ratios. What a swapless system can do is force admission control on new processes in the system, thus enforcing a no-overcommit policy on RAM, and therefore increasing responsiveness at the expense of global throughput.

    Swap thrashing in a desktop environment is usually the sign of a workload that is too high for available memory, e.g. trying to run far too many apps simultaneously. No amount of OS smarts is going to compensate for overbooking RAM with too large a working set. The solution is to increase RAM or not run as many apps simultaneously.

    Swap thrashing in a server environment is usually the sign of improper server configuration. Naive administrators configure too many processes, thinking they will avoid a bottleneck if all server processes are busy, but all they achieve is turning RAM into the bottleneck rather than the server processes themselves. If you have a web server and configure Apache to have too many running processes, these processes will spend their time contending for RAM instead of doing useful work. Too many cooks spoil the broth. A swapless system would prevent excessive Apache processes from starting in the first place, thus alleviating the problem (at the expense of high error rates, which is probably not acceptable), but performance won't be anywhere as good as a system with swap and properly sized Apache process limits.

    Swap is not a panacea. It should not be used to protect against runaway processes (setrlimit is here for that). It is useful in absorbing sporadic spikes in traffic without causing denial of service, and to shunt away uselessly allocated virtual memory (ahem, memory leaks).

    As for the idea of putting swap on a RAMdisk, it is completely brain-dead (unless you have exotic memory arrangements such as NUMA) - the kernel is going to waste a lot of time copying memory from the active region to the ramdisk region and back. A straight swapless system will be preferable.

    There is no hard and fast rule for sizing swap, it depends on your workload, such as the average ratio of RSS to SIZE. The usual rule of thumb is between 1x and 2x main memory.
  • by hobo2k ( 626482 ) on Saturday May 29, 2004 @05:26AM (#9284111) Journal
    I had to read that a couple times before I noticed the problem. You have the second theorem wrong. It should say: "X ram + Y swap is slower than (X + Y) ram with NO swap at all". Then, your question about managing X+Y ram wouldn't make sense because there is nothing to manage: either you run out of memory and apps die or you don't.

    Intelligent memory management only affects performance if you have swap space. Swap space could be defined as storage which is slower than main memory. If all your storage is the same speed, memory management is trivial.

    Ironically, I have the same challenge for theorem #1 that you used for #2. #1 states that having swap is better than not having it. Clearly having swap increases the amount of allocations programs can make before their malloc's fail. But improve performance? That is only true if the OS can predict what data is needed for the future operations. If it predicts wrong, the usage of swap can degrade performance.

  • Re:Swap sucks. :) (Score:3, Insightful)

    by oolon ( 43347 ) on Saturday May 29, 2004 @07:58AM (#9284379)
    There are good reasons for swap. For example, when a program forks, you need spare RAM for the complete process space; this space normally comes from swap, before being wiped out when a new command is execed. Another good thing to do with swap space is to mount /tmp on tmpfs: that way, if you have lots of memory, /tmp will come from memory rather than disk, and if you're stuck for space it will use the swap space.

    James
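    A sketch of the fork-then-exec window described above: between fork() and the exec, the child is a copy-on-write duplicate of the parent, so the kernel needs backing store for a potential second copy of the parent's writable pages -- and swap is the cheap place to promise it from:

        /* Sketch of fork-then-exec: between fork() and execlp() the child
         * shares the parent's pages copy-on-write, but the kernel still
         * needs backing store for a potential full copy -- usually
         * promised out of swap, as the comment above describes. */
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/wait.h>

        int main(void)
        {
            pid_t pid = fork();      /* address space duplicated (COW) here */

            if (pid < 0) {
                perror("fork");      /* a huge parent can fail here w/o swap */
                return 1;
            }
            if (pid == 0) {
                execlp("ls", "ls", "-l", (char *)NULL);  /* copy discarded */
                perror("execlp");
                _exit(127);
            }
            waitpid(pid, NULL, 0);
            return 0;
        }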
  • by swilver ( 617741 ) on Saturday May 29, 2004 @08:08AM (#9284398)
    The default kernel behaviour is WRONG. The whole idea of caching is to keep data that is likely to be accessed again. How likely is it that you will be watching a 1 GB movie again?

    Of course, the kernel has no idea about watching movies, but it still can distinguish this "unimportant" data from data that does need to be cached. The most important signal is how fast the data is needed in the first place.

    When I grep the kernel tree, the hard disk is the bottleneck; it is worth caching this data, as grepping from memory would enhance performance.

    When I play a movie, the hard disk is not the bottleneck; in fact, NOTHING is a bottleneck, as my movie would be stuttering and unwatchable otherwise. This data is not worth caching (at least not so much of it that everything else gets swapped out).

    This goes for most media streams, but also for interaction with the internet (downloads/uploads/p2p). There's no need to keep a 1 GB file cached when that file has "accumulated" its cache space over the course of an hour or more (i.e., slow I/O); if it were important enough to warrant caching, the hard disk would have been the bottleneck in the first place...

    --Swilver
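    For what it's worth, 2.6 kernels already accept exactly this hint from applications: posix_fadvise(2) with POSIX_FADV_DONTNEED lets a player drop the part of the stream it has finished with from the page cache. A rough sketch of a streaming reader (the chunk sizes are arbitrary):

        /* Sketch: stream a big file and periodically tell the kernel to
         * drop the part already consumed from the page cache, so one
         * 1 GB movie doesn't evict everything else (2.6 kernels). */
        #define _XOPEN_SOURCE 600
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            char buf[64 * 1024];
            off_t done = 0;
            ssize_t n;
            int fd;

            if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
                perror("open");
                return 1;
            }
            while ((n = read(fd, buf, sizeof buf)) > 0) {
                /* ... hand buf to the decoder here ... */
                done += n;
                if (done % (4 * 1024 * 1024) == 0)   /* every 4 MB */
                    posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
            }
            close(fd);
            return 0;
        }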

  • by Proud like a god ( 656928 ) on Saturday May 29, 2004 @08:37AM (#9284455) Homepage
    Surely if your system runs out of RAM it shouldn't die? The runaway process, sure, but the OS should be able to reclaim some RAM from that and manage to carry on, no?
  • Re:IMHO (Score:3, Insightful)

    by maraist ( 68387 ) * <michael.maraistN ... m ['AMg' in gap]> on Saturday May 29, 2004 @08:39AM (#9284461) Homepage
    I can't confirm the details, but it was my understanding that the Alan Cox fork in RedHat 9 had an implementation where the swap had to be at LEAST as big as main memory. Theoretically the reason is that you perform pre-swapping: if you waited until the last second to do any swapping, your most efficient choice would be to swap out non-dirty pages; but if instead you write dirty pages to swap during idle periods of IO, then when it comes time to swap, just about everything is fair game, and you can truly swap out the LRU pages with great efficiency.

    Thus you'd pre-swap once you got to like 60% full memory.

    Moreover, as others have said, unless you have as much RAM as the average hard-disk space used per day, you are in a non-optimal operating environment, since your cache isn't as big as it should be. Cache flushing and swapping are almost identical in user-time experience (though arguably, re-reading a contiguous chunk of file data is going to be faster than swapping back in randomly positioned data; but how many files are contiguous these days?).

    Thus if you have a daemon with a LOT of setup code which is never used again after startup, then it makes sense to permanently swap this out to disk to free up space for the cache.

  • by grmoc ( 57943 ) on Saturday May 29, 2004 @04:34PM (#9286333)
    When you're DMAing large amounts of memory, memory fragmentation becomes an issue.

    This is why the 'bigPhysArea' patch to the kernel exists-- to create contiguous blocks of memory which can be transferred without having to scatter/gather.

    Note that this is independent of memory -usage-.. this is an issue with the 'block size', if you will, of memory segmentation.
  • by Alien Being ( 18488 ) on Saturday May 29, 2004 @05:27PM (#9286616)
    The parent was talking about processes "munching RAM", not VM.

    On a system with no swap, all of VM would be exhausted very quickly by a runaway, at which point the behavior you're describing would kick in. But on a system *with* swap, IO waits act like a brake. In some cases it gives the admin time to stop the runaway train before it hits the wall (no more VM).
