Is Swap Necessary?
johnnyb writes "Kernel Trap has a great conversation on swap, whether it's necessary, why swapless systems might seem faster, and an overall discussion of swap issues in modern computing. This is often an issue for system administrators, and this is a great set of posts about the issue."
IMHO (Score:3, Interesting)
I really doubt that the majority of new desktop PCs need to swap to the HD at all.
The used/unused-portions argument from the article isn't quite right. You don't have to swap every unused bit;
if you have enough RAM, leave everything there. It's R-A-M: don't access the parts you don't need.
If something isn't in RAM, read it from the drive;
don't waste time putting it where it mostly was in the first place.
I'm willing to bet that people who need performance don't often run 10 applications at the same time. If they do, they
surely know what they're doing.
IMHO the average user should get enough RAM and no swap, and let the OS optimize things a bit.
no swap? (Score:4, Interesting)
Re:Swap is definitely necessary (Score:2, Interesting)
I don't use linux, so I can't say how well it'd work on my machine without swap, but I can't imagine it'd be any worse.
* For the Windows-ignorant: a pagefile is the Windows equivalent of swap.
Swap can save your ass (Score:5, Interesting)
You've some good points... (Score:5, Interesting)
Already, there are systems that minimize that need, set-top boxes, embedded systems in general. But each of those is seriously modified (kernel-wise, mostly) to achieve the responsiveness, the frugality of resource treatment that a general purpose desktop computer can't expect to enjoy.
That doesn't mean that developers should stay in the same rut, assuming that hardware that confined system design in the '60s, '70s... '00s will perpetually assign similar constraints.
IMO, desktops still need to swap... for now. But let's not paint ourselves into a performance corner.
Re:IMHO (Score:5, Interesting)
Maybe you have enough memory to run your program, but not enough to keep the relevant directory structures in RAM, so you keep needing to read the disk. If there are pages in that program that were only used once, during startup for example, it makes sense to get them out of memory, so that memory can be used for disk caching instead.
Now, you have to understand how Linux handles paging, too. Unmodified pages from executables that are running may be discarded by the kernel at any time, because it knows where to get them. They won't be thrown into swap because it's not necessary. On the other hand, if that particular page has been modified (and some are modified as they are loaded by ld.so, for example), then the page must be copied into swap before it's discarded.
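A minimal way to watch this behaviour on a 2.6 kernel, using nothing beyond standard procfs entries, is to read the kernel's own counters:

```shell
# How much swap exists and how much is free
grep -E 'SwapTotal|SwapFree' /proc/meminfo

# Cumulative pages swapped in/out since boot; a climbing pswpout
# means modified pages really are being written out to swap
grep -E '^pswp(in|out)' /proc/vmstat
```

Unmodified executable pages that get discarded and re-read show up as ordinary file I/O, not in these swap counters, which matches the distinction above.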
Try this with linux (Score:5, Interesting)
I just don't get it. (Score:3, Interesting)
Specifically, suppose I have one computer with 1GB of RAM and 1GB of swap, and another computer with 2GB of RAM and no swap. Under what circumstances will the first computer be any faster?
Now I suppose if the swap is used for other things besides memory space then I could understand it. But then it seems like a simple solution would be to allocate a fraction of RAM for those things. In effect, create a swap partition on a RAM disk.
Seriously, I'd appreciate some education here, but make sure you answer my specific scenario above if you reply... thanks
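For what it's worth, the RAM-disk experiment is easy to sketch on Linux, assuming a ramdisk device such as /dev/ram0 is available (kernel ramdisk support) and you have root. As others point out, though, RAM reserved this way is RAM the kernel can no longer manage directly:

```shell
# Format a ramdisk as swap space (device name assumes ramdisk support)
mkswap /dev/ram0

# Enable it at higher priority than any disk-based swap,
# so the kernel fills it first
swapon -p 10 /dev/ram0

# Confirm the active swap areas and their priorities
swapon -s
```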
I'm curious how windows does it (Score:3, Interesting)
Running KDE 3.2.1 now, I notice it takes longer to open apps than it does in windows. Mozilla for example takes literally a few seconds longer to open each window than it did in windows. Another thing windows does is make it faster when you run an app right after you ran it then closed it. Say for example in windows I run mozilla, then close it, then open it. When it opens it the second time, it's almost instant. However in linux, it seems to take the same original amount of time to load it completely. I'm sure it has to do with an entirely different process of loading programs, but apps always seemed to open faster in windows than in linux, in my view.
Then again, graphics used to be in the NT kernel and that's what made it appear fast, but it led to a lot of problems and crashes, so maybe the longer load time is worth the wait when compared to a reboot.
Re:On a laptop... yes, for the wrong reason (Score:3, Interesting)
Re:You've some good points... (Score:5, Interesting)
However, when I ran a program, the amount of used ram DROPPED.
Of course, in an environment where the system gets hammered, it's all very well talking about how cheap RAM is, but so is hard disk space. Is it really worth not setting up a bunch of swap space? What if a rogue process munches its way through the RAM while you're away? Would it not be better to have swap space so the system can keep running, albeit not very well, than have it just die on you?
I don't know, I ain't a sys admin, but performance issues aside, I don't see why you should risk it. I'd rather have swap partitions on a hardcore system than not.
It's a choice... (Score:5, Interesting)
Re:If You have enough RAM (Score:1, Interesting)
"well it is a magical property of swap space, because extra RAM doesn't allow you to replace unused memory with often used memory. The theory holds true no matter how much RAM you have. Swap can improve performance. It can be trivially demonstrated."
Wouldn't having a swap drive in ram improve the overall performance of having a swap drive and still keep the above true?
Amiga (Score:5, Interesting)
I know, I know, the Amiga didn't HAVE virtual memory. Well actually it did if you had an 040 and installed a memory management program such as GigaMem, but so few people had a use for such a thing that it was practically unheard of.
Oh, and before someone jumps in saying that I wasn't able to do anything else, that is totally NOT the case.
Very often I was doing lots of stuff. The difference is developers were used to working within memory constraints, and nowadays developers are used to systems growing into the applications.
Re:swap rule! (Score:5, Interesting)
Instead, the swap needed depends on the sort of usage pattern your machine has. If it's a desktop with 1-3GB of RAM, a swap partition of 1GB is completely adequate. If you want the machine to swap as little as possible while still utilizing all the RAM, turn down swappiness a bit to keep Mozilla/Firefox from being paged out when you leave for 15 minutes.
On a server you need a whole lot more swap, the more the better. Not because it's necessarily any faster; in fact it might be slower, since with a high swappiness setting the system decides you don't really need that 2GB DB in memory if it's been unused for a month. But when you do run out of memory in legitimate use, the shit will really hit the fan if there isn't enough swap to pick up the slack.
Re:swap rule! (Score:5, Interesting)
Personally I use a value of around 20 or less for desktop machines. This keeps Mozilla from being paged out after a short while, which really shouldn't be happening on modern hardware. Too bad you can't achieve the same effect in Windows 2000. Some people swear that a swappiness of 0 is ideal for their desktops; your mileage may vary. It's fun to play with in any case, and any changes you make take effect instantaneously.
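For the curious, swappiness is just a sysctl on 2.6 kernels; reading it needs no privileges, writing does. The value 20 below is only the figure suggested above, not a universal recommendation:

```shell
# Read the current value (0-100; higher means more eager to swap)
cat /proc/sys/vm/swappiness

# As root, a change takes effect immediately:
#   sysctl -w vm.swappiness=20
# or, to persist across reboots, add a line to /etc/sysctl.conf:
#   vm.swappiness = 20
```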
swap sucks with 2k & xp - disable it if possible (Score:3, Interesting)
For Win9x: use up RAM until it's almost gone, then start allocating swap space in anticipation of actually using it. Should memory allocation still be increasing, then actually use swap space. Reverse the order when freeing memory.
I had 384 megs of RAM at the time, and as long as I used less than about 350 megs total the system wouldn't be in swap.
For Win2k & XP: (when within physical RAM limits) whatever amount of memory is requested, allocate between 60-80% to RAM and the rest to the swapfile. Even the disk cache partially goes to swap! I didn't believe it at first, but all one has to do is look at the numbers in the task manager's memory/CPU window. At first I figured that all I'd need to do is throw in some more RAM and the disk thrashing and absolute crawl would go away. I put in a gigabyte of RAM (I never allocate more than 700 megs at most, and total system memory usage on bootup is 100 megs). Even with the extra RAM the problem stayed the same.
Turning off swap gives me consistent fast performance, and since the disk cache isn't (partially) swapped, I get 2x the throughput I had with a swapfile on large file copy operations.
Machine tested: Duron 1.3GHz, 1 gig of PC133 RAM, 2x 80 gig WD800JB HDDs, running Win2000 & WinXP with Newsbin, which allocates disgusting amounts of RAM in a large header grab (yeah, I could have used a test program, but why do that when Newsbin is a real-world test for me). The OS and applications are on different drives on their own IDE chains.
With swapfile enabled (size = 1.5x system RAM):
allocation time: unaffected, only the time to perform the task requested
memory de-allocation time (by either quitting the app or selecting another group): 23 MINUTES of constant disk thrashing
With swapfile DISabled:
allocation time: unaffected, only the time to perform the task requested
memory de-allocation time (by either quitting the app or selecting another group): 2 seconds
Re:If You have enough RAM (Score:5, Interesting)
There are two "theorems" quoted: The first says that no matter what, if you have a size X of RAM used by the OS, and you add a size Y swap disk, you get better OS performance than if you only had X RAM.
The second "theorem" says: if you have X RAM + Y swap disk, then add Y RAM and use that instead as the swap disk, then you get *faster* performance.
The naysayers now say that the second statement is misleading. Why? Because with X+Y RAM and Z swap disk, you'd get better performance again.
I think this betrays an underlying assumption which I'm not sure is true, namely: X+Y RAM managed by the OS any way it likes is always better managed than X RAM managed by the OS any way it likes plus Y RAM reserved for swap operations.
In fact, let us suppose that the OS memory management is not optimal, i.e. when the OS manages X+Y amount of RAM, it does so suboptimally. Then it is possible that a different memory management scheme, e.g. X RAM used normally + Y RAM used exclusively for swap, may turn out to make better use of the available total RAM.
So the theoretical question is this: is Linux's memory management sufficiently optimal that with an ordinary set of applications running, it can always make better use of X+Y amount of RAM than if it always reserved Y for swap? Alternatively, under what kind of running application mix is it true that reserving Y amount for swap yields a better memory management algorithm than using X+Y fully?
why not have swap? (Score:1, Interesting)
There's also the use that if for some reason a system panics (hey, it happens) you have a place for the kernel to dump to. This can be valuable in helping debug what happened with a backtrace.
I always have 2G swap (Score:5, Interesting)
Here's a real-life example of why swap is useful. One machine I manage has a gig of ram. At the time of purchase, that seemed quite reasonable. But the users are working on a project that takes 2 gig of ram. So currently it's using a gig of the swap. Yes, that's bad, and I'll be adding a second gig to it in a few days (it's in the mail). But in the mean-time, that swap space is really handy. It means the users can get their work done! Think of the first 256M of swap as being for speed. If you're regularly using more than that, then it's time to order more ram. But it's nice to have the spare gig of ram for odd jobs, or while you're waiting to install it.
I'm no expert, but I think a lot of these arguments could be resolved if people took advantage of ulimit constraints. If you can limit how far a program can get out of control, then there's no longer a concern about a single user sending the server into swap hell. One of my current projects is to figure out reasonable limits.
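A rough sketch of that idea in bash (the 512MB figure is arbitrary, and the job name is hypothetical): limits set with ulimit apply to the shell and everything it spawns, so a subshell makes a convenient sandbox.

```shell
(
  ulimit -v 524288   # cap virtual memory at 512MB (value is in KB)
  ulimit -v          # prints the cap: 524288
  # run the untrusted or memory-hungry job here, e.g.:
  #   ./big-batch-job   (hypothetical)
  # allocations beyond the cap now fail inside this subshell
  # instead of dragging the whole server into swap hell
)
```

Because a lowered limit can't be raised back within the same shell, the subshell also keeps your login session unaffected.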
Re: I'm curious how windows does it (Score:5, Interesting)
Just FYI (Score:2, Interesting)
Simply put, you need enough 'memory' to hold all the stuff you want to run, plus caches. For a given task, you might go for a system with 512MB RAM and a 512MB swap, and I'll just go for 1GB RAM and forget the swap. The only difference is that if/when your system comes up on its RAM limit, it's going to start slowing down. When it starts using a lot of swap, it's going to crawl. But it'll still run. Until you run out of both.
Mine will run like blazes up to the 1024MB limit, then barf. No warnings like with swap.
So if you want an early warning sign, use swap. If your needs are well known and won't push beyond the limits of your hardware, don't bother.
You can always add a swap file later/on the fly as your needs change anyway.
dd if=/dev/zero of=/data/swap bs=1M count=512
mkswap /data/swap
swapon /data/swap
One of the tricks.. (Score:2, Interesting)
Re:If You have enough RAM (Score:3, Interesting)
Re:Where many people miss the point... (Score:3, Interesting)
Actually this sounds likely, but is it a good idea? Alternatively it could do a memcpy of your data from low memory to high memory. So now you have the choice between occupying the CPU to perform the memcpy, or occupying the disk controller to swap it out. But data that you could swap out is process memory, which you'd expect to be allocated from the high memory. So how do you actually reach a situation where process memory pages end up in low memory? You'd have to fill up the high memory first. Of course if you run a program that requires a lot of memory, which is all allocated from high memory, then other programs might get low memory. When the first program terminates you could have a lot of free high memory and other programs still taking up the low memory.
Re:Swap space not needed.... (Score:2, Interesting)
That's part of the reason why the 4GB addressing limit matters, and x86 really is hurt badly performance-wise if you have more than a GB or 2, even below the physical 4GB limit (which can be extended via Intel's PAE extensions; the limit doesn't exist in native AMD64, or Intel's semi-copy of it). x86 relies on paging and virtual memory spaces, with upper addresses reserved for libraries and kernel calls. This mapping may take up a fair amount of space, and when manipulating large data sets (very large images, databases and other stuff) this becomes problematic, because of the virtual 4GB limit. The physical limit may not have been reached, but the virtual limit is. That doesn't mean more RAM isn't faster, but it does mean that there is a speed hit in some cases.
If Bill Gates said it (he has denied it, but it's been around a LONG time), it may have even been something resigned, possibly prefaced with an "Oh well," or something like that.
Re:No more swap! (Score:3, Interesting)
You are absolutely correct that garbage collectors play hell with swap. It's pretty easy to understand why: to determine what is garbage and what is not, the garbage collector has to check every live object and see what they hold pointers to.
Think about that one for a sec -- the garbage collector has to look at every single live object on the heap during every garbage collection pass. This means that any pages which were swapped out have to be fetched from disk, so you end up (usually) loading the entire heap back into memory during garbage collection.
The aforementioned problem is true of all accurate garbage collectors. The other problem depends on the exact sort of garbage collector, but in general live objects are moved around in order to clean up holes in the heap (think of it like compacting a database). This can give you another "scan the entire heap" situation.
The only real exception to this rule is that large data structures (such as the pixel data for an image) that do not contain pointers, and thus do not have to be examined, can remain swapped out if they aren't relocated during a particular garbage collection pass. The first page of the data structure must always be loaded no matter what, hence the "large" (really, multi-page) disclaimer.
An OS based on a GCed language such as Java will probably have to come up with some really innovative tricks for managing swap, or just do without.
Re:I'm curious how windows does it (Score:2, Interesting)
It watches applications startup and monitors what they read from the disk - it notes this in a log. During idle time it moves the sectors around on disk so that they will all be in the same place for the next time you start that app. When you start the app later it runs out and reads everything that it believes the app will want to read all at once. This pre-reading and disk order optimization makes XP start apps a heck of a lot quicker than previous versions of the OS did.
It also does exactly the same thing for the boot process. There's even a tool you can download from MS's web site that will allow you to force the system to clear what it thinks about the boot process, reboot, and force the ordering to take place immediately rather than during idle time.
David
Re:Try this with linux (Score:3, Interesting)
Fine for you, being a typical home user not doing much with your PC.
Now with me editing 4GB video clips, rendering a 2GB CG clip, or trying to process a large rotoscoping project in film-gimp, 1GB of RAM is consumed 3 minutes after I sit down at that machine.
I have 2GB of RAM + 4GB of swap, and I can easily fill it all up using either Blender, Film-Gimp or any of the other tools I use.
And I'm betting that many others here who actually use their computers for real work, instead of what the typical home user uses it for, will also chime in.
Basically, when you hit the top of your RAM... all hell will break loose on your machine. I can't afford to lose my work when I run out of RAM, so I use swap to get more done.
Re:Where many people miss the point... (Score:5, Interesting)
Re:IMHO (Score:3, Interesting)
Taking out the swap in that machine and the system ran fine. Even running Half-Life: Counter-Strike via WineX by transgaming.
Do try that with Far Cry; I'm curious whether you'll notice a difference there. It recommends 1GB of RAM, and it certainly counts on unused memory being swapped out.
Personally, I think games are the one reason why swap is still very useful. You either run your programs on your desktop, or a game - not both. Getting enough RAM to hold everything is wasted money.
Reasons for swap... (Score:5, Interesting)
I don't know if Linux works this way, but...
UNIX kernels have assumed the availability of swap for nearly 35 years. You cannot remove this major architectural feature without unintended side effects.
Re:If You have enough RAM (Score:3, Interesting)
Another possibility that has been suggested for 2.7 is defragmentation of memory. Of course, just because it has been suggested doesn't mean it is going to happen. Without defragmentation, what are your options to satisfy a larger allocation when memory is fragmented? You'd have to free some memory, either by reducing the disk cache size, swapping out anonymous pages, or finding some slabs that can be freed. But notice that with more memory there would be more possible choices for what to free, so there would be a better chance of picking something you won't need in the near future. If you used some RAM for a ramdisk for swap, then you should not expect to use all of that RAM. So effectively you are using less of your memory, which again could mean a smaller chance of finding what you need in RAM. The failing allocation might as well be satisfied by removing a page from the page cache, which is certainly less desirable.
Of the given suggestions, defragmentation of memory is probably the only one that shouldn't cause performance problems in specific corner cases. (Me thinking back on the good ol' days with MS-DOS and AmigaOS, where the only solution to memory fragmentation was a reboot.)
Re:Swap thrashing is a symptom, not a cause (Score:3, Interesting)
Under normal circumstances it would make sense that keeping all available memory as RAM would make more resources available than stealing some to make a virtual memory swap space, but as most operating systems are designed to swap pages out as they become unneeded, or when processes start to hit the high-water mark, the overhead of the swap manager running and being unable to do anything due to having no swap at all just might be higher than the small amount of time needed to do some unnecessary copying of memory to swap out rarely-used pages.
Short of someone running a test on a machine with no swap at all vs., say, a tiny amount of RAM used as a ramdisk (say 5 meg on a 1GB machine), it's probably an academic argument to say flat out that no swap will always provide better performance than swap to ramdisk, especially if the kernel is designed to expect swap to be around.
If the kernel is designed to only swap out on resource shortages and not to optimize running processes as well, then swap to Ramdisk is a brain-dead operation. But I suspect the actual method of operation is a little more complicated than mere copy-on-resource-shortage, and thus it is conceivable that swap-to-ramdisk may provide better performance than no swap at all.
Paul Robinson <Postmaster@paul.washington.dc.us>
Swap Partitions (Score:3, Interesting)
I take it from the tone of the discussion that Linux still uses separate swap partitions? Why? My main machine now runs OS X, which swaps into the filesystem, and that seems to work a lot better. The system can decide what it needs to use, and I don't have to make a decision. I recall that Linux supports swap to the filesystem, but it sounds like nobody actually uses this feature. I can somewhat understand a server using a swap partition, since the needs of a server would be more or less known in advance and I assume it's marginally faster, but I don't see any reason to use one on a desktop machine. Why is everybody still using dedicated swap partitions?
Slow I/O should not be cached indefinitely (Score:2, Interesting)
Both OS's have filled their RAM with completely useless cached files (part of a 1 GB+ AVI for example, that I will most likely not be watching again for several months), swapping out all the programs I have running.
Both OS's really need to learn how to deal with Slow I/O. If I/O is only being done at a rate that is a fraction of my harddisk speed (say 300-400 kB/sec), which occurs for stuff like watching a movie, playing music, serving an upload over DSL, then this data is really not worth caching for longer than a few minutes. Even if I do need it again, it will probably again be at just 300-400 kB/sec, something a harddisk can take care of quite comfortably.
--Swilver
Photoshop...etc (Score:2, Interesting)
nocache directive (Score:5, Interesting)
Once in a while I'll do something like 'grep -r "oops" /big/filetree'. The fact of the matter is that I'm probably only reading any of that data ONCE, and it's not going to all fit in memory anyways, so I don't even gain anything if I run the grep a second time.
In a situation like that, I'd like to have some sort of 'nocache' directive that says 'Don't waste the cache with this'.
Something else that might help would be to have some sort of 'minprog' directive which would tell the swapper that a certain amount of space is reserved for 'program' data (i.e. code (including shared libs) and data), and that that memory shouldn't be swapped out in favour of something otherwise being read from disk. I think this might avoid the situation I sometimes run into of a large program (mozilla/gimp) being unresponsive after I do some other disk-intensive task (like the aforementioned recursive grep).
Things like the OS enforcing things like the RSS rlimit hints would also help. (I hadn't previously realized that it didn't).
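There's no such per-process directive in mainline kernels that I know of, but two approximations exist: a program can call posix_fadvise() with POSIX_FADV_DONTNEED to tell the kernel its cached pages can be dropped, and newer GNU coreutils dd can read with O_DIRECT, bypassing the page cache entirely. A sketch (the path is illustrative, and iflag=direct requires a filesystem with direct-I/O support):

```shell
# One-shot read that stays out of the page cache
dd if=/big/file.avi iflag=direct bs=1M of=/dev/null
```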
My experience (Score:5, Interesting)
My boss started worrying that we weren't going to be able to deliver what the company had contracted to deliver. He was the antithesis of a PHB and so he sat down and in a few hours wrote a small driver to emulate the overall task the project had to accomplish. No detail, just broad brush emulation. He was able to demonstrate with a few lines of code that nothing we could do would hit the delivery spec. Burroughs responded by doubling the amount of RAM on the box as well as installing RAM that was twice as fast as what they had initially delivered. The combination enabled us to turn off swapping and deliver a working product.
Fast forward to 2004 and I'm working on Excel spreadsheets that have 60-70 sheets in a workbook. Saving the book is a bitch - a 15-20 second wait after I hit ctrl-S. Every so often, Excel just goes away as it performs a prophylactic background save in case Excel dies. 15-20 second pauses because the software has become so bloated that saving a 2-3 meg document is an excuse to flog the poor drive into a seek frenzy. The drive, which was about 4 years old, finally gave up the ghost. Its replacement has an 8 meg cache separate from the 512 megs Windows manages - that "little" 8 meg chunk of RAM belongs to the hard drive alone. Night and day performance difference. The Excel swap frenzies that were induced by a simple ctrl-S are gone. 3 meg documents save in under a second - just what you'd expect from a drive with a transfer speed in excess of 60 Mbytes/sec.
My sense is that swap has always been a kludge. It's an attempt to squeeze more data into a machine that has only so much space. The working set graphs look pretty but they seldom describe what is happening day to day. Trading 2 nanosecond response for a 5 millisecond seek is seldom going to be a good trade. Bottom line from that OS class 35 years ago? Keep your working set size less than your physical memory and your machine will remain responsive. Just what the old IBM Geezers were saying in the first place.
Re:IMHO (Score:3, Interesting)
Is there anyone willing to take two identical machines and run a full Gentoo compile, with and without swap, with 256, 512 and 1024MB RAM installed, and time it? If swap really does make a difference, I think that sort of thing would help tell when swap is or is not useful on currently available systems. I'd love to do it myself but I simply don't have a good enough internet connection.
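If anyone does run it, a rough harness needs little more than /proc and time(1); the emerge target below is only an example:

```shell
# Baseline: which swap areas are active, and how much swap is in use
cat /proc/swaps
grep -i '^Swap' /proc/meminfo

# Then, for each RAM/swap configuration (Gentoo-specific, illustrative):
#   time emerge --oneshot sys-devel/gcc
# and compare wall time plus the change in SwapFree afterwards
```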
segregation Re:No more swap! (Score:4, Interesting)
Whilst that's strictly true, some modern languages use generational garbage collectors that segregate objects in memory according to age. Only when an age group gets full do they sweep through an age group, and move any surviving objects up to the next age group.
This heuristic works exceptionally well, runs fantastically quickly, and hardly ever triggers significant swapping.
There are some circumstances where it runs slowly, but in the worst case the performance is similar to simply doing a full garbage collection. These situations are pretty rare; objects generally segregate very well into young/old or young/middle-aged/old categories: the vast majority of objects die very young.
Sad isn't it.
hmm (Score:2, Interesting)
Re:No more swap! (Score:3, Interesting)
A generational garbage collector does not always have to check every single live object. It allocates objects in different generations (e.g. a generation may be every 5M of memory allocated by the GC) and the newest generation is scanned first and older generations are only scanned if memory cannot be found in the nearer generations.
This (mostly) alleviates the problem with straining swap because the GC is mostly scanning recently allocated memory that is probably still resident.
Re:swap deals with bloat (Score:3, Interesting)
Re:If You have enough RAM (Score:4, Interesting)
Sure, add more RAM. But swap will always be useful, because there's always some stuff which is better off on the disk because it hasn't been used in forever, and until your RAM is larger than your HD, you'll get better mileage out of that RAM if you use it as a cache.
Re:You've some good points... (Score:2, Interesting)
Not Windows XP for sure
Scotty