Linux Software

Is Swap Necessary? (581 comments)

johnnyb writes "Kernel Trap has a great conversation on swap: whether it's necessary, why swapless systems might seem faster, and an overall discussion of swap issues in modern computing. This comes up often for system administrators, and it's a great set of posts on the subject."
  • IMHO (Score:3, Interesting)

    by rd4tech ( 711615 ) * on Saturday May 29, 2004 @01:05AM (#9283378)
    One can have 1GB of RAM for a fairly cheap price.
    I really doubt that the majority of new desktop PCs need to swap to the HD at all.

    The unused/used portions argument from the article isn't quite true. You don't have to swap out every unused bit;
    if you have enough RAM, leave everything there. It's R-A-M: just don't access the parts you don't need.
    If something isn't in RAM, read it from the drive;
    don't waste time copying it out to swap when it's mostly on the disk in the first place.

    I'm willing to bet that people who need performance don't often run 10 applications at the same time. If they do, they
    surely know what they are doing.

    IMHO the average user should get enough RAM and no swap, let the OS optimize things a bit.
  • no swap? (Score:4, Interesting)

    by hawkeyeMI ( 412577 ) <brock&brocktice,com> on Saturday May 29, 2004 @01:09AM (#9283398) Homepage
    I ran Linux without a swap file on 128 MB of memory a couple of years ago. It was an accident: I didn't create a swap partition. I never had a problem (fortunately). Of course, I wasn't doing the heavy-duty stuff I am now (scientific computation).
  • by Anonymous Coward on Saturday May 29, 2004 @01:15AM (#9283422)
    Not for everyone. I've got 1GB in my machine, and I don't think I've ever come near maxing it out. I've actually turned off the pagefile* in Windows and haven't had any problems other than Photoshop whining every time I start it (even if it never uses more than 100MB of RAM, it still whines if there's no pagefile present).
    I don't use linux, so I can't say how well it'd work on my machine without swap, but I can't imagine it'd be any worse.

    * For the Windows-ignorant: a pagefile is the Windows equivalent of swap.
  • by Julian Morrison ( 5575 ) on Saturday May 29, 2004 @01:16AM (#9283426)
    Sometimes, when a process goes haywire, it will start munching RAM. If important programs like, say, sshd or X, can't malloc when they need to, they'll die ignominiously. Swap gives you the chance to kill the rogue process before your OS goes kaput. Its slowness can actually help for this.
  • by robslimo ( 587196 ) on Saturday May 29, 2004 @01:17AM (#9283428) Homepage Journal
    ...but today's production, heavily loaded systems will still need the ability to swap to/from disk.

    Already, there are systems that minimize that need: set-top boxes, and embedded systems in general. But each of those is seriously modified (kernel-wise, mostly) to achieve a responsiveness and a frugality of resource use that a general-purpose desktop computer can't expect to enjoy.

    That doesn't mean that developers should stay in the same rut, assuming that the hardware constraints that confined system design in the '60s, '70s... '00s will apply perpetually.

    IMO, desktops still need to swap... for now. But let's not paint ourselves into a performance corner.
  • Re:IMHO (Score:5, Interesting)

    by Trepalium ( 109107 ) on Saturday May 29, 2004 @01:22AM (#9283446)
    The other side of this is that memory that is not being used is wasted. Getting unused memory out of RAM, and into swap, so that memory can be used for real work can improve performance. This isn't just about memory that your applications are using. It's also about memory that is being used as cache for the disks you're using.

    Maybe you have enough memory to run your program, but you don't have enough memory to keep enough directory structures in RAM, so you keep needing to read the disk. If there are unused pages in that program that were only used once during startup, for example, it makes sense to get them out of memory, so that memory can be used for disk caching instead.

    Now, you have to understand how Linux handles paging, too. Unmodified pages from executables that are running may be discarded by the kernel at any time, because it knows where to get them. They won't be thrown into swap because it's not necessary. On the other hand, if that particular page has been modified (and some are modified as they are loaded by ld.so, for example), then the page must be copied into swap before it's discarded.

  • Try this with linux (Score:5, Interesting)

    by arvindn ( 542080 ) on Saturday May 29, 2004 @01:23AM (#9283451) Homepage Journal
    Notice how sluggish the system is after doing something disk-intensive like watching a movie. That's because the kernel is caching as much of the movie as possible to memory and swapping your running apps out. And kernel developers think this is a good thing, so it isn't going to change any time soon. IMHO for a desktop system this makes no sense, that's why I run my 1GB RAM machines with zero swap.
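
    A quick way to watch this happening (a rough sketch; the column names are from the stock procps free/vmstat tools, and the exact numbers will vary by system):

    free -m      # the "cached" column balloons while the movie plays
    vmstat 5     # the si/so columns show pages swapped in/out each interval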
  • I just don't get it. (Score:3, Interesting)

    by mcg1969 ( 237263 ) on Saturday May 29, 2004 @01:26AM (#9283465)
    Seriously, I don't get it. How in the world can swap ever increase performance?

    Specifically, suppose I have one computer with 1GB of RAM and 1GB of swap, and another computer with 2GB of RAM and no swap. Under what circumstances will the first computer be any faster?

    Now I suppose if the swap is used for other things besides memory space then I could understand it. But then it seems like a simple solution would be to allocate a fraction of RAM for those things. In effect, create a swap partition on a RAM disk :) (sketched below)

    Seriously, I'd appreciate some education here, but make sure you answer my specific scenario above if you reply... thanks
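
    For the curious, a minimal sketch of that thought experiment, assuming the rd ramdisk driver provides /dev/ram0 (the device name and priority here are illustrative, not a recommendation):

    mkswap /dev/ram0            # format the RAM disk as swap space
    swapon -p 10 /dev/ram0      # enable it at a higher priority than any disk swap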
  • by Stevyn ( 691306 ) on Saturday May 29, 2004 @01:27AM (#9283468)
    This may be slightly off topic...

    Running KDE 3.2.1 now, I notice it takes longer to open apps than it does in Windows. Mozilla, for example, takes literally a few seconds longer to open each window than it did in Windows. Another thing Windows does is start an app faster when you run it right after closing it. Say, for example, in Windows I run Mozilla, close it, then open it again. The second time, it opens almost instantly. In Linux, however, it seems to take the same original amount of time to load completely. I'm sure it has to do with an entirely different process of loading programs, but apps always seemed to open faster in Windows than in Linux, in my view.

    Then again, graphics used to be in the NT kernel and that's what made it appear fast, but it led to a lot of problems and crashes, so maybe the longer load time is worth the wait when compared to a reboot.
  • by robbo ( 4388 ) <slashdot@NosPaM.simra.net> on Saturday May 29, 2004 @01:28AM (#9283470)
    It's all diskdrake's fault. I didn't choose my swap size, AFAIR, although it's weird that it's smaller than my RAM. Beyond that, ACPI suspend recommends you have swap about 30% larger than your RAM. While it would slow down the suspend, I don't see why ACPI doesn't pipe /dev/mem through bzip first, or for that matter, why hibernate can't just dump to a file.

  • by Goldberg's Pants ( 139800 ) on Saturday May 29, 2004 @01:29AM (#9283478) Journal
    If you've just got a box sitting around not doing much (in other words, not serving pages, SQL or whatever), you can run with minimal RAM. My laptop has 24 megs of RAM. I did have a 100 meg swap partition, but I needed the space for a particularly huge DOS game I wanted installed, so I nuked it and converted it to a DOS partition. Booted Linux, checked the RAM usage, and most of the RAM was used.

    However, when I ran a program, the amount of used ram DROPPED.

    Of course, in an environment where the system gets hammered, it's all very well talking about how cheap RAM is, but so is hard disk space. Is it really worth not setting up a bunch of swap space? What if a rogue process munches its way through the RAM while you're away? Would it not be better to have swap space so the system can keep running, albeit not very well, than to have it just die on you?

    I don't know, I ain't a sys admin, but performance issues aside, I don't see why you should risk it. I'd rather have swap partitions on a hardcore system than not.
  • It's a choice... (Score:5, Interesting)

    by Beolach ( 518512 ) <beolach&juno,com> on Saturday May 29, 2004 @01:38AM (#9283526) Homepage Journal
    As I RTFA & previous comments here, I was rather surprised at how argumentative people were getting over this. Some people are saying swap is an absolute necessity & a swapless system is a broken system, while others say swap is an obsolete solution to a problem that no longer exists (expensive RAM). This seems odd to me, because as far as I can tell, the decision of whether & how much swap to use is based mostly on two things: specific situations (and thus there is no general answer to 'Is Swap Necessary?'), and opinion. And either way, with the Linux kernel today (and for quite a while now), I can choose for myself whether or not, and how much, swap I want to use. So if I am in a situation that I think requires swap, I can use it, and in a situation that I think would be hurt by having swap, I don't have to use it. So I don't see why there's so much hullabaloo about this: nobody is forcing anyone to do it one way or the other. And if someone else thinks it should be done differently from how I would do it, that's their decision, not mine.
  • by Anonymous Coward on Saturday May 29, 2004 @01:42AM (#9283546)
    This might not be such a funny comment, considering that the stated explanation of why swap improves performance is:

    "well it is a magical property of swap space, because extra RAM doesn't allow you to replace unused memory with often used memory. The theory holds true no matter how much RAM you have. Swap can improve performance. It can be trivially demonstrated."

    Wouldn't putting the swap drive in RAM keep the above true while improving the overall performance of having a swap drive?
  • Amiga (Score:5, Interesting)

    by Jace of Fuse! ( 72042 ) on Saturday May 29, 2004 @01:44AM (#9283566) Homepage
    In the 90's, I ran a 10 line BBS on an Amiga 4000 with 16 megs of Fast ram, 2 megs of Chip ram, and 0k for the swap file. :)

    I know, I know, the Amiga didn't HAVE virtual memory. Well actually it did if you had an 040 and installed a memory management program such as GigaMem, but so few people had a use for such a thing that it was practically unheard of.

    Oh, and before someone jumps in saying that I wasn't able to do anything else, that is totally NOT the case.

    Very often I was doing lots of stuff. The difference is that developers were used to working within memory constraints, and nowadays developers are used to systems growing into the applications.
  • Re:swap rule! (Score:5, Interesting)

    by Majix ( 139279 ) on Saturday May 29, 2004 @01:55AM (#9283612) Homepage
    The "swap=2x RAM" thing is obsolete admin trivia that simply refuses the die. It comes from the days when physical RAM was mapped into swap to simplify the swapping algorithm. If you didn't have at least a 1:1 correspondence between RAM and swap performance would suffer immensly. Starting with Linux 2.4 and up this is simply no longer true, there is no benefit from using excessively large swap partitions. Same goes for Sun OS and the BSDs these days.

    Instead, the swap needed depends on the sort of usage pattern your machine has. If it's a desktop with 1-3GB of RAM, a swap partition of 1GB is completely adequate. If you want the machine to swap as little as possible while still utilizing all the RAM, turn down swappiness a bit to keep Mozilla/Firefox from being paged out when you leave for 15 minutes.

    On a server you need a whole lot more swap, the more the better. Not because it's necessarily any faster; in fact it might be slower, since with a high swappiness setting the system decides you don't really need that 2GB DB in memory if it's been unused for a month. But when you do run out of memory in legitimate use, the shit will really hit the fan if there isn't enough swap to pick up the slack.
  • Re:swap rule! (Score:5, Interesting)

    by Majix ( 139279 ) on Saturday May 29, 2004 @02:05AM (#9283648) Homepage
    I forgot to explain swappiness. This is an entry in proc, /proc/sys/vm/swappiness, that you can plug a numerical value between 0 and 100 into. The higher the number, the more eager Linux will be to swap applications out of RAM to disk. There are a lot of conflicting opinions on what values you should use. Kerneltrap had a good article [kerneltrap.org] on it recently.

    Personally I use a value of around 20 or less for desktop machines. This keeps Mozilla from being paged out after a short while, which really shouldn't be happening on modern hardware. Too bad you can't achieve the same effect in Windows 2000. Some people swear that a swappiness of 0 is ideal for their desktops; your mileage may vary. It's fun to play with in any case, and any changes you make take effect instantaneously.
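
    For reference, a minimal sketch of reading and changing the value (run as root; 2.6 kernels default to 60):

    cat /proc/sys/vm/swappiness            # show the current value
    echo 20 > /proc/sys/vm/swappiness      # favor keeping applications in RAM
    sysctl -w vm.swappiness=20             # equivalent, via sysctl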
  • by Festering Leper ( 456849 ) on Saturday May 29, 2004 @02:08AM (#9283662) Homepage
    there's a definite pattern with regard to swap in the windows world.

    for win'9x: use up ram until almost gone, then start allocating swap space in anticipation of actually using it. should memory allocation still be increasing, then actually use the swap space. reverse the order when freeing memory.
    i had 384 megs of ram at the time, and as long as i used less than about 350 megs total the system wouldn't be in swap.

    for win 2k & xp: (when within physical ram limits) whatever amount of memory is requested, allocate between 60-80% to ram and the rest to the swapfile. even the disk cache partially goes to swap! i didn't believe it at first but all one has to do is look at the numbers in the task manager's memory/cpu window. at first i figured that all i'd need to do is throw in some more ram and the disk thrashing and absolute crawl would go away. i put in a gigabyte of ram (i never allocate more than 700 megs at most and the total system memory usage on bootup is 100 megs). even with the extra ram the problem stayed the same.

    turning off swap gives me consistently fast performance, and since the disk cache isn't (partially) swapped, i get 2x the throughput i had with a swapfile on large file copy operations.

    machine tested: duron 1.3ghz, 1 gig pc133 ram, 2x 80 gig wd800jb hdd.. os win2000 & winxp running newsbin which allocates disgusting amounts of ram in a large header grab (yeah i could have used a test program but why do that when newsbin is a real-world test for me). the os and applications are on different drives on their own ide chains

    with swapfile enabled (size=1.5x system ram).
    allocation time: unaffected, only the time to perform the task requested
    memory de-allocation time: (by either quitting the app or selecting another group) 23 MINUTES of constant disk thrashing

    with swapfile DISabled
    allocation time: unaffected, only the time to perform the task requested
    memory de-allocation time: (by either quitting the app or selecting another group) 2 seconds

  • by martin-boundary ( 547041 ) on Saturday May 29, 2004 @02:14AM (#9283674)
    I think this is the most interesting issue hinted at on the mailing list.

    There are two "theorems" quoted: The first says that no matter what, if you have a size X of RAM used by the OS, and you add a size Y swap disk, you get better OS performance than if you only had X RAM.

    The second "theorem" says: if you have X RAM + Y swap disk, then add Y RAM and use that instead as the swap disk, then you get *faster* performance.

    The naysayers now say that the second statement is misleading. Why? Because with X+Y RAM and Z swap disk, you'd get better performance again.

    I think this betrays an underlying assumption which I'm not sure is true, namely: X+Y RAM managed by the OS any way it likes is always better than X RAM managed by the OS any way it likes plus Y RAM reserved for swap operations.

    In fact, let us suppose that the OS memory management is not optimal, i.e. when the OS manages X+Y amount of RAM, it does so suboptimally. Then it is possible that a different memory management scheme, e.g. X RAM used normally + Y RAM used exclusively for swap, may turn out to make better use of the available total RAM.

    So the theoretical question is this: is Linux's memory management sufficiently optimal that with an ordinary set of applications running, it can always make better use of X+Y amount of RAM than if it always reserved Y for swap? Alternatively, under what kind of running application mix is it true that reserving Y amount for swap yields a better memory management algorithm than using X+Y fully?

  • why not have swap? (Score:1, Interesting)

    by Anonymous Coward on Saturday May 29, 2004 @02:23AM (#9283698)
    For most home users who can get 200+ GB of disk relatively inexpensively what's the big deal about giving up (say) 1GB for swap?

    There's also the fact that if for some reason a system panics (hey, it happens), you have a place for the kernel to dump to. This can be valuable in helping debug what happened with a backtrace.
  • by menscher ( 597856 ) <menscher+slashdotNO@SPAMuiuc.edu> on Saturday May 29, 2004 @02:30AM (#9283718) Homepage Journal
    Why not more? Because that's the largest a swap partition can be. Why not less? Because disk is cheap. It has little to do with the amount of ram in the machine either, because it's easy to add more ram, but a bit harder to repartition for more swap.

    Here's a real-life example of why swap is useful. One machine I manage has a gig of RAM. At the time of purchase, that seemed quite reasonable. But the users are working on a project that takes 2 gigs of RAM. So currently it's using a gig of the swap. Yes, that's bad, and I'll be adding a second gig to it in a few days (it's in the mail). But in the meantime, that swap space is really handy. It means the users can get their work done! Think of the first 256M of swap as being for speed. If you're regularly using more than that, then it's time to order more RAM. But it's nice to have the spare gig of swap for odd jobs, or while you're waiting to install the RAM.

    I'm no expert, but I think a lot of these arguments could be resolved if people took advantage of ulimit constraints. If you can limit how far a program can get out of control, then there's no longer a concern about a single user sending the server into swap hell. One of my current projects is to figure out reasonable limits.

  • by Black Parrot ( 19622 ) on Saturday May 29, 2004 @02:36AM (#9283734)
    This may be slightly off topic...

    Running KDE 3.2.1 now, I notice it takes longer to open apps than it does in Windows. Mozilla, for example, takes literally a few seconds longer to open each window than it did in Windows. Another thing Windows does is start an app faster when you run it right after closing it. Say, for example, in Windows I run Mozilla, close it, then open it again. The second time, it opens almost instantly. In Linux, however, it seems to take the same original amount of time to load completely. I'm sure it has to do with an entirely different process of loading programs, but apps always seemed to open faster in Windows than in Linux, in my view.

    Then again, graphics used to be in the NT kernel and that's what made it appear fast, but it led to a lot of problems and crashes, so maybe the longer load time is worth the wait when compared to a reboot.
    Conventional wisdom is that Windows uses lots of hacks to make it "look" faster in the way you describe, without regard to the cost they impose on other operations. I'm almost certain that XP keeps some applications in memory after you "exit" them. Sometimes I notice that something won't work after running certain big applications, suggesting that sufficient resources haven't been released. Also, sometimes a shutdown complains about an application that won't respond even after you've closed everything. I think they're hoaxoring people into thinking they've got a fast system, when they're really just robbing Peter to pay Paul.

  • Just FYI (Score:2, Interesting)

    by slittle ( 4150 ) on Saturday May 29, 2004 @02:57AM (#9283792) Homepage
    I've run my Linux systems without swap for years (since 2.2) without any problems. Of course, I make sure I have way more RAM than I am likely to need (the stuff is practically free these days; but OTOH, so is HDD space...).

    Simply put, you need enough 'memory' to hold all the stuff you want to run, plus caches. For a given task, you might go for a system with 512MB RAM and a 512MB swap, and I'll just go for 1GB RAM and forget the swap. The only difference is that if/when your system comes up against its RAM limit, it's going to start slowing down. When it starts using a lot of swap, it's going to crawl. But it'll still run. Until you run out of both.

    Mine will run like blazes up to the 1024MB limit, then barf. No warnings like with swap.

    So if you want an early warning sign, use swap. If your needs are well known and won't push beyond the limits of your hardware, don't bother.

    You can always add a swap file later/on the fly as your needs change anyway:

    dd if=/dev/zero of=/data/swap bs=1M count=512
    mkswap /data/swap
    swapon /data/swap
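
    To make such a swap file survive a reboot, the usual approach (path as above) is an /etc/fstab entry along these lines:

    /data/swap  none  swap  sw  0 0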
  • One of the tricks.. (Score:2, Interesting)

    by Sir Pallas ( 696783 ) on Saturday May 29, 2004 @03:13AM (#9283823) Homepage
    ..that I think is spiffy is using the partition I would normally use for /tmp as swap. Then, I mount a tmpfs of that size on /tmp. This gives a large performance improvement for anything that uses a lot of temp space, because everything /tmp would normally handle is done in RAM until RAM fills up, at which point we're back to using the disk.
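
    A sketch of that arrangement in /etc/fstab (the partition name and size here are made up for illustration):

    /dev/hda5  none  swap   sw         0 0
    tmpfs      /tmp  tmpfs  size=512m  0 0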
  • by Listen Up ( 107011 ) on Saturday May 29, 2004 @03:22AM (#9283841)
    They are not theorems, but conjectures. A theorem and a conjecture are not the same thing. No one to date has posted a proof.
  • by kasperd ( 592156 ) on Saturday May 29, 2004 @03:45AM (#9283889) Homepage Journal
    Yes, it is swapping because it is trying to free up "low memory", of which you have less than a gig.

    Actually this sounds likely, but is it a good idea? Alternatively it could do a memcpy of your data from low memory to high memory. So now you have the choice between occupying the CPU to perform the memcpy, or occupying the disk controller to swap it out. But data that you could swap out is process memory, which you'd expect to be allocated from the high memory. So how do you actually reach a situation where process memory pages end up in low memory? You'd have to fill up the high memory first. Of course if you run a program that requires a lot of memory, which is all allocated from high memory, then other programs might get low memory. When the first program terminates you could have a lot of free high memory and other programs still taking up the low memory.
  • by pantherace ( 165052 ) on Saturday May 29, 2004 @03:50AM (#9283901)
    As I recall (I haven't looked at this recently), the chips in question (8088 & 8086) were designed with the capability to address 1MB of RAM. IBM reserved the top 384KB or so for ROM & system calls. So you were left with (1024KB-384KB) 640KB to use as actual RAM.

    That's part of the reason why the 4GB addressing limit matters, and x86 is really hurt performance-wise if you have more than a GB or 2, even below the physical 4GB limit (which can be extended via Intel's extensions; neither limit exists in native AMD64, or Intel's semi-copy of it). x86 now relies on paging & virtual memory spaces, with upper addresses reserved for libraries & kernel calls. This mapping may take up a fair amount of space, and when manipulating large data sets (very large images, databases & other stuff) this becomes problematic because of the virtual 4GB limit: the physical limit may not have been reached, but the virtual limit is. That doesn't mean more RAM isn't faster, but it does mean that there is a speed hit in some cases.

    If Bill Gates said it (he has denied it, but it's been around a LONG time), it may even have been said in resignation, possibly prefaced with an "Oh well, " or something like that.

  • Re:No more swap! (Score:3, Interesting)

    by egomaniac ( 105476 ) on Saturday May 29, 2004 @04:22AM (#9283978) Homepage
    Generational garbage collectors, such as the one used in the JVM, screw up swap. It seems like there is a conflict between what the OS is trying to do with swap and what the JVM is trying to do with GC. I would rather let the GC win in this so the application runs fast.

    You are absolutely correct that garbage collectors play hell with swap. It's pretty easy to understand why: to determine what is garbage and what is not, the garbage collector has to check every live object and see what it holds pointers to.

    Think about that one for a sec -- the garbage collector has to look at every single live object on the heap during every garbage collection pass. This means that any pages which were swapped out have to be fetched from disk, so you end up (usually) loading the entire heap back into memory during garbage collection.

    The aforementioned problem is true of all accurate garbage collectors. The other problem depends on the exact sort of garbage collector, but in general live objects are moved around in order to clean up holes in the heap (think of it like compacting a database). This can give you another "scan the entire heap" situation.

    The only real exception to this rule is that large data structures (such as the pixel data for an image) that do not contain pointers, and thus do not have to be examined, can remain swapped out if they aren't relocated during a particular garbage collection pass. The first page of the data structure must always be loaded no matter what, hence the "large" (really, multi-page) disclaimer.

    An OS based on a GCed language such as Java will probably have to come up with some really innovative tricks for managing swap, or just do without.
  • by dabraun ( 626287 ) on Saturday May 29, 2004 @04:58AM (#9284065)
    One of the things that makes XP start apps very quickly is this:

    It watches applications startup and monitors what they read from the disk - it notes this in a log. During idle time it moves the sectors around on disk so that they will all be in the same place for the next time you start that app. When you start the app later it runs out and reads everything that it believes the app will want to read all at once. This pre-reading and disk order optimization makes XP start apps a heck of a lot quicker than previous versions of the OS did.

    It also does exactly the same thing for the boot process. There's even a tool you can download from MS's web site that will allow you to force the system to clear what it thinks about the boot process, reboot, and force the ordering to take place immediately rather than during idle time.

    David
  • by Lumpy ( 12016 ) on Saturday May 29, 2004 @05:13AM (#9284089) Homepage
    IMHO for a desktop system this makes no sense, that's why I run my 1GB RAM machines with zero swap.

    fine for you being a typical home user not doing much with your PC.

    now with me editing 4GB video clips, rendering a 2GB CG clip, or trying to process a large rotoscoping project in film-gimp, 1GB of ram is consumed 3 minutes after I sit down at that machine.

    I have 2GB of ram + 4GB of swap and I can easily fill it all up using either Blender, Film-Gimp or any of the other tools I use.

    and I'm betting that many others here who actually use their computers for real work, instead of what the typical home user uses them for, will also chime in.

    Basically, when you hit the top of your ram... hell will break loose on your machine... I can't afford to lose my work when I run out of ram, so I use swap to get more done.
  • by Alioth ( 221270 ) <no@spam> on Saturday May 29, 2004 @05:19AM (#9284099) Journal
    The 2.6 kernel now has a swappiness setting in /proc where you can tell the kernel to avoid swapping (set it to zero) or to swap like mad (set it to 100). Therefore you can tune your system to your specific needs. It'd be nice if they had a similar control for the filesystem cache.
  • Re:IMHO (Score:3, Interesting)

    by Amorpheus_MMS ( 653095 ) <amorpheusNO@SPAMgmail.com> on Saturday May 29, 2004 @05:50AM (#9284150)
    Setup B) 2GHz Intel, 1gb ram, 40gb hdd, swap 2gb.
    Taking out the swap in that machine and the system ran fine. Even running Half-Life: Counter-Strike via WineX by transgaming.


    Do try that with Far Cry; I'm curious whether you'll notice a difference there. It recommends 1GB of RAM, and it certainly counts on unused memory being swapped out.

    Personally, I think games are the one reason why swap is still very useful. You either run your desktop programs, or a game - not both. Buying enough RAM to hold everything at once is wasted money.
  • Reasons for swap... (Score:5, Interesting)

    by emil ( 695 ) on Saturday May 29, 2004 @06:03AM (#9284172)

    I don't know if Linux works this way, but...

    1. The mmap() system call, which allows you to treat a file as an area of memory and manipulate it with pointers in C, oftentimes copies (portions of) the file into swap.
    2. Many systems, when you execute a binary obtained over NFS, will cache the binary in swap in hopes of preventing further transfers over the network.

    UNIX kernels have assumed the availability of swap for nearly 35 years. You cannot remove this major architectural feature without unintended side effects.

  • by kasperd ( 592156 ) on Saturday May 29, 2004 @06:17AM (#9284205) Homepage Journal
    True, fragmentation of memory does not mean a performance decrease. But fragmentation can still be a problem in some cases. For example, if you need to allocate multiple pages at once and only have scattered single pages available, the allocation will either fail or you will have to free some memory. This is one of the reasons the task struct + stack allocation in Linux has finally been reduced from two pages to just one (on x86, that is; I don't know about other architectures).

    Another possibility, which has been suggested for 2.7, is defragmentation of memory. Of course, just because it has been suggested doesn't mean it is going to happen. Without defragmentation, what are your options for satisfying a larger allocation when memory is fragmented? You'd have to free some memory, either by reducing the disk cache size, swapping out anonymous pages, or finding some slabs that can be freed. Notice that with more memory there would be more possible choices for what to free, so there would be a better chance of picking something you won't need in the near future. If you used some RAM for a swap ramdisk, you should not expect to use all of that RAM. So effectively you are using less of your memory, which again could mean a smaller chance of finding what you need in RAM. The failing allocation might instead be satisfied by removing a page from the page cache, which is certainly less desirable.

    Of the given suggestions, defragmentation of memory is probably the only one that shouldn't cause performance problems in specific corner cases. (I'm thinking back to the good ol' days with MS-DOS and AmigaOS, where the only solution to memory fragmentation was a reboot.)
  • A swapless system won't be faster for the same workload, usually the contrary, in fact, since lack of swap denies the system the opportunity to optimize RAM hit ratios.
    Agreed. This is the real reason to have swap space: so you can run more applications than you have resources for. It also allows the system to push applications that are not doing much out of the way while they are stalled (waiting for a resource) or otherwise not running (a process like, say, a database server that is sleeping until it gets a record or SQL request), so they don't consume resources while not operating.
    As for the idea of putting swap on a RAMdisk, it is completely brain-dead (unless you have exotic memory arrangements such as NUMA) - the kernel is going to waste a lot of time copying memory from the active region to the ramdisk region and back. A straight swapless system will be preferable.
    On this I am going to have to disagree with you. If you have some swap, the system can move least-used pages out of the way as it runs out of primary memory, or as it notices rarely-running processes that can be shunted off to release primary memory to processes that are running. Disabling swap altogether means the system has to run out of resources, attempt to swap them out, discover it has no swap, then kill something or refuse to honor a request for more resources to make room. With swap, even if it's to a ramdisk, the system can remove processes at the high-water mark, and if the hits aren't too high, it is conceivable that performance might be better than on a system that has no alternative but to hit the hard limit and run out of resources, rather than crossing a soft limit and avoiding a starvation condition.

    Under normal circumstances it would make sense that having all available memory would make more resources available than stealing some to make a virtual memory swap space. But since most operating systems are designed to swap pages out as they become unneeded, or when processes start to hit the high-water mark, the overhead of the swap manager running and being unable to do anything (due to having no swap at all) just might be higher than the small amount of time needed to do some unnecessary copying of memory to swap out rarely-used pages.

    Short of someone running a test on a machine with no swap at all vs., say, a tiny amount of RAM used as a ramdisk (say 5 meg on a 1 GB machine), it's probably an academic argument to say flat out that no swap will always provide better performance than swap to ramdisk, especially if the kernel is designed to expect to have swap around.

    If the kernel is designed to only swap out on resource shortages and not to optimize running processes as well, then swap to Ramdisk is a brain-dead operation. But I suspect the actual method of operation is a little more complicated than mere copy-on-resource-shortage, and thus it is conceivable that swap-to-ramdisk may provide better performance than no swap at all.

    Paul Robinson <Postmaster@paul.washington.dc.us>
  • Swap Partitions (Score:3, Interesting)

    by HeghmoH ( 13204 ) on Saturday May 29, 2004 @07:28AM (#9284324) Homepage Journal
    I haven't touched Linux for several years, although I used to do serious work on it.

    I take it from the tone of the discussion that Linux still uses separate swap partitions? Why? My main machine now runs OS X, which swaps into the filesystem, and that seems to work a lot better. The system can decide what it needs to use, and I don't have to make a decision. I recall that Linux supports swap to the filesystem, but it sounds like nobody actually uses this feature. I can somewhat understand a server using a swap partition, since the needs of a server would be more or less known in advance and I assume it's marginally faster, but I don't see any reason to use one on a desktop machine. Why is everybody still using dedicated swap partitions?
  • by swilver ( 617741 ) on Saturday May 29, 2004 @07:54AM (#9284369)
    I have a Windows XP box and a Linux box (2.6.4 kernel), both with 1 GB and both running without swap. The reason for this is simple: when I have my systems running a while doing nothing but serving files (slow downloads, or simply watching a big 2-hour movie), the machines will both be totally unresponsive when I get back to get some real work done; literally everything needs to be swapped back in, because the machines use like 800 MB of RAM for cache buffers.

    Both OS's have filled their RAM with completely useless cached files (part of a 1 GB+ AVI for example, that I will most likely not be watching again for several months), swapping out all the programs I have running.

    Both OS's really need to learn how to deal with slow I/O. If I/O is only being done at a rate that is a fraction of my harddisk's speed (say 300-400 kB/sec), which is the case for stuff like watching a movie, playing music, or serving an upload over DSL, then this data is really not worth caching for longer than a few minutes. Even if I do need it again, it will probably again be at just 300-400 kB/sec, something a harddisk can take care of quite comfortably.

    --Swilver

  • Photoshop...etc (Score:2, Interesting)

    by Thaidog ( 235587 ) <slashdot753@@@nym...hush...com> on Saturday May 29, 2004 @08:55AM (#9284495)
    Any machine that deals with large files will still need swap space... Photoshop when dealing with large image files, etc.
  • nocache directive (Score:5, Interesting)

    by Stephen Samuel ( 106962 ) <samuel@bcgre e n . com> on Saturday May 29, 2004 @09:34AM (#9284600) Homepage Journal
    One of the errors that I see is that Linux doesn't handle the read-once case very well.

    Once in a while I'll do something like 'grep -r "oops" /big/filetree'. The fact of the matter is that I'm probably only reading any of that data ONCE, and it's not going to all fit in memory anyways, so I don't even gain anything if I run the grep a second time.

    In a situation like that, I'd like to have some sort of 'nocache' directive that says 'Don't waste the cache with this'.

    Something else that might help would be some sort of 'minprog' directive which would tell the swapper that a certain amount of space is reserved for 'program' data (i.e. code (including shared libs) and data), and that that memory shouldn't be swapped out in favour of something otherwise being read from disk. I think this might avoid the situation I sometimes run into of a large program (mozilla/gimp) being unresponsive after I do some other disk-intensive task (like the aforementioned recursive grep).
    Having the OS enforce things like the RSS rlimit hints would also help. (I hadn't previously realized that it didn't.)

  • My experience (Score:5, Interesting)

    by jmichaelg ( 148257 ) on Saturday May 29, 2004 @09:43AM (#9284632) Journal
    My first job as a sysadmin was on a Burroughs 7700. My employer sent me to a week-long class on tuning the OS, to help the company deliver a turnkey app that met some performance specs. It didn't matter what I did to the working set/swap settings - the thing was pig slow. The older guys in the class, who had admin experience on IBM 370s, were constantly complaining that the Burroughs OS was doing a worse job deciding how to allocate RAM than they could, and that it was making them look bad because the boxes wouldn't deliver the throughput they had had with supposedly inferior IBM hardware. As you can imagine, it was a very contentious class.

    My boss started worrying that we weren't going to be able to deliver what the company had contracted to deliver. He was the antithesis of a PHB and so he sat down and in a few hours wrote a small driver to emulate the overall task the project had to accomplish. No detail, just broad brush emulation. He was able to demonstrate with a few lines of code that nothing we could do would hit the delivery spec. Burroughs responded by doubling the amount of RAM on the box as well as installing RAM that was twice as fast as what they had initially delivered. The combination enabled us to turn off swapping and deliver a working product.

    Fast forward to 2004, and I'm working on Excel spreadsheets that have 60-70 sheets in a workbook. Saving the book is a bitch - a 15-20 second wait after I hit ctrl-S. Every so often, Excel just goes away as it performs a prophylactic background save in case Excel dies. 15-20 second pauses, because the software has become so bloated that saving a 2-3 meg document is an excuse to flog the poor drive into a seek frenzy. The drive, which was about 4 years old, finally gave up the ghost. Its replacement has an 8 meg cache separate from the 512 megs Windows manages - that "little" 8 meg chunk of RAM belongs to the hard drive alone. Night and day performance difference. The Excel swap frenzies that were induced by a simple ctrl-S are gone. 3 meg documents save in under a second - just what you'd expect from a drive that has a transfer speed in excess of 60 Mbytes/sec.

    My sense is that swap has always been a kludge. It's an attempt to squeeze more data into a machine that has only so much space. The working-set graphs look pretty, but they seldom describe what is happening day to day. Trading 2-nanosecond response for a 5-millisecond seek is seldom going to be a good trade. Bottom line from that OS class 35 years ago? Keep your working set smaller than your physical memory and your machine will remain responsive. Just what the old IBM geezers were saying in the first place.

  • Re:IMHO (Score:3, Interesting)

    by Jeff DeMaagd ( 2015 ) on Saturday May 29, 2004 @10:09AM (#9284708) Homepage Journal
    I'm sick of the speculation. Maybe Linux has some key benefits that make swap useful on a machine that has more memory than it needs to operate. I'd like to see some evidence of whether those techniques actually make any difference.

    Is there anyone willing to take two identical machines and run a full Gentoo compile, with and without swap, with 256, 512 and 1024MB RAM installed, and time it? If swap really does make a difference, I think that sort of thing would help tell when swap is or is not useful on currently available systems. I'd love to do it, but I simply don't have a good enough internet connection.
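
    A rough harness for one such run (a sketch only; the choice of compile job, and rebooting between runs to clear caches, are my assumptions rather than a standard procedure):

    swapoff -a                    # run 1: swap disabled
    time emerge --oneshot gcc     # any fixed, large compile job will do
    # reboot here so both runs start with cold caches
    swapon -a                     # run 2: swap enabled
    time emerge --oneshot gcc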
  • by WolfWithoutAClause ( 162946 ) on Saturday May 29, 2004 @10:53AM (#9284836) Homepage
    The aforementioned problem is true of all accurate garbage collectors.

    Whilst that's strictly true, some modern languages use generational garbage collectors that segregate objects in memory according to age. Only when an age group gets full do they sweep through it, moving any surviving objects up to the next age group.

    This heuristic works exceptionally well, runs fantastically quickly, and hardly ever triggers significant swapping.

    There are some circumstances where it runs slowly, but in the worst case the performance is similar to simply doing a full garbage collection. These situations are pretty rare; objects generally segregate very well into young/old or young/middle-aged/old categories - the vast majority of objects die very young.

    Sad isn't it.

  • hmm (Score:2, Interesting)

    by mAineAc ( 580334 ) <mAineAc_____&hotmail,com> on Saturday May 29, 2004 @11:58AM (#9285085) Homepage
    After reading this I thought to look at my laptop, and to my surprise I didn't have any swap. It turns out a couple of weeks ago I was playing with swsusp, and it had corrupted my swap space, so it wasn't loading. I never noticed, but after the fact I realized some of the stuff that was going on was because of this. I tried opening a large PDF file and it took forever to load and seemed to almost lock up my system. The same thing happened when compiling a few programs, and other similar things. I guess with 196 MB of RAM you want to have some swap :)
  • Re:No more swap! (Score:3, Interesting)

    by Unknown Lamer ( 78415 ) <clinton@nOSpAm.unknownlamer.org> on Saturday May 29, 2004 @12:00PM (#9285092) Homepage Journal

    A generational garbage collector does not always have to check every single live object. It allocates objects in different generations (e.g. a new generation may be started for every 5M of memory allocated by the GC), and the newest generation is scanned first; older generations are only scanned if memory cannot be found in the newer ones.

    This (mostly) alleviates the problem with straining swap because the GC is mostly scanning recently allocated memory that is probably still resident.

  • by spitzak ( 4019 ) on Saturday May 29, 2004 @12:10PM (#9285119) Homepage
    Unfortunately, that bloat is also *fragmented*. Even a 4-byte structure that is still in use, buried in a page, will keep that page swapped in. In my experience, the only time app pages get swapped out is when the app is idle.
  • by ceswiedler ( 165311 ) * <chris@swiedler.org> on Saturday May 29, 2004 @01:20PM (#9285417)
    Adding RAM always helps. No one ever says that swap is BETTER than RAM. Having X+Y RAM is better than X RAM + Y swap. However, having X+Y RAM plus Z swap is better yet.

    Sure, add more RAM. But swap will always be useful, because there's always some stuff which is better off on disk, because it hasn't been used in forever; until your RAM is larger than your HD, you'll get better mileage out of that RAM if you use it as a cache.
  • by haggisman ( 682031 ) on Saturday May 29, 2004 @08:29PM (#9287397)
    >>Surely if your system runs out of RAM it shouldn't die? The runaway process, sure, but the OS should be able to reclaim some RAM from that and manage to carry on, no?

    Not Windows XP, for sure... I edited an ISO image using 2 different editors, and instead of the editors barfing, XP froze solid each time.

    Scotty
