

Tuning Linux VM swapping
Lank writes "Kernel developers started discussing the pros and cons of swapping to disk on the Linux Kernel mailing list. KernelTrap has coverage of the story on their homepage. Andrew Morton comments, 'My point is that decreasing the tendency of the kernel to swap stuff out is wrong. You really don't want hundreds of megabytes of BloatyApp's untouched memory floating about in the machine. Get it out on the disk, use the memory for something useful.' Personally, I just try to keep my memory usage below the physical memory in my machine, but I guess that's not always possible..."
Ob. /. joke (Score:2, Funny)
God no... (Score:5, Interesting)
I absolutely despise the way that XP swaps out applications in order to make the disk cache larger. I have 1GB of RAM on my machine precisely so I don't have to wait two minutes for it to swap my web browser back in after it's swapped out... yet if I copy a 2GB file from one drive to another, the stupid operating system will swap out all the applications it can just to make the cache larger.
Please, please, don't take Linux down the same braindead route as Microsoft has done for XP. It's utterly insane to swap out my browser so that a 2GB file can be copied two seconds faster when I then have to wait two minutes for the browser to swap back in. Or at least provide some kind of '#define STOP_VM_SWAPPING_STUPIDITY' so that I can disable it.
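For what it's worth, 2.6 kernels do expose something close to that switch: the /proc/sys/vm/swappiness knob discussed further down in this thread (0 = avoid swapping processes out, 100 = swap eagerly). A minimal sketch of forcing it low from C, assuming a 2.6 kernel and root privileges:

/* swappiness_off.c - minimal sketch: write a low value to the 2.6
 * kernel's swappiness knob. Run as root. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/swappiness", "w");
    if (!f) {
        perror("open /proc/sys/vm/swappiness");
        return 1;
    }
    /* 0 = strongly prefer dropping cache over swapping out processes */
    fprintf(f, "0\n");
    fclose(f);
    return 0;
}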
Re:God no... (Score:3)
Re:God no... (Score:3, Informative)
Anyone who thinks that writing a page to disk immediately wipes the RAM it came from is mistaken; here is how things really work: the page is written out to swap but remains valid in memory.
The real question is whether this paged-out but still-valid RAM should be given to other processes. I would say it is a tuning thing; there is no single method that will be best for all situations.
Whoever modded this insightful needs to do a little reading about operating systems.
Re:God no... (Score:2, Interesting)
I use XP extensively and it is very aggressive at swapping stuff out. However, I've never had problems with any application besides Mozilla.
Re:God no... (Score:4, Interesting)
Re:God no... (Score:4, Interesting)
The only way to stop this madness on XP is to turn off the swapfile. I'd REALLY hate to see Linux go down this route. Big bloaty applications need to stay IN MEMORY unless there is memory pressure being exerted on the system. That is the only time swapping should occur.
Stupid Windows Kernel Swapping (Score:5, Insightful)
Whatever swapping scheme is used in Windows, I do not know, and I don't care what it's called either.
What I can't stand is the fact that I've got >300MB of free physical memory, and 20MB of the kernel is still swapped out. The result? Do this, do that (any minor thing) and you have to wait for it to swap back in.
In the end, I have never ever seen a Windows-system without a partially swapped kernel, even with tons of free RAM available.
This is just plain stupid, or is there some sort of "smart" explanation for this?
I, for one, would hate having to turn off virtual memory just to have the system kernel loaded at all times... And GOD BE DAMNED if Linux makes the same stupid design decision.
Re:Stupid Windows Kernel Swapping (Score:3, Informative)
When you have a bunch of lazy, slacker, multi-megabyte services running in the background, waiting for that once-in-a-blue-moon event that requires their help (yes, I'm talking about YOU spoolsv.exe, you 3.98MB hog!), you might as well shove them into the swap file. Windows can end up with an unGODLY
Re:God no... (Score:5, Informative)
My kernel has autoswappiness enabled so it figures out the number on its own. I'm running at 64 ATM on a 256 Meg system (ram donations accepted)
Re:God no... (Score:3, Funny)
It's impossible to help people in XP.
Re:God no... (Score:4, Informative)
Re:God no... (Score:3, Funny)
Re:God no... (Score:3, Funny)
Comment removed (Score:5, Interesting)
Re:God no... (Score:2, Informative)
cat 0 >
and I have instant control over the performance of my machine. In fact, I could even write wrappers for specific programs so that they tune the system's swappiness to better suit them, e.g. programs that use huge amounts of memory get less swappiness, programs with repetitive disk access get more
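A sketch of what such a wrapper might look like in C, assuming a 2.6 kernel with /proc/sys/vm/swappiness and root privileges (the program name and usage are made up for illustration):

/* swapwrap.c - sketch of the wrapper idea above:
 *   swapwrap <swappiness 0-100> <command> [args...]
 * Saves the current swappiness, sets the requested value, runs the
 * command, then restores the old value on exit. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int read_swappiness(char *buf, size_t len)
{
    FILE *f = fopen("/proc/sys/vm/swappiness", "r");
    if (!f || !fgets(buf, (int)len, f)) { if (f) fclose(f); return -1; }
    fclose(f);
    return 0;
}

static int write_swappiness(const char *val)
{
    FILE *f = fopen("/proc/sys/vm/swappiness", "w");
    if (!f) return -1;
    fprintf(f, "%s", val);
    fclose(f);
    return 0;
}

int main(int argc, char **argv)
{
    char saved[16];
    int status = 1;
    pid_t pid;

    if (argc < 3) {
        fprintf(stderr, "usage: %s <swappiness 0-100> <command> [args...]\n", argv[0]);
        return 1;
    }
    if (read_swappiness(saved, sizeof saved) < 0 ||
        write_swappiness(argv[1]) < 0) {
        perror("/proc/sys/vm/swappiness");
        return 1;
    }

    pid = fork();
    if (pid == 0) {                     /* child: run the wrapped program */
        execvp(argv[2], &argv[2]);
        perror("execvp");
        _exit(127);
    }
    if (pid > 0)
        waitpid(pid, &status, 0);       /* parent: wait for it to finish */

    write_swappiness(saved);            /* put the old value back */
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}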
Re:God no... (Score:2)
I have the same problem with Retrospect backup software. One run and XP swaps out everything: system, applications, Explorer, etc., so I have to wait >5 minutes before I can use the system again. This is with 1GB of physical RAM installed.
The solution would be to limit the disk cache to a reasonable size. I can see that servers would want all RAM for caching, but desktops? Probably not. There should be a limiting percentage, like 10% of RAM. 100MB is plenty of disk cache for my use...
be su
Re:God no... (Score:2)
NT is supposed to maintain *small* disk caches to avoid the situation you're talking about, whereas Linux has always had a less conservative policy of using pretty much all available RAM for disk cache and pushing things out when needed.
I would actually be pretty surprised if that was the case... the OS SHOULDN'T kick programs out for disk cache except under extreme situations. For all the shit we give MS
Re:God no... (Score:3, Interesting)
Re:God no... (Score:2)
like
Re:God no... (Score:5, Informative)
Then, make the following changes to the registry:
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown, set to 1. I don't shut my machine down very often, but occasionally XP will increase the size of the pagefile if it absolutely needs to depending on circumstances. This forces it back to the size you want it when you restart.
HKLM\System\CurrentControlSet\Control\FileSystem\
HKLM\System\CurrentControlSet\Control\FileSystem\
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Contr
Set to 4096 if you've got more than 32MB of RAM
Set to 8192 if you've got more than 64MB of RAM
Set to 16384 if more than 128MB
Set to 32768 if more than 160MB
Set to 65536 if more than 256MB
Set to 131072 if more than 512MB
This changes the maximum number of bytes that can be locked for I/O operations. The default is 512KB. While the above are the usual recommendations, I've found stepping down one level provides the best performance for my needs; YMMV. (For example, I have 256MB, but I set my I/O limit to 32768.)
HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management\DisablePagingExecutive, Set to 1 to disable paging of the kernel.
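If you'd rather script these than click through regedit, the same value can be set through the Win32 registry API; a minimal sketch for the DisablePagingExecutive tweak only (illustrative, back up your registry first):

/* disable_paging_exec.c - sketch only: set DisablePagingExecutive=1 via
 * the Win32 registry API instead of regedit. Build with MSVC or MinGW,
 * link against advapi32, and run as Administrator. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    DWORD one = 1;
    LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
        "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
        0, KEY_SET_VALUE, &key);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegOpenKeyExA failed: %ld\n", rc);
        return 1;
    }
    rc = RegSetValueExA(key, "DisablePagingExecutive", 0, REG_DWORD,
                        (const BYTE *)&one, sizeof(one));
    if (rc != ERROR_SUCCESS)
        fprintf(stderr, "RegSetValueExA failed: %ld\n", rc);
    RegCloseKey(key);
    return rc == ERROR_SUCCESS ? 0 : 1;
}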
There, that wasn't so hard, was it? For those who want to flame that statement, keep in mind that the information above is easier to find than some of the tuning suggestions I've heard for Linux. I've used Linux for 10 years, and only today heard about
Re:God no... (Score:3, Insightful)
When I just searched for '/proc linux vm swap' in Google,
I can tell you one thing. I would rather poke around the
Re:God no... (Score:4, Interesting)
The above hacks aren't for users, they're for administrators and geeks. The average user will boot their machine, do what they have to do, and shut it back down. Those of us who aren't just users like to leave our machines on for months at a time, and these tweaks help with that. If you were doing tech support, then you'd know them. If you ARE doing tech support and don't know them, please consider another field. These are the basics... IT is already filled with enough paper MCSEs who can't spell NT unless it's in the 6-week course.
When I just searched for '/proc linux vm swap' in Google,
No, when you know EXACTLY what you're looking for, it never is. Now, search for +linux +performance +tweaks, and tell me if it shows up. Didn't, did it? Now, search for +windows +performance +tweaks. How many of those pages DIDN'T list the tweaks I just gave? Not many.
I can tell you one thing. I would rather poke around the
Because the difference is...? One's a collection of key-value pairs organized in a virtual filesystem analogy and another is a collection of key-value pairs organized on a filesystem? Or, is it because MS puts a warning that if you don't know what you're doing, editing the registry can fuck your system, but the Linux developers fail to give you the same warning?
By the way, if you are not shutting your XP system down often, you must not be rebooting for the security patches, and that can be a problem for everyone.
Could be, but I keep my machines fairly secure to begin with, and few of the security patches issued by MS affect well locked-down machines. They're more for users' PCs, like yours. Also, the last few security updates I've done haven't required a reboot. Unlike the latest kernel updates...
claiming to release within hours versus the weeks they claim FOSS takes
Or, years. How long was that latest flaw in the kernel sources that took down the Debian servers? Years? I thought the "many eyes" theory said something like that wouldn't reach production because there are so many people reviewing the code. I'll give you a clue: just 'cause the code's available doesn't mean many more people outside the development team are looking at it. Most are doing
Re:God no... (Score:3, Insightful)
Re:God no... (Score:4, Interesting)
Maybe 2.4 does, but my 2.2 system has never suffered from this problem (haven't got around to upgrading it yet). Small amounts get swapped out, but nothing noticeable in interactive use.
"Why waste ram that could be used for live data?"
Because when I want to use my web browser again after playing a game for an hour, I don't want to have to sit there for two minutes watching it slowly swap back in... interactivity is far more important to me than small performance benefits from an extra 64MB of disk cache.
How fast is swapping really these days? (Score:2)
Back when P90s were the norm, was RAM access about as fast as disk access is today?
Re:How fast is swapping really these days? (Score:5, Informative)
Re:How fast is swapping really these days? (Score:3, Informative)
Re:How fast is swapping really these days? (Score:5, Insightful)
Re:How fast is swapping really these days? (Score:3, Informative)
It has awesome memory checking, cache profiling and heap profiling.
I'm just a satisfied user; I have no relationship with the Valgrind developers.
Re:How fast is swapping really these days? (Score:3, Interesting)
This reminds me of an old convo I had ... (Score:5, Interesting)
She had just procured a new Sun machine with 2 GB of RAM. Mind you, disk space hadn't grown all that significantly and you could still get machines with 9 GB drives.
The original practice was to make swap 2x RAM. So when the student she had setting up the machine came to her and asked, "What do I make swap?" she responded, "Twice the RAM."
He said, "Are you sure? That's like almost half the boot drive."
She thought about it for a second and said, "Oh, yeah. I guess just make it the same as the RAM."
So this raises the questions: What do you make your swap now? When does your rule of thumb change? And remember when you could run a "fast" Linux box on a P100 with 64MB of RAM and 128MB of swap?
How much swap? (Score:2)
I find that Linux just isn't that good at paging. I never use a significant portion of my 2GB swap partition, and memory contention is still high sometimes. Hmm... Maybe I do need to adjust the swappiness value.
Re:This reminds me of an old convo I had ... (Score:2)
I then run a defrag program that moves the swap file to the inner tracks of the HDs.
Re:This reminds me of an old convo I had ... (Score:2)
Paging vs Swapping (Score:2)
That said, you might want to look into a recent Solaris Internals book or course, and also look into the history of things like priority_paging and page coloring
Re:This reminds me of an old convo I had ... (Score:5, Interesting)
When SunOS5 rolled around, this was no longer necessary, and your swap is additive, so you only need as much swap as, well, you actually need.
On my Linux firewall system with 256MB of real RAM, I have 512MB of swap space. On my Windows system with 1GB of real RAM, I have 768MB of swap space. This number is actually a hold-over from when I only had 512MB of RAM; I could probably decrease it to just about nothing now.
Amusingly enough, my system has ~480MB of real RAM free and is using 701MB of my paging file. Go Windows! Like I need 480MB free all the time. Still, it is nice not to have to swap something out if I start a big application, but Windows is awful about returning from swap.
Some other more or less useless data points: my Indy (running Gentoo) with 128MB has 256MB of swap, which has been enough. I probably could have gotten away with 128MB, but believe it or not, my primary concern is whether I'll be able to compile some of the biggest C++ programs without the larger amount of swap. Certainly 128MB will not do it, even when you are booted from the Gentoo installer CD and there's not much running.
Re:This reminds me of an old convo I had ... (Score:2)
Re:This reminds me of an old convo I had ... (Score:2)
I am planning to add more Indys to my stack and cluster 'em. A friend of mine has some R4600PC Indys he's not using, and he plans to give me a few of 'em as soon as I make a four-hour drive to go pick them up.
Re:This reminds me of an old convo I had ... (Score:3, Insightful)
Re:This reminds me of an old convo I had ... (Score:5, Interesting)
Now that I have a newer machine, and RAM prices have increased (had to replace SDRAM with DDR), I only have 512 MB in my home machine. It seems to be nearly as responsive, practically never needing to touch the swap; I've only ever seen it use a few MB of the swapfile. When partitioning my Linux drives, I almost always have more than one drive in the machine. HDA1 normally gets the root partition. HDB1 is normally my swap, in the first 512 MB of the drive, followed by home on HDB2. This setup keeps everything snappy.
Even on my work machine, which is only a P3 450 with 256 MB of RAM, things operate quite well under GNOME 2. I have two drives in that machine as well, and the swap is on a separate drive from the root partition. Programs can load from one drive while simultaneously swapping (if necessary) to the second drive. Even with GNOME 2 running, in addition to my browser and several other apps, only a few KB of space is being used on the swap.
I can't see most desktop Linux users needing more than 512 MB of swapfile space, assuming that they have at least 256 MB of RAM. The general rule of thumb, though, is to put the swap partition at the front of the drive for the best performance, in the event that it does need to get used.
I've really been impressed with Linux's memory management, even in the 2.2/2.4 series kernels. I've heard that 2.6 even makes some improvements as well. When I used Windows 2000, on the other hand, it INSISTED on using the swap even with a gig of RAM, even after I tweaked it for the best performance. I even used a RAID0 array, and Linux is still faster and more efficient at managing memory WITHOUT the RAID array. I was surprised that the array wasn't even really needed on Linux for fantastic disk access speeds with my 3 year old 7200 RPM drives.
Of course, the rules will be different for server applications; more swap is probably a necessary thing there. It's possible, however, that desktop Linux users with more than 512 MB of RAM may not even need a swapfile at all.
Re:This reminds me of an old convo I had ... (Score:3, Interesting)
Re:This reminds me of an old convo I had ... (Score:5, Interesting)
First you should worry about how your O/S does "memory overcommit".
Many O/Ses overcommit memory. How they handle the case when it turns out there really isn't any memory left (including swap) is what you'd want to know. Some O/Ses (and versions of an O/S) effectively kill -9 random processes until there's enough RAM to run. Some applications intentionally allocate large amounts of memory and usually don't ever use it, so they usually won't work if you have overcommit turned off (and not enough RAM+swap).
If you have tons of swap just to work around your O/S's poor handling of memory overcommit, you may end up in a death spiral of swapping. Running processes page by page off your HDD isn't fun to watch (it's so 50s or was that 60s
My HDD transfers at max 40-50MB/sec, random seek transfer maybe about 11MB/sec.
At worst case how long does it take to swap out and swap in the largest process you'd ever have, given the speed of the HDD? Can you wait that long? Can the app wait that long? Will the machine be dead for practical purposes?
So if you can wait 20 secs, maybe 512MB is ok, assuming the pig process only uses half or so of your swap (plus whatever physical RAM you have).
But with a small swap, you may run out of mem and hit the memory overcommit scenario.
I'd still keep swap, just so that when my machine runs out of memory it starts slowing down rather than slamming full speed into a hard wall.
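Plugging the numbers above into a throwaway calculation makes the trade-off concrete (the figures are the ones quoted in this comment, not measurements):

/* swap_time.c - back-of-the-envelope check of the "can you wait that
 * long?" question above, using the poster's example figures. */
#include <stdio.h>

int main(void)
{
    double swap_mb     = 512.0;  /* swapped-out size of the pig process */
    double seq_mb_s    = 45.0;   /* ~best case: sequential transfer     */
    double random_mb_s = 11.0;   /* ~worst case: seek-heavy transfer    */

    printf("best case : %.0f s to page %d MB back in\n",
           swap_mb / seq_mb_s, (int)swap_mb);
    printf("worst case: %.0f s to page %d MB back in\n",
           swap_mb / random_mb_s, (int)swap_mb);
    return 0;
}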
Re:This reminds me of an old convo I had ... (Score:3, Interesting)
Yes, that would be the day before I "upgraded" from Red Hat Linux 6.2 to RHL 9. (P200 64MB, swap partition on a separate HD) I use fvwm and I don't expect mozilla to be fast, but it really sucks when it takes several seconds to get the menu to pop up on an xterm. I have a pathetic fantasy that I will upgrade to a 2.6 kernel and my system will work as well as it did 5 years ago (and that I will get an X ser
Re:This reminds me of an old convo I had ... (Score:2)
Funny though - I've worked on SunOS and Linux since the early '90s - and comparing Linux/x86 to Solaris/SPARC to me is often like comparing Harry Potter to Tolkien - at least it's a similar genre
--
"the primary stupidity is that of arrogance"
Re:This reminds me of an old convo I had ... (Score:3, Funny)
Problem (Score:5, Interesting)
No, it isn't always possible. With today's RAM prices I almost always have more physical RAM than the system requires. But due to aggressive VM swapping, there are still hundreds of megs swapped out to disk when there is no need at all. This means that those applications, when their time does finally come, are slow because they must be retrieved from disk first. It's really annoying sometimes. Yet, even with excess RAM, turning off swap is disastrous.
Re:Problem (Score:5, Interesting)
No, turning off swap is not disastrous. We've turned it off on our production web server cluster that routinely serves 60Mb of sustained traffic. We've turned it off because we have 2GB of RAM in these machines, and Linux insisted on preferring buffers and cache over our running applications. Fuck that, we said. With over 1GB of buffers and cache, we had RAM to spare; bye-bye swap.
Re:Problem (Score:2)
Yet, even with excess RAM, turning off swap is disastrous.
I find that swap partitions in Linux and FreeBSD are just a nuisance once you've got enough RAM for your apps. Swap files are preferable because you can change the size and number of the files after installation. Swap partitions just waste valuable space on your HDD.
I have 1GB of RAM on my laptop, and Linux, FreeBSD, Windows 98 SE and Windows XP all run fine without any swap partitions or files on my quadruple boot.
The virtual memory alg
Re:Problem (Score:2)
Re:Problem (Score:2)
Don't swap until it's necessary seems the right thing to do. If IO isn't busy, you could send older data to disk, but you'd need a
Re: (Score:2)
Memory (Score:2)
No, it isn't really. Unless you don't use your computer.
In some cases it makes sense to use your physical memory as disk cache rather than for unused applications.
Swap out that sshd, and give the database server more memory. Swap out that screensaver and email client, give quake more.
2Gb of RAM, 300Mb of apps running. (Score:2)
The big issue (Score:5, Interesting)
I think developers could do more at a library level. For example... dare I suggest using common sub-libraries within libraries? That is, people like KDE and GTK get their heads together and ask, "Are there functions we include in our libraries that could just as well be linked to an underlying library?"
Re:The big issue (Score:3, Informative)
Well, you see, KDE is written in C++. GTK is C. C++ code does not play well across different versions of the same compiler, let alone different compilers or even different languages.
In theory you're only "supposed" to use either GNOME or KDE and therefore only have one set of libr
Re:The big issue (Score:3, Informative)
It doesn't work that way (Score:5, Informative)
I keep my memory usage well below the total RAM on the servers, but in real life the machine still swaps. This is because even though the machine NEVER needs more RAM than is available at any given time, over a period of days it will use more than the available RAM. It pages out the old data that was used 12 hours ago.
Unless you reboot every day (as on a client machine), you will use swap on just about any machine. Using swap is not bad. Using swap for a currently running application is not so good. This isn't a bug, it's a feature. Reading data from swap after it has been accessed is still faster than reading new data from the drives, especially if it's a network drive.
Re:It doesn't work that way (Score:2)
Using swap is not bad.
No it isn't, but constantly swapping a lot of things in and out is, and you'll notice a considerable slowdown of your machine.
And that's when you need to consider buying more ram.
What's wrong with many resident pages? (Score:4, Informative)
Why not? BloatyApp, if it's that bloaty, is probably an object-oriented program with template instantiation (or is by Micro$oft); these programs are notoriously huge, but also have notoriously poor locality of reference. The user will get better perceived response if you can keep more of BloatyApp resident.
If there's space in memory, I don't see the point of pre-emptively ejecting as many of BloatyApp's LRU pages as possible. (Of course, I haven't RTFA, but this is
Bloaty apps? Are you kidding me? (Score:5, Insightful)
Ah yes. It's all the fault of bloaty apps. Apps like database daemons and high-traffic httpd daemons. We've turned swapping off on our servers because we were sick of seeing almost a GB of cache/buffer memory, while it was swapping 500MB of shit to disk. Want a bloaty app? How about the linux Kernel? I love the thing, but Jesus Tapdancing Christ it would rather swap our starting DB process to disk, than free up the fucking buffers and cache. Is there something wrong with wanting it to give precedence to not swapping?
Re:Bloaty apps? Are you kidding me? (Score:2, Insightful)
Re:Bloaty apps? Are you kidding me? (Score:2)
Re:Bloaty apps? Are you kidding me? (Score:2)
Re:Bloaty apps? Are you kidding me? (Score:3, Interesting)
Your server apparently believed that it was accessing that cache and buffer more often than that half gig of random pages. Do you have real reason to believe that it was wrong, or does that just "seem" bad?
In other words, do you have actual numbers to demonstrate that your kernel was making poor decisions, or are you only fairly sure that it was?
Swapless since 1997 (Score:4, Insightful)
With read-only & demand code-page loading and copy-on-write even bloatware really doesn't eat memory. And bloatware has to be frequently restarted to recover the memory it leaks.
Sure, there are some jobs that need swap -- lots of seldom-used memory pages.
But not mine. I prefer to save myself the complexity and performance headaches.
VM you say? (Score:3, Insightful)
Or is it just the Virtual "M"?
Re:VM you say? (Score:2)
It's the change in meaning of UML that I can't get my head around these days...
Other reasons (Score:5, Interesting)
At least, that's the rationale I've read behind OS X's strategy of swapping things out long before all physical memory is used (and of keeping a pool of zeroed memory pages ready to fulfill most requests). Note that this does not require superfluous swap-ins if your reuse strategy is balanced properly, as the fact that something is swapped out doesn't mean that the memory which contained that data will be cleared/reused immediately (i.e., if it's needed again shortly afterwards, that page can be reactivated without having to go to disk).
Under most desktop OSes, programs can even give the system hints about their usage of a memory region, using e.g. the madvise() system call.
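A minimal sketch of what such a hint looks like on Linux; the file name and access pattern are made up for illustration:

/* madvise_hint.c - minimal sketch of the madvise() hint mentioned above.
 * The file name is hypothetical; error handling is trimmed for brevity. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/bigdata.bin", O_RDONLY);   /* hypothetical file */
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0) { perror("open/fstat"); return 1; }

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Tell the kernel we'll read this region front-to-back once, so it can
     * read ahead aggressively and drop pages behind us instead of letting
     * them crowd out other memory. */
    madvise(p, st.st_size, MADV_SEQUENTIAL);

    long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += p[i];

    /* We're done with it: hint that the pages can be reclaimed now. */
    madvise(p, st.st_size, MADV_DONTNEED);
    munmap(p, st.st_size);
    close(fd);
    printf("checksum: %ld\n", sum);
    return 0;
}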
My vote.... (Score:4, Informative)
Re:Other reasons (Score:3, Insightful)
That's what FreeBSD's been doing for years, and for a long time kernel hackers spoke in awe of the much-vaunted FreeBSD VMM. Now that Linux has implemented a similar strategy, everyone's freaking out like it's some new ego trip that no one's ever tried before.
The "new" system is what other OSes have be
Windows already has this (Score:2, Funny)
RTFA - swappiness is tweakable (Score:2)
Re:RTFA - swappiness is tweakable (Score:2)
Return of the Sticky Bit (Score:2, Interesting)
In modern Unices (including Linux), last I heard, the sticky bit is ignored since everything is simply demand-paged.
Couldn't the sticky bit be revived with a similar meaning? As in, "don't be too keen on paging these out"?
The amount doesn't matter, it's the stickiness (Score:3, Interesting)
However, what I mind is the fact that the pages that are swapped out STAY there!
Why not age the disk cache the same way the RAM pages are aged? On an idle machine, the disk cache would gradually decay and be replaced by the pages coming back from swap, and the machine would be all responsive again.
It means that if the user leaves for lunch and a cron job wants to eat all the disk, with some luck, when the user gets back, his machine is as responsive as it was when he left.
I have a laptop with 192MB of RAM, and I always hate it when 2/3 of the RAM is "free" while it takes 10 seconds for the KMail window to move to the front, even if the machine has been idle for hours.
I even regularly do a "swapoff -a; swapon -a" to claim RAM back from the cache!
Re:The amount doesn't matters, it's the stickinnes (Score:4, Interesting)
I know what you mean, but in this case it seems like your machine is making a reasonable guess: you haven't used KMail in hours, so the odds of you wanting to resume using it at any particular instant are pretty low. On the other hand, reading from a drive is quite a bit faster than writing, so the penalty for incorrectly swapping out old pages when the system is idle is significantly less than the penalty for incorrectly not swapping out old pages before a user launches a giant process that wants to allocate a lot of RAM very quickly.
The kernel's page cache is the key... (Score:5, Insightful)
Without a swap file, the kernel has no place to stick memory segments that are rarely used. They stay in resident memory la-la land until the process is terminated. Those segments add up over time and erode the memory available to the page cache.
Page caches are wonderful. When you load an application (like Firefox [mozilla.org]), you're not just getting the web browser. You're firing up a large chain of shared objects/DLLs that support the widgets, I/O, and components of the application. All of these components must be read into memory anyhow for the program to run, so the kernel tends to just leave them in there for future use (the page cache).
When you shut down Firefox, you also release the need for those libraries (provided nothing else is using them), and they drop out of memory. If you then load another application (like Thunderbird [mozilla.org]) that uses the same libraries, the kernel will not have to go to disk to fetch them; it will use the page cache contents instead.
Turning off the swap file in the historic era of VM infancy was the best way to remove the hard drive bottleneck from the system. The operating systems of yesteryear did not have good page cache schemes that took advantage of all that unused memory. It is a little different now.
Applications are so modularized that they are broken up into billions of smaller libraries so that code can be shared. This increases memory efficiency by keeping a shared library resident for multiple processes. These libraries are accessed more often than many people realize. Getting THOSE into memory is better than making sure my 500+ Linux applications stay resident. Notice that on a web server with 1GB of RAM, the Linux kernel is still putting things out to swap. Processes that stay asleep for long periods of time do not need to waste the memory that the page cache is currently using (892309504 bytes or 753.7MB). What would be stored in that 753.7MB of memory? The database that drives the website (instead of having to seek the disk). The entire web page hierarchy used to display pages on the site. All the scripts that are used to display dynamic content on the site (etc., etc.)
Now, if we subtracted from the page cache the amount of memory that was stored in the swap file, we would have over 200MB less that we could keep cached in memory. That could be an entire database that the kernel would then waste needless CPU cycles to fetch from disk.
The only advantage to turning off the swap file on these modern machines would be for a machine that runs only a select few applications and doesn't have a lot of processes in the background doing things.
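The page cache effect described here is easy to see for yourself: read the same large file twice and time both passes; the second pass is served from RAM. A rough sketch (the path is just a placeholder):

/* pagecache_demo.c - rough illustration of the page cache effect described
 * above: the second read of the same file comes from RAM, not the disk.
 * The file name is a placeholder; use a freshly booted box (or otherwise
 * cold caches) for a clean first pass. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static double read_all(const char *path)
{
    char buf[1 << 16];
    struct timeval t0, t1;
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror(path); return -1.0; }

    gettimeofday(&t0, NULL);
    while (read(fd, buf, sizeof buf) > 0)
        ;                               /* just pull the pages in */
    gettimeofday(&t1, NULL);
    close(fd);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
}

int main(void)
{
    const char *path = "/usr/lib/libbig.so";          /* hypothetical large file */
    printf("cold read: %.3f s\n", read_all(path));    /* hits the disk        */
    printf("warm read: %.3f s\n", read_all(path));    /* hits the page cache  */
    return 0;
}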
Re:The kernel's page cache is the key... (Score:3, Interesting)
> stick memory segments that are rarely used.
Anyone who runs Mozilla on Windows 2000 knows that if you minimize Mozilla for half a day, despite having 756 MB of RAM and not using more than 300-400 MB of it at any given time, bringing Mozilla back to the foreground takes anywhere from 2-6 seconds (depending on the speed of your disk), which is just idiotic on a 2 GHz home machine with that much RAM.
There is no reason whatsoever that the OS should be s
What algorithm are they going to use (Score:4, Interesting)
AIX uses LRU today, so when you do a backup, the system tries to keep all filesystems in cache (well, whatever was read last!), and will happily swap your apps out to disk in order to do so (with the default tuning parameters).
I fondly remember the days when I was running Linux with no swap, none whatsoever...
Dumb Swapping is Computer Abuse (Score:5, Interesting)
Unfortunately the current crop of best-guess VM managers end up denying the end user the experience of their computer's peak performance. Coupled with the horrible state of application bloat, modern 'state of the art' hardware and software combine to give us less and less in terms of overall performance. Software developers throw more code at the CPU to add functionality with little or no concern for performance. And hardware manufacturers add more and more 'special instructions' and 'pipelining' which the majority of software is completely unable to use. If anything it's more like a bunch of dysfunctional co-dependents than an industry that is cogent about what really needs to be going on. If the folks dealing with processors and application software could take a page from the gamers (look at the high levels of integration between game engines and video cards), and if more effort were put into consolidating functionality in DLLs and shared libraries, we would be amazed at how truly fast these machines could perform.
Copy vs Swap? (Score:2)
Not amused (Score:4, Interesting)
Some apps _can_ make the system unresponsive enough to ignore keystrokes, which is *very* annoying. At other times, xmms will stop playing while the disk goes crazy... Switching from emacs to Firefox after 10 minutes usually takes an extra 5 seconds to redraw the window and load all the stuff again.
Running GNOME2 on this laptop is also quite noisy on the disk. It swaps all the time...
Swapping on servers vs. Desktops... (Score:2, Interesting)
This point is useful, but only if free RAM is at a premium. For the most part, on servers, there will be sufficient RAM to support the on board applications, and the amount of free RAM remaining will be able to handle the variable load of a standard workday
Swappiness vs. buffering (Score:2)
It seems that the original thread was not about swapping in or out, but about the amount of cache that is used by the kernel.
I have the same problem. I have 2GB of RAM, and I run my KDE desktop, some standard server programs and some UML instances.
When I create UML instances (e.g. an 8GB image), my memory fills up and is not easily reclaimed.
I agree with the philosophy of the buffers and the cache, to speed up IO operations for recently accessed files, but I do not agree with the time that they are i
Swapping back in. (Score:4, Insightful)
So what you want to do is:
So if the guy goes to lunch leaving a big make running, it gradually pushes the big apps out while it runs. But if the big make completes, the apps start crawling slowly back in. If it hasn't finished when he comes back from lunch, he probably wants it to carry on running the make: since the CPU is at 100% load, he is probably not surprised it is sluggish.
Why swapping is _good_ (another article) (Score:3, Insightful)
describe why swapping is _good_.
Turning off swap rocks (Score:4, Interesting)
The 2.4 VM changes causing this behavior were awful, and it's too bad that I have to sacrifice a large (disk-based) physical address space, but I'm not going to put up with my applications being paged out when I have 4x as much RAM as code I'm running. Just allowing the system admin to put a limit on the size of the buffer cache would probably solve most of my problems, but instead I have to turn off swap. Too bad.
Keep two copies (Score:3, Insightful)
Another problem that many have noticed, and that isn't easy to deal with, is heavy disk access causing the cache to grow and stuff getting swapped out. Yes, even some Linux versions suffer from this problem. A Red Hat 9 system I had running for months was really slow in the morning, because all the programs had been swapped out while cron jobs were running during the night.
But you never know when it is a good idea to swap the stuff out and when it is not. When the disk access is going on, the process page might not have been used for hours, but you might still want it kept in RAM. File pages that have been accessed just once shouldn't be kept in cache for a long time, but of course you shouldn't remove them unless the memory is needed for something else. Removing the pages too early is also bad, because you wouldn't notice that this was really a page that was going to be accessed frequently.
Some people are fanatical and don't want process pages to ever get swapped out to make room for cache. That isn't a good idea either. You can really have process pages that may not be needed even once; do you want such a page to be kept in RAM for months just in case? And notice that disabling swap is not going to solve the problem. You still have to think about memory-mapped files, which in many ways must be treated like anonymous mappings.
The answer is . . . "It depends" (Score:4, Interesting)
Swapping before necessary can be good (Score:5, Insightful)
I've seen a number of posts echoing this point, overlooking one of the key reasons for swapping. It's not just because you're out of memory for applications; it's because sometimes there are better things to be doing with your memory. Mainstream operating systems use otherwise unused memory to cache disk access, dramatically speeding things up. If you've got a process that hasn't been run for a while, it may actually be more efficient to swap it to disk. This frees up memory to cache data that may be being hit quite frequently. inetd hasn't been needed for a while? Swap it out so that your disk cache is larger, benefiting your heavily used web server.
To be fair, when to make that trade-off is very tricky and will never work perfectly 100% of the time. Inevitably you'll occasionally be burned by a bad decision. But there are real benefits. The real question is not how to turn it off; the question is how to improve it, and perhaps how to allow users to tune it for their needs.
adaptive algorithms (Score:3, Interesting)
- dirty-marking unreferenced pages when swapped: if these memory pages are not used after the swap-out, there's no need to swap them in again. I'm pretty sure this already occurs.
- for processes with high swap demands, increase their weighted priority for pages, with a windowed average over swaps. Then my database process could hog pages under load, while my less-used apps may swap because they're used less often. Could be tailored differently for code versus data segments.
- page-image comparisons to avoid holding duplicate code-segment pages in memory (see the sketch after this list). This plays with the concept of shared libs a bit, but could avoid duplicate pages, especially if this information is saved in a precalc'd hash table that is stored.
just ideas.
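As a toy illustration of the last idea, here's a user-space sketch that hashes page-sized chunks and counts duplicates; a real implementation would live in the kernel and still compare bytes on a hash match:

/* pagehash.c - toy sketch of the "page-image comparison" idea above: hash
 * each page-sized chunk of a buffer and count pages that duplicate a later
 * one. Purely illustrative bookkeeping, nothing kernel-side. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/* FNV-1a over one page */
static uint64_t hash_page(const unsigned char *p)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < PAGE_SIZE; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

int main(void)
{
    size_t npages = 1024;
    unsigned char *buf = calloc(npages, PAGE_SIZE);   /* all-zero pages */
    uint64_t *hashes = malloc(npages * sizeof *hashes);
    size_t dupes = 0;

    if (!buf || !hashes) return 1;
    memset(buf + 7 * PAGE_SIZE, 0xAB, PAGE_SIZE);     /* make one page differ */

    for (size_t i = 0; i < npages; i++)
        hashes[i] = hash_page(buf + i * PAGE_SIZE);

    /* naive O(n^2) duplicate count - fine for a toy */
    for (size_t i = 0; i < npages; i++)
        for (size_t j = i + 1; j < npages; j++)
            if (hashes[i] == hashes[j] &&
                !memcmp(buf + i * PAGE_SIZE, buf + j * PAGE_SIZE, PAGE_SIZE)) {
                dupes++;
                break;                                /* count page i once */
            }

    printf("%zu of %zu pages duplicate a later page\n", dupes, npages);
    free(hashes);
    free(buf);
    return 0;
}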
Swap vs. disk/file cache (Score:3, Interesting)
There is a simpler and more powerful scheme that unifies swapping and disk caches, while allowing applications to persist between reboots, all with better performance than current systems!
EROS implements [eros-os.org] such a system. Generally it is referred to as "Orthogonal persistence", and functionally it behaves as though the computer is "always on", and returns to the exact state it was in after a reboot. The thing is, with orthogonal persistence, the structure on the disk is not a file system, but just the application data.
Since applications no longer work with the disk explicitly (open/read/write) but only with one type of memory (persistent memory), the OS manages all of the disk I/O, and it allows it to eliminate almost completely the largest delay in disk-work - the seek time in all writes. Since all application memory is just mapped to disk transparently, all RAM is just considered a "disk cache", and the kernel does not have to make nasty tradeoffs between disk caches (of explicit open/read/write calls) and virtual memory.
Of course there is still a problem if large work-areas of unimportant applications "swap out" smaller areas of important applications. I suggest solving that by prioritizing pages to the memory manager. In a system like *nix it is not a problem. In more secure systems however (EROS, for instance), it may create additional covert channels between applications so it was avoided.
Swapping out unneeded stuff on bootup (Score:3, Interesting)
#!/usr/bin/perl -w
use strict;
my $a = "xxxxxxxxxx" x (131 * 1024 * 1024);  # build a ~1.3GB string to push idle pages out to swap
This is just a quick hack; you may want to adjust the size to suit your memory size. The server this script was copied from has 2GB of memory. Essentially, I want to page out all the stuff that doesn't get used after starting the server and the related server processes. Of course, given enough time the server would swap out those pages anyway, but this method just does it quicker. After the script has been run, the server will gradually swap back in the pages it really needs. OK, doing this may be pointless, but I don't care
These guys are on crack... (Score:3, Insightful)
The harddrive is really, really, really, really fscking slow. In comparison Ram is really really really fast. As a result, you want to interact with the hard drive as little as you possibly can, and interact with ram instead as much as you possibly can (the only thing which beats that is interacting with only the cpu registers and avoiding ram and harddrive altogether).
As is, Linux doesn't even begin touching the disk until there is only enough RAM left to turn on VM. Now, this has a negative impact when that limit is reached because there is overhead in turning it on... but this impact is negligible and tweakable, since you can wait and see if you're hitting the limit, add more memory, check again and re-evaluate until you simply aren't swapping. This is a good thing.
One of the worst things Windows does is swap constantly. In fact, beyond a certain point (read: enough RAM to run an XP desktop) the system swaps MORE if you have more RAM. You boot the system with all unneeded services turned off, no startup processes, and all the eye candy turned off, and you've got 4GB of RAM in the system; guess what, it's already using VM.
Maybe VM management itself could be tweaked more, but it certainly shouldn't be used unless it absolutely has to be (and if you don't have enough RAM and it has to all the time, then it's not like you suffer that performance hit more than once).
The only exception to this I've found is a Linux desktop running KDE or GNOME with about 256MB of RAM; at that point the numbers seem to work out just about right (or wrong, I should say) and the system is constantly turning VM on and off, encountering the performance hit again and again and again, with pretty much every operation you perform.
Sticky Bit!!! (Score:4, Interesting)
Re:they missed one of the biggest points! (Score:3, Insightful)
Re:they missed one of the biggest points! (Score:3, Interesting)
One time, I had disk corruption in the swap partition. When I booted the machine, everything went well until I started opening applications. The machine swapped out more and more data until it reached the first bad sector in the swap. It crashed quite spectacularly.
Once I figured out what happened, I replaced the disk.
That was in the days of the 2.0 kernel. My machine had 16 MB RAM IIRC.
Re:they missed one of the biggest points! (Score:2)
Re:they missed one of the biggest points! (Score:2)
That half-million times is only true if you're accessing a single word from the page. It'll come down to about 500,000 / (PAGE_SIZE/MEM_ACCESS_SIZE), or about 1000 for IA32 processors [*1], if you're accessing the entire page, as you'd need to do to perform a checksum. Still, it is just a drop in the ocean...
*1 - using 4K pages and 64-bit memory accesses. What size page does Linux use for swapping? Larger pages would be more efficient...