Can Swap Space Solve System Performance Issues? (utoronto.ca) 201

Earlier this week on the Linux kernel mailing list, Artem S. Tashkinov described a low-memory scenario where "the system will stall hard. You will barely be able to move the mouse pointer. Your disk LED will be flashing incessantly..."

"I'm afraid I have bad news for the people snickering at Linux here," wrote Chris Siebenmann, a sys-admin at the University of Toronto's CS lab. "If you're running without swap space, you can probably get any Unix to behave this way under memory pressure..." In the old days, this usually was not very much of an issue because system RAM was generally large compared to the size of programs and thus the amount of file-backed pages that were likely to be in memory. That's no longer the case today; modern large programs such as Firefox and its shared libraries can have significant amounts of file-backed code and data pages (in addition to their often large use of dynamically allocated memory, ie anonymous pages).
A production engineer (now on Facebook's Web Foundation team) wrote about experiencing similar issues years ago when another company had disabled swapping when they replaced or reinstalled machines -- leading to lots of pages from hosts that had to be dealt with. This week they wrote: I stand by my original position: have some swap. Not a lot. Just a little. Linux boxes just plain act weirdly without it. This is not permission to beat your machine silly in terms of memory allocation, either... If you allocate all of the RAM on the machine, you have screwed the kernel out of buffer cache it sorely needs. Back off.

Put another way, disk I/O that isn't brutally slow costs memory. Network I/O costs memory. All kinds of stuff costs memory. It's not JUST the RSS of your process. Other stuff you do needs space to operate. If you try to fill a 2 GB box with 2 GB of data, something's going to have a bad day! You have to leave room for the actual system to run or it's going to grind to a stop.

  • Meanwhile my Windows installation gradually increases in size to the point I wonder: didn't Microsoft stop supporting 7?
  • What is Swap Space? (Score:3, Informative)

    by Androgynous Crowbart ( 6156226 ) on Saturday August 10, 2019 @12:44PM (#59074198)
    Swap space in Linux is used when the amount of physical memory (RAM) is full. If the system needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space. While swap space can help machines with a small amount of RAM, it should not be considered a replacement for more RAM. Swap space is located on hard drives, which have a slower access time than physical memory. Swap space can be a dedicated swap partition (recommended), a swap file, or a combination of swap partitions and swap files. In years past, the recommended amount of swap space increased linearly with the amount of RAM in the system. But because the amount of memory in modern systems has increased into the hundreds of gigabytes, it is now recognized that the amount of swap space that a system needs is a function of the memory workload running on that system. However, given that swap space is usually designated at install time, and that it can be difficult to determine beforehand the memory workload of a system, we recommend determining system swap using the following table. That's some of what RedHat has to say about swap space.
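    Since the thread keeps coming back to how much swap a box has, here is a sketch of how you would inspect and add swap on a typical Linux system (the 2 GiB size and /swapfile path are arbitrary examples; these commands need root):

    ```shell
    # Show current swap areas and overall memory/swap usage
    swapon --show
    free -h

    # Create and enable a 2 GiB swap file (fallocate may not work on all
    # filesystems, e.g. older btrfs; dd is the portable fallback)
    fallocate -l 2G /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1M count=2048
    chmod 600 /swapfile         # swap contents must not be world-readable
    mkswap /swapfile            # write the swap signature
    swapon /swapfile            # start using it immediately

    # Persist across reboots
    echo '/swapfile none swap defaults 0 0' >> /etc/fstab
    ```

    Swap files avoid the install-time-partition problem Red Hat describes: they can be added, resized, or removed later without repartitioning.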
    • And while memory has increased, the size of storage has kept pace with it... It's not a big deal to allocate swap space.
      Before there WAS Linux, I was taught to allocate swap space at least 50% of physical memory for *IX systems, depending on work load. Linux came along, it was *IX and so it got the same treatment. DOS had no ability to swap, part of the DOS inferiority. Windows got the ability to swap... But it did/does so without constraint.

      Yeah, Redhat has a hand-wavey suggestion for swap configu

      • by lgw ( 121541 ) on Saturday August 10, 2019 @02:03PM (#59074354) Journal

        Swap space in the modern world should be nonsense for a desktop system. It's a concept from the 80s that needs to die. Kernel footprint is tiny compared to everything else these days, and thinking about page faults needlessly complicates kernel code.

        Paging stuff out only makes sense for a system with many users, most idle. That's a rare thing these days, but it does happen. Even then, however, it's usually one copy of the OS per user, each in a VM, on the same hypervisor. It works much better to let the hypervisor deal with any swapping, as it can de-dup memory pages.

        • by MrL0G1C ( 867445 )

          Agree, there is some weird determination to keep swap space; it's like a religion, devoid of common sense. Back when I had 4 MB of memory it made sense to have some swap space, but now that my computer does everything almost instantly with 16 GB of RAM, I don't see why I'd want my computer to slow to an unusable crawl and start using very slow drives as memory.

          If your computer is actually using swap space then you've set it up wrong.

          Supposedly if you have 8 GB then you should have 1.5 GB swap, and if you have 32 GB then

        • by Doke ( 23992 )
          Wrong. There will always be little bits of startup code, or bits of libraries, that aren't useful after a daemon has started. It's better for everything if those useless pages are swapped out to disk.
      • by rnturn ( 11092 )

        ``Before there WAS Linux, I was taught to allocate swap space at least 50% of physical memory for *IX systems, depending on work load.''

        I got used to needing 4X the size of the physical RAM based on Oracle's recommendations. We might have been able to get away with less, but the question always came up when one of the DBAs opened a ticket with Oracle, and following their recommendation took away a reason to put you at the bottom of the stack. ("Call back after you've installed more swap space.") I don

        • That gets difficult when your server has 3TB of RAM, the RAM is larger than the local disk of 1TB. So what should I do? Surely at some point having swap is pointless.

        • I always managed with swap the size of RAM when running Oracle on Solaris. The disk swap basically is never used; it's just there so that on DB startup there was always space to allocate against for the SGA SysV shared-memory segments (which locked the lot only briefly when using DISM).
    • Redhat can go piss up a rope.

      They're right you should allocate swap at install time, but you can always add more. Linux can have multiple swap devices and files, with varying priorities.

      Swap is for letting low-memory systems do certain things at all, not for improving performance.
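      The multiple-devices point can be sketched like this (device and file names are made up; a higher priority number is used first, and equal priorities stripe pages across areas):

      ```shell
      # Two swap areas with explicit priorities: the higher number is used first
      swapon -p 10 /dev/sdb2    # hypothetical fast partition, preferred
      swapon -p 5  /swapfile    # hypothetical slower file, overflow only

      # Verify; PRIO shows the priority of each active area
      swapon --show=NAME,SIZE,USED,PRIO
      ```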

    • Swap space in Linux ...

      And Unix, Windows, ... and every other OS that supports it -- many of which predate Linux. Just sayin' ...

      • At least NT can grow and shrink as needed...

        But the rise of browsers sure killed everything. I don't know how we even lived with under 16 MB of RAM and a single processor like a 486, with a browser.

        • At least NT can grow and shrink as needed...

          Not precisely. You can grow swap files and add swap files on a running system as much as you like. To shrink swap files or delete them, though, you have to reboot.

  • Backwards Headline (Score:4, Insightful)

    by Kunedog ( 1033226 ) on Saturday August 10, 2019 @12:44PM (#59074204)
    Disabling Swap Space Causes Performance Issues

    Am I missing something?
    • Disabling Swap Space Causes Performance Issues

      Am I missing something?

      Exactly this; sometimes I have to remind myself that common sense is an oxymoron though.

      Thinking further: Really, WTF? You pretty much have to make this a conscious stupid decision; this must be why the clue-by-four was invented.

      • My "swap" history goes back far enough that SunOS actually paged (though they called it a swap file) and you had to allocate (real RAM+virtual RAM) swap because memory was mapped to the file flatly. And I don't use swap any more on Linux or on Windows.

        If a process runs away from me, I want it to run into the OOM killer and die. I don't want to endure ten minutes of thrashing while the system swaps until it runs out of memory again, then still more thrashing as it swaps everything back into active memory.

        Swa

        • Even when I have stupid amounts of memory to throw at an issue, I always configure a swap; primarily because it's a useful diagnostic for finding problems before the end user does.
          (ie: if I see a VM server dip into swap; I can automatically turn off / move problem hosts without bothering anyone else, short of a small performance hiccup)
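          A hedged sketch of that kind of early-warning check, using standard tools (nonzero si/so columns under steady load are the tell):

          ```shell
          # Memory/swap statistics every 2 seconds, 5 samples; the "si" and "so"
          # columns are KiB swapped in and out per second
          vmstat 2 5

          # Kernel-level totals, for scripting a threshold alert against
          grep -E '^(SwapTotal|SwapFree):' /proc/meminfo
          ```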

        • If a process runs away from me, I want it to run into the OOM killer and die.

          Isn't that a job for one of them fancy process resource limits?
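          They do exist; a minimal sketch using bash's ulimit (the 512 MiB cap is an arbitrary example). On Linux this sets RLIMIT_AS, so a runaway allocator sees malloc fail rather than thrashing until the OOM killer fires:

          ```shell
          # Cap the virtual address space of a shell and its children to 512 MiB
          # (ulimit -v takes KiB), then read the limit back to confirm
          bash -c 'ulimit -v 524288; ulimit -v'
          ```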

      • by lkcl ( 517947 )

        Disabling Swap Space Causes Performance Issues

        Am I missing something?

        Exactly this; sometimes I have to remind myself that common sense is an oxymoron though.

        Thinking further: Really, WTF? You pretty much have to make this a conscious stupid decision; this must be why the clue-by-four was invented.

        not quite. on both laptops i've had with SSDs (a 2012 macbook pro, and an Aorus X3v6 - both of them incredibly expensive machines), allowing the SSD to be used for swap has resulted in load averages hitting 120 the moment it was used.

        that was with 16GB of swap allocated: it didn't actually matter how large it was. the moment that swap was used - by a gcc compile, by chromium, or by firefox, it did not matter what the application was - the symptoms described at the t

    • by Sique ( 173459 ) on Saturday August 10, 2019 @02:52PM (#59074464) Homepage
      Yes. You do.

      The problem is not performance issues per se, but modern applications reserving as much RAM as they can get, leaving nothing for kernel related operations, which then causes kernel operations to come to a screeching halt.

      • by Zero__Kelvin ( 151819 ) on Saturday August 10, 2019 @04:10PM (#59074584) Homepage
        As always, the problem stems from people knowing enough to be dangerous. Processes only over-allocate because the system configuration lets them. The issue is easily solved by a basic understanding of system administration. Don't want your favorite but memory hogging program to use too many resources? Use limits and/or control groups.

        The kernel only knows what you tell it when it comes to running programs. limits.conf [thegeekdiary.com] affords user- and group-level granularity, so for example make firefox, chrome, and opera part of a "browser" group and limit the browser group accordingly. Control its priority, memory limit, stack size, number of processes it can spawn, etc. Need finer granularity? Use control groups to do everything, including telling Linux which CPUs a program is allocated to run on.

        Do you want to know why the Linux community has a reputation for "blaming the user"? Two reasons: it is usually a system administrator error, and said error is usually made by people who don't know the difference between a user and a system administrator.
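        A sketch of the control-group approach described above, using systemd-run as the front end (the 2G cap and group name are arbitrary examples; on cgroup v2 systems the systemd property is MemoryMax):

        ```shell
        # Transient scope with a hard memory cap (cgroup v2 property)
        systemd-run --user --scope -p MemoryMax=2G firefox

        # The same idea with raw cgroup v2: make a group, cap it, move a PID in
        mkdir /sys/fs/cgroup/browser
        echo 2G > /sys/fs/cgroup/browser/memory.max
        echo "$$" > /sys/fs/cgroup/browser/cgroup.procs
        ```

        Under the cap, the group's pages get reclaimed (or swapped) before the rest of the system feels any pressure.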
        • by Sique ( 173459 ) on Saturday August 10, 2019 @05:39PM (#59074746) Homepage
          You are missing the point. Yes, you can find system-administrative workarounds for many problems. But you shouldn't have to in most cases. Linux once was known for reacting smoothly even under abuse, for instance handling fork() bombs rather gracefully. But an operating system running into issues because applications reserve too much memory has a huge problem, as it is one of the primary tasks of an operating system to administer computer resources, including memory. If the OS hands out memory without reserving the pages it needs to continue working, that's a core OS issue, not a system-administration one.
          • Linux once was known for reacting smoothly even during abuse, for instance handling fork() bombs rather gracefully.

            Hmm. That must have been back before I started using Linux. So, I guess this fabled time was sometime between 1991 and 1993?

          • by guruevi ( 827432 )

            The kernel doesn't run out of memory; the problem is that when you run out of memory, your system is going to hang until it clears some up. At least in Linux there is an OOM killer for that, but it doesn't always work perfectly, and often systems or even sysadmins will simply have the system auto-restart the offending application without fixing the problem.

            One of the other issues that happens is that the kernel frees up space first in the various caches, so your disk read cache especially becomes smaller

        Sorry, but if a *user* needs to do this then something is fundamentally broken. It's great that this ability to configure everything is present, but quite frankly, if the system doesn't just work (or, in the case of misconfiguration, actively warn the user of a borked config) then that is most definitely the system at fault.

  • by BrendaEM ( 871664 ) on Saturday August 10, 2019 @12:46PM (#59074206) Homepage
    I am not for unnecessary wear and tear on SSDs. In too many computers SSDs are soldered onto the motherboard and cannot be replaced. I am also against crazy log-filing, and keeping needless cookies. Just so you know, as a content creator, I have worn out more than 10% of a 1TB Samsung 950's endurance. Even small writes wear a whole block.
    • Bad advice (Score:5, Informative)

      by ttfkam ( 37064 ) on Saturday August 10, 2019 @01:14PM (#59074254) Homepage Journal

      Do not believe this. It's an example of something intuitively true but demonstrably false when the hypothesis is actually tested.

      https://techreport.com/review/... [techreport.com]

      These are drives that were reading/writing at the SATA3 bus maximum throughput (~540MB/sec) 24/7 for 18 months straight, far beyond what any normal user and even most servers would endure.

      By the time your SSD starts to wear out due to cookies (?!) and swap space, a replacement with double the size will be available for less than half the price. And by that time, you'll probably want to upgrade your whole system to whatever replaces PCI-E 3.0 x4 NVME drives and DDR4 RAM.

      It's a non-issue. An empirically tested non-issue, to be precise.

      • by Calydor ( 739835 )

        Okay, I have a question as someone who has only the most passing idea of how these things work on a physical level. Is the wear and tear of constant maxed-out performance indicative of the wear and tear under stop-and-go conditions like normal usage?

        • by guruevi ( 827432 )

          There is no such thing in electronics as stop-and-go conditions. It's pretty much binary, it's on or off. If it's not being written to, it doesn't wear out. So for most people this doesn't matter.

          We use Samsung 840 SATA and now 950 Pro NVMe as SSD caches in a large storage array (200+ TB, 200+ users) and these things don't wear out in the 7 and 4 years they're in play respectively and we write ~2-5TB every single day. We still have the earliest Intel 32GB SLC and Intel 160GB MLC's somewhere in a server serv

      • Re: (Score:3, Insightful)

        The review you're referring to tested a situation where the entire drive was free, which is fine when you overwrite it many times over.

        Now imagine a real computer which already has a lot of data. Now imagine only a little space is left. In this case disk writes will land in the same region and will decrease its life span a lot faster, in many cases thousands of times faster if your SSD drive doesn't have reserved space [google.com] (many don't).

        Also, SSD disks tend to die without any warning signs and data on them is usually not r

      • something intuitively true but demonstrably false when the hypothesis is actually tested.

        The phrase you are looking for is a specious argument.

      • SSDs are expensive. I am very much against soldering flash memory onto motherboards because, invariably, it makes the entire computer disposable. Having shot hundreds of hours of video and keeping a few dozen, I've shot enough video with my cellphone that I suspect flash integrity might have been a cause of failure for at least one of my cellphones. RAM is not cheap, but the prices are going down. There is little valid reason to use flash memory in such a temporary way. In all my Windows computers
      • You and they are forgetting that even a write of a few bytes will wear an entire block of flash. When I stated that I had worn over 10% of a SSD, I meant that as a conservative number.

        With the passing of Moore's Law, with the diminishing returns on fab investment, flash memory failures will join electrolytic-capacitor failures as a major contributor to computer failures.

        If you cannot imagine a SSD wearing out, you aren't using your computer for much.

        • Writes are cached by the controller. That's why you have to tell the OS that you want to remove (eject) USB memory sticks, as one example of flash storage. Any SSD controller that doesn't cache writes is broken by design.

          This also allows the controller to move a block that's questionable to another block and mark the original block as bad, without having to tell the OS not to do anything while it does this.

        • When I stated that I had worn over 10% of a SSD, I meant that as a conservative number.

          How did you determine that you had done this?

        • If you cannot imagine a SSD wearing out, you aren't using your computer for much.

          Your imagination is running wild. A common use case for SSDs is caching small writes, which is a load many orders of magnitude heavier than just having a page file exist on a normal OS.

      • My data point. I tossed a SSD into a computer I use to record security camera footage because the HDD was having problems keeping up with writing multiple simultaneous video streams. The $80 I paid for the SSD was a trivial expense for the business, so if the drive died I could simply replace it. It was a 250 GB Samsung 850 EVO rated for 75 TBW. Last I checked, it was over 200 TBW and still performing fine, still reporting good health.

        I'm planning to swap it for a 1 TB SSD since prices have come down
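        For anyone wanting to check their own drive's wear the same way, smartctl exposes the relevant counters (attribute names vary by vendor; these are typical, and the device paths are examples):

        ```shell
        # SATA: wear-leveling count and total LBAs written (multiply LBAs by the
        # sector size, usually 512 bytes, for total bytes written)
        smartctl -A /dev/sda | grep -E 'Wear_Leveling_Count|Total_LBAs_Written'

        # NVMe: reported directly as a percentage of rated endurance used
        smartctl -A /dev/nvme0 | grep -E 'Percentage Used|Data Units Written'
        ```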
      • by lkcl ( 517947 )

        And by that time, you'll probably want to upgrade your whole system to whatever replaces PCI-E 3.0 x4 NVME drives and DDR4 RAM.

        It's a non-issue.

        no it isn't. i bought an extremely expensive Aorus X3v6 about 2 years ago, with a 2500mbyte/sec NVMe SSD and 16 GB of DDR4 2400mhz.

        i was shocked to find that, just as in the top of the article, when swap-space was activated, even the slightest bit, the system would become so unresponsive so quickly that i had about 4 seconds in which to scramble to kill the web browser (killall -9 firefox-esr) or other processes before the loadavg would hit 120 and require a reboot to fix.

        months of research and attempted so

    • Then use zram (it's a compressed block device in RAM)

      https://en.wikipedia.org/wiki/... [wikipedia.org]

      Because this uses some RAM and gives you more swap space (via compression), if you were to use this for swap you wouldn't be writing to the NAND of your SSD or NVMe; it's virtual, with a little overhead on the CPU, which in most cases is fine because you are never using 100% of the CPU at every moment. This has worked well for me on an Eee PC and the Acer Transformer series of tablets (note: don't get one, CherryTrail sucks).
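      A sketch of setting zram up by hand (sizes and the compressor choice are arbitrary examples; many distros now ship a package that automates this):

      ```shell
      modprobe zram                                # compressed RAM block device
      echo lz4 > /sys/block/zram0/comp_algorithm   # fast compressor, if built in
      echo 2G > /sys/block/zram0/disksize          # uncompressed capacity
      mkswap /dev/zram0
      swapon -p 100 /dev/zram0   # high priority: prefer zram over any disk swap
      ```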

    • Just so you know, as a content creator, I have worn out more than 10% of a 1TB Samsung 950's endurance.

      What's your point? At the office, we have worn out 100% of the same model of SSD's endurance. I suspect many people have done the same.

  • Keep fuel in your car's tank.

    Seriously, how is this news?

  • Modern macOS is memory-obese, and when it goes into swap its 5400 RPM drive can’t keep up. Now that Apple solders RAM, “just add more” doesn’t work.
  • by YuppieScum ( 1096 ) on Saturday August 10, 2019 @01:30PM (#59074280) Journal

    Back in the day, RAM was expensive, machines didn't have much so the OS would swap out older/unused data to disk, which was slow.

    Then RAM started to get cheaper, machines had more, but the OS would use swap anyway. If you had enough RAM, you could disable swap and your machine would work faster.

    The actual problem is that there is now a presumption that all machines will have loads of RAM, and so applications and the OS are so bloated when they run that the OS needs swap space again... to the detriment of flash storage.

    • by dissy ( 172727 ) on Saturday August 10, 2019 @04:39PM (#59074650)

      and so applications and the OS are so bloated when they run that the OS needs swap space again... to the detriment of flash storage.

      Flash, with its limited write cycles, really hates swap.
      That's why one of the first things I do is move my swap partition over to a RAM disk. That way it can page software out of swap lickety split!

    • by davecb ( 6526 )

      As Chris notes, anonymous pages that have no backing store can't be written to data files or refreshed from the application that's running, so they have to just sit there, using up space and getting in the way, even if they aren't being used for anything.

      Having swap space doesn't make the system slower, unless you've already run out of space. In that case you have the choice of either running slowly or "thrashing", which is slower still.

    • If you had enough RAM, you could disable swap and your machine would work faster.

      That speed increase didn't have anything to do with how much RAM you had, but rather with the lack of intelligence at the OS level about how to use swap space effectively.

      to the detriment of flash storage.

      Your flash storage will be just fine. If you manage to wear out your storage simply through your page file then I'm sure someone has an award waiting for you. A common use case for SSDs is write and small-file caching on file servers, a load that is many orders of magnitude more aggressive than just having a page file on your computer, and they run just f

    • Welcome to 1997, aka the rise of Java. We used Solaris primarily for its 64-bit support so we could buy an eye-watering amount of RAM (8 GB) so stupid Java could easily gobble up 6+ GB.

      The war never ended, poor choices of garbage collected languages put us in this place.

  • vm.swappiness = 0

    Not sure why this is hard to figure out.

    Disclaimer : make sure you have enough memory
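    For reference, a sketch of reading and setting that knob (note that on recent kernels a value of 0 does not fully disable swapping; it only makes the kernel strongly prefer reclaiming page cache):

    ```shell
    sysctl vm.swappiness        # read the current value (the default is 60)
    sysctl -w vm.swappiness=0   # takes effect immediately, lost on reboot
    echo 'vm.swappiness = 0' > /etc/sysctl.d/99-swappiness.conf   # persist
    ```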

    • There are a lot of configurations you can set, including the OOM Killer. A good way to handle it is to set these two options:
      vm.overcommit_memory=2
      vm.overcommit_ratio=100
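      A sketch of applying those two settings and checking the result (with strict accounting, allocations past CommitLimit fail up front, so programs see malloc fail instead of the OOM killer firing later):

      ```shell
      sysctl -w vm.overcommit_memory=2    # strict accounting: never overcommit
      sysctl -w vm.overcommit_ratio=100   # CommitLimit = swap + 100% of RAM
      grep -E '^(CommitLimit|Committed_AS):' /proc/meminfo   # inspect the limit
      ```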
  • Never use swap with Cassandra, though. It's preferable for the node to become unresponsive or crash instead of running slow due to swapping. One of the very few exceptions.

    • by geggam ( 777689 )

      I would submit that any server in HA configuration is better off crashing than swapping.

  • Back in the bad old days of slow laptop HDDs, I used to use compcache, now known as zram [wikipedia.org]. It was fairly simple to set up, and for a while it did seem to make Linux behave a little better/faster than having no cache at all. Then I got a laptop with 16 gigs of ram and an SSD and promptly let this slide, as it no longer seemed to make much difference and I also didn't feel like risking a bug prematurely wearing out my SSD.

    (Also, although I'm sure it's simple enough I didn't know how to manually set up encry
    • The author is wrong. Just look at this quote

      If you try to fill a 2 GB box with 2 GB of data...

      1998 called and wants its computer back. You will be hard pressed to buy anything with less than 8 gigs of RAM nowadays. 16 is increasingly available as "normal". If you want a machine with 2 gigs, you'll have to go to the Sally Anne or a flea market.

  • by gweihir ( 88907 )

    Swap space is not for solving performance problems. It is for providing more memory than is there and it will always _decrease_ performance. If the memory that is simulated by swap is actually used, performance decrease can be catastrophic.

    • by markdavis ( 642305 ) on Saturday August 10, 2019 @02:28PM (#59074398)

      >"Swap space is not for solving performance problems."

      Correct

      >" It is for providing more memory than is there and it will always _decrease_ performance."

      That is not correct. Swap is used as virtual memory when the system needs more RAM for applications, but it is ALSO used to increase performance in cases where RAM is low... The kernel will actively swap out unused (/least used) stuff to make more RAM available for I/O buffering, which can greatly speed up the system. This assumes there are things it can swap out that haven't been used for a long time and are not actively needed, however. If it gets to the point where it needs stuff too often that is swapped out, thrashing will begin and system performance will become horrible.

      My point is, there are real-world cases where using RAM for disk caching rather than holding onto pages that haven't been used in hours or days, will be a great performance maintainer. There are lots of tuning variables to adjust swap behavior.
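      One way to see which processes are holding the long-idle pages described above: each process reports its own VmSwap counter, so a short loop over /proc can rank the top swap users (values are in KiB):

      ```shell
      # Print "VmSwap-in-KiB process-name" for every process, largest first
      for f in /proc/[0-9]*/status; do
          awk '/^Name:/ {n=$2} /^VmSwap:/ {print $2, n}' "$f"
      done 2>/dev/null | sort -rn | head
      ```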

      • by gweihir ( 88907 )

        Well, on the second point it depends what you call "decrease performance". If you do not have the RAM, you could say that the resulting termination of the program is the ultimate decrease in performance, and then you are right that swapping does better in comparison. But compared to using RAM, swap always decreases performance. It is a matter of the comparison you use. That said, I do agree with your explanation.

      • Is there any case where increasing the amount of RAM by the size available to swap and then reducing the swap to zero will not result in improved performance?

        If not, any performance issues are due to lack of RAM, not lack of swap.

        • >"Is there any case where increasing the amount of RAM by the size available to swap and then reducing the swap to zero will not result in improved performance?"

          Yes. The case where you can't increase RAM because the machine is maxed out (I still have lots of machines that are limited to 4GB or 16GB), or the machine has fixed RAM size (like Raspberry, or phones), or you can't get compatible RAM anymore.

        • by cas2000 ( 148703 )

          Yes. When it wastes RAM on idle data or code.

          Swap effectively increases the amount of RAM available for useful work by not wasting it on idle crap. i.e. it helps to make efficient use of a scarce resource.

          Sometimes that "useful work" is running other programs, or processing more/other data. Sometimes it is caching disk or network I/O.

          If stuff is getting regularly swapped in *and* out of RAM (i.e. thrashing) then that indicates a problem which can only be solved by either adding more RAM or reducing the w

          • I was very clear: "increased performance", not "reduced cost".

            My point is that, if you ignore cost, RAM always performs better than the same total memory that is implemented as RAM + swap.

            Swap is a cost reduction, not a performance enhancement.

        • Maybe there is, but it should still be slower than increasing the amount of RAM by the same amount and then *keeping* the original swap space, since your swappy system can technically do anything your swapless one can, but not vice versa.
    • Agreed
    • Swap space is not for solving performance problems. It is for providing more memory than is there and it will always _decrease_ performance. If the memory that is simulated by swap is actually used, performance decrease can be catastrophic.

      This is wrong, at least for any system that can use memory as a cache to improve performance, which is almost every system that can swap memory out.

      Swapping "cold" anonymous pages out of RAM to make more room for a "hot" cache is a performance win. The optimal result would be minimal time spent waiting on cache misses and swap-ins. Why have repeated cache misses to the same block if you had process memory you weren't even using in that time?

      Read up on vm.swappiness for Linux's method, which is pretty basi

  • by Anonymous Force ( 6156298 ) on Saturday August 10, 2019 @02:09PM (#59074372)

    > If you're running without swap space, you can probably get any Unix to behave this way under memory pressure...

    That's not true, and multiple FreeBSD/macOS (Unix-based) users have confirmed it. Also, "probably" in this statement makes it vague and utterly inconsequential. You can attach "probably" to almost anything: "it may probably snow in Costa Rica tomorrow" most likely will never happen, but theoretically it can.

    The author of this blog post for some reason hasn't bothered to read the discussion on LKML, as well as the discussions on Slashdot, Reddit and Hacker News. Meanwhile, multiple Linux kernel developers (including ones who work on the memory subsystem) on LKML have admitted that:

    1) The issue raised by Artem S. Tashkinov is real and easily reproducible, and more importantly it's been like that for many, many years.
    2) You can get exactly the same issue with SWAP enabled (since in Linux it's usually not dynamic).
    3) Once you hit this issue with SWAP enabled, it will be even worse.
    4) The issue is not about SWAP at all; it's about the OOM killer not doing its job due to flawed logic.
    5) Currently the Linux kernel cannot handle this situation, and in order to solve the issue once you hit it you can a) use userspace free-RAM watchers like earlyoom ( https://github.com/rfjakob/ear... [github.com]), b) press SysRq + F/REISUB, or c) limit applications' RAM allocations using limits.conf or `systemd-run --user --scope -p MemoryLimit=1G`.

    Yes, SWAP might improve performance (in a different situation altogether, not when you're running out of virtual memory), but it can also degrade it, because the kernel doesn't know which applications should be kept in memory and which must be swapped out. No, SWAP does not solve a low-memory pressure stall; it only exacerbates it.

    And one last point: there's a myth that you must always have SWAP enabled. No, if you have enough RAM for running apps and system caches and there's some left, you don't really need to enable it. Also, enabling it on SSD disks will shorten their lifespan.

    "You must always have SWAP enabled" myth was born out of necessity (and I cannot stress it enough) - nowadays you can decide whether to enable it on a case-by-case basis. Various operating systems enable it by default not because Linux/Microsoft/Apple/Oracle/whatever programmers are absolutely certain SWAP is always required, they do it because they don't know your memory requirements and available RAM in advance. In 2019 if your laptop/workstation contains 8gigs of RAM and all you do is light web browsing and editing documents/spreadsheets, you will do just fine without ever enabling SWAP.

    I have over a hundred high-load servers under my command; none of them have SWAP enabled and everything works smoothly. What's more, some of them slow down to a crawl if you enable SWAP (even with `swappiness` turned down to 0). But certainly this system administrator knows better ... myths. // b.
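    As a footnote to point 5 above: the SysRq route can also be exercised without the keyboard shortcut, which is handy over SSH when the console is unresponsive (requires root, and SysRq must be enabled):

    ```shell
    echo 1 > /proc/sys/kernel/sysrq    # enable all SysRq functions
    echo f > /proc/sysrq-trigger       # invoke the OOM killer right now
    dmesg | tail                       # see which process it chose to kill
    ```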

  • Why are anonymous comments disabled in this topic? I've created a user account just to leave a comment because I don't want to reveal my real identity on /.

      • /. is ending Anonymous Coward posting due to the amount of completely irrelevant drivel posted as comments on stories, such as stuff about Trump, the KKK, etc.
      • by HiThere ( 15173 )

        That is a truly lousy response to a real problem. A much better answer would be to let users run a (set of) regex filter(s) that sets comments matching the filter to -2 in their view.

        • Comment removed based on user account deletion
          • by guruevi ( 827432 )

            The problem is that there were people/bots upvoting each other and downvoting real people.

            Same problem they have on Twitter and Facebook, the bots simply vote for each other and unless you can detect a network of them, they make it a pain for the rest of us.

            • It isn't actually that hard to detect a network of them when their actions are all on the same site.

              If slashdot decides to care, it is easy to reduce it substantially.

              They could even require more karma before allowing moderation, and things like that. A lot of sites do that sort of thing.

  • What a surprise that the thing designed into the OS for over a quarter of a century to give the system somewhere to offload RAM when it gets full, so the system doesn't lock up, is the solution to the problem. Just how fucking stupid are admins nowadays?
  • This is kind of dumb.

    Any major intersection will grind to a halt if the traffic levels exceed local capacity. But there are different calibres of grinding to a halt, as we all know from direct observation.

    There's the guy who is too tentative, and could have managed to pass through the intersection before the light changed, but wasn't willing to be sufficiently assertive. Probably because he didn't want to be the second grade of asshole, that asshole being the guy behind the guy who ended up blocking the c

  • After my infrequent reboots, the first steps on my 10-year-old Thinkpad X1 are the following...

    1. Swap on (through GParted)
    2. sudo sysctl vm.swappiness=90
    3. sudo journalctl --rotate
    4. sudo journalctl --vacuum-time=1s

    Without SWAP the machine freezes and becomes unusable within ten minutes.
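    If you'd rather not reach for GParted, a plain swap file does the same job; a minimal sketch, run as root (size and swappiness value are just this poster's choices, not recommendations):

```shell
# Create and enable a 2 GiB swap file (fine on ext4/xfs; btrfs needs
# extra steps). Add it to /etc/fstab to survive reboots.
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
sysctl vm.swappiness=90
```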
  • One would think providing no "swap" file will prevent swapping in the VM. But that is far from the truth.

    The Linux kernel will page executables from their original disk images. So if memory runs low, it can just unload the pages and read them back from the binary the next time it is used. In fact, a very good strategy is to load zero bytes when executing a new binary, and let the VMM figure out how to load it as needed.

    That is not only for read-only data either. Programs working with large data would just mmap a file, and let

  • Why don't OSes reserve some RAM for OS operations? If a certain proportion of RAM is always off limits except to specified OS functions, then a system might not lose responsiveness (which is the key issue on a desktop).

    Then again, I am not an OS designer / coder, so what do I know?

  • If this were true, and there's some weird quirk in kernel memory management code that *requires* swap, then does this *actually* mean that I get better performance in low-memory conditions if I allocate e.g. 4 GB of my 16 GB on my workstation to RAM disk and then designate that for swap at every boot?

    I have run into the issue a few times if I do some memory-intensive compile with multiple parallel threads (make -jX where X is a bit too large) while also using desktop applications, and I'd prefer the system

    • by ls671 ( 1122017 )

      Just read the linked blog article to understand how this works, and use a zram disk if you want to swap to RAM. You should be able to swap about 6 to 10 GB onto a 4 GB zram disk, depending on memory content.
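      A minimal zram-as-swap sketch, assuming the zram module is available and you have util-linux's zramctl (compression algorithm support varies by kernel; zstd is an assumption here):

```shell
# Create a 4 GiB compressed RAM disk and use it as high-priority swap,
# so it is preferred over any slower disk-backed swap areas.
modprobe zram
zramctl --find --size 4G --algorithm zstd   # prints e.g. /dev/zram0
mkswap /dev/zram0
swapon --priority 100 /dev/zram0
```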

  • When my system sucks, I swap it for a better one.

  • A system with swap is sometimes tempted to swap out programs for unimportant data.

    A simple example:

    I have a desktop machine, with various programs open. Instead of sleeping the box when I go to bed, I let it download torrents (a slow process that a normal hard disk can keep pace with easily). Some systems then decide to swap out *everything* overnight and put as much as possible into cache (even though the i/o speed is restricted by the internet speed, which a hard disk can easily keep pace with).

    Come mor

  • Disable swap on all systems.
    Swap is evil.
    There are 2 main - VERY IMPORTANT - things to consider:
    1) All unused RAM will be used for filesystem caching. This means that whenever something is read from or written to disk (or a flash-based medium) it is read into RAM to be presented to the application needing it. Linux will not discard this information while there is free RAM anyway. This means that if it needs to be read again, it will be immediately available without touching the slow disk (yes, even your 'fast' flash-based medium is still very slow compared to RAM). Having enough RAM usable for filesystem caching is critical.
    2) SWAP is used to migrate very inactive memory pages to the (very slow) swap partition or file (depending on the kernel swappiness level - and yes, you can create swap files instead of partitions, which is much more practical). It can be useful if applications 'lose memory': write to it but forget to free it. In this way, the system has more free RAM for filesystem caching, and performance can increase.
    The fact is, however, that the amount of 'lost memory' in most situations (especially on servers or embedded systems) is not a lot compared to the total amount of RAM. Sadly, some distributions allocate a lot of swap space in a partition. In such a case, when certain applications consume more and more memory, the RAM available for filesystem caching drops to a very low level, and active memory pages get swapped in and out of the swap space.
    It is a lot better that the application in question gets killed when the system runs out of RAM with minimal filesystem caching, through the OOM killer (inside the kernel), than to end up in an active swapping situation.
    Android kills background applications on smartphones before the Linux OOM killer kicks in, and the Android killer is configurable to ensure a minimal amount of filesystem caching remains active. Linux desktop systems could implement a similar killer to prevent Chrome from starting its own swap-show, but swap is certainly not the answer; on the contrary, performance degrades before the system even starts to swap actively, as filesystem caching has already been reduced beyond 'comfortable'.
    Instead of focusing on swap, one should focus on monitoring the filesystem cache size, and alert/take action whenever it drops below about 1 GB of RAM on desktop and server systems.
    Most systems are better off without swap altogether, and it is mind-boggling that certain Linux 'server distributions' still advise swap space in 2019.
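    A monitoring check along those lines only needs /proc/meminfo. Here is a sketch run against an embedded sample snippet so it is self-contained; a real check would pipe /proc/meminfo itself, and the 1 GB threshold is this poster's figure, not a universal constant:

```shell
# Warn when the page cache shrinks below ~1 GiB (1048576 KiB).
threshold_kb=1048576
sample='MemTotal:       16280536 kB
Cached:          4194304 kB
SwapTotal:             0 kB'
# Extract the Cached: value in KiB from the (sampled) meminfo text.
cached_kb=$(printf '%s\n' "$sample" | awk '/^Cached:/ {print $2}')
echo "cached_kb=$cached_kb"
if [ "$cached_kb" -lt "$threshold_kb" ]; then
    echo "WARNING: filesystem cache down to ${cached_kb} KiB"
fi
```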
    • So you're not a professional sysadmin; that's an ignorant point of view. Wrong. It's not acceptable to have apps "killed" because someone followed your stupid advice.

    • by ls671 ( 1122017 )

      Exactly this my friend. On servers, in order to over-commit VMs memory as much as possible, I use both zram and slow spinning disk swap partitions. I turn off zram (swapoff) between 2 AM and 4 AM so memory leaks get a chance to get written to spinning disk swap. When zram is on, it has a higher priority than the spinning disks so zram is used during normal operation time.

      An important part you mentioned is memory leaks. A slow spinning disk is where they belong.

      • by cas2000 ( 148703 )

        or you could use zswap [wikipedia.org], which does roughly the same thing without requiring you to get cron to turn it off and on again every night.

  • As I understand it, Linux will copy pages that aren't often accessed to swap long before it needs the space in RAM. And there's plenty of memory that is not often used; it might be infrequently running services, or programs that have left libraries and data in memory that they no longer need, or only potentially need.

    So when the time comes, it doesn't actually cost the kernel anything to swap out and reuse the memory for something else; it moved the page to swap beforehand.

    I don't know if the kernel then co
    • by ls671 ( 1122017 )

      So disabling swap is a mistake based on the belief that only what you need is loaded into memory.

      A friend of mine upgraded from 8 GB to 16 GB RAM and he was mad because Linux was using it as file/buffer cache! He wanted the OS to leave the memory alone so he could see 8 GB of unused memory in top :)

  • That "stall" is what we used to call page thrashing. There are actually three levels of "memory" for things in virtual memory: RAM for active pages, swap space for inactive pages, and the disk files themselves for inactive pages that are backed by executables or libraries or data files. If you overload RAM and don't have swap space, the system has to discard executable code or disk-backed data, and those are the slowest to discard or re-load when they're needed (as they will quickly be, since by overloading RAM you're f

  • by m.dillon ( 147925 ) on Sunday August 11, 2019 @11:39AM (#59076658) Homepage

    An unbelievable amount of junk is being posted. The short answer is: always run with swap installed on a Linux or BSD system, period. If you don't, you're an idiot. As to why? There are many, many reasons. How much? Generally as much as main memory: if you have a tiny amount of memory, more; if you have tons of memory, like a terabyte, then less. And these days it absolutely should be on an SSD. SSDs are not expensive... think about it. Do you want 40GB of swap? It's a $20 SSD. Frankly, just putting the swap on your main SSD (if you have one) works just as well. It won't wear out from paging.

    Linux and BSD kernels are really good at paging out only what they really need to, only paging when there is actual memory pressure, and doing it in batches (no need to worry much about SSD swap write amplification these days... writes to swap are batched and are actually not so random. It's the reads that tend to be more random). I started using SSDs all the way back in the days of the Intel 40GB consumer drives, using much of them for swap, and have yet to reach the wear limit on any of them. And our machines get used heavily. SSDs wear out from doing other stupid things... swap is not usually on the list of stupid things. The days of just randomly paging to swap for no reason are long over. Windows... YMMV.

    Without swap configured, you are wasting an enormous amount of relatively expensive RAM holding dirty data that the kernel can't dispose of. People really underestimate just how much of this sort of data systems have... it can be huge, particularly now with bloated browsers like Chrome, but also with simple things like TMPFS, which is being used more heavily every day. Without swap configured, if memory gets tight the ONLY pages the kernel can evict are shared read-only file-backed pages... generally 'text' pages (aka code). These sorts of pages are not as conducive to paging as data pages, and it won't take long for the system to start to thrash (this is WITHOUT swap) by having to rip away all the program code and then instantly page it back in again. WITH swap, dirty data pages can be cleaned by flushing them to swap.

    Configure swap, use SSDs. If you are worried about wear, just check the wear every few months, but honestly I have never worn out an SSD by having swap configured on it... and our systems can sometimes page quite heavily when doing bulk package builds. Sometimes as much as 100GB might be paged out, but it allows us to run much more aggressive concurrency settings and still utilize all available CPU for most of the bulk run.
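    Checking wear is a single smartctl invocation (smartmontools); note that attribute names vary by vendor, so the grep patterns below are assumptions, and the device paths are examples:

```shell
# NVMe drives report a 'Percentage Used' endurance estimate; SATA SSDs
# expose vendor attributes such as Wear_Leveling_Count or
# Media_Wearout_Indicator.
sudo smartctl -A /dev/nvme0 | grep -i 'percentage used'
sudo smartctl -A /dev/sda   | grep -iE 'wear|media_wearout'
```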

    So here are some bullets.

    1. Systems treat memory as SWAP+RAM, unless you disable over-commit. Never disable over-commit on a normal system. The SWAP is treated like a late-level cache: CPU, L1, L2, L3, [L4], RAM, SWAP. Like that. The kernel breaks the RAM down into several queues... ACTIVE, INACTIVE, CACHE, then SWAP. Unless the system is completely overburdened, a Linux or BSD kernel will do a pretty damn good job keeping your GUI smooth even while paging dead browser data away.

    2. Kernels do not page stuff out gratuitously. If there is no memory pressure, there will be no paging, even if the memory caches are not 'balanced'.

    3. There is absolutely no reason to waste memory holding dirty data from idle programs or browser tabs. If you are running a desktop browser, swap is mandatory and your life will be much better for it.

    4. The same is true for (most) SERVERs. Persistent sessions are the norm these days, and 99% of them will be idle long-term. With swap the server can focus on the ones that aren't, and paging in an idle session from an SSD takes maybe 1/10 of a second.

    5. CPU overhead for paging is actually quite low, and getting lower every day. Obviously if a program stalls on a swapped page that has to be paged in you might notice it, but the actual CPU overhead is almost zip.

    6. The RAM required to manage swap space is approximately 1 MByte per 1 GByte of swap. I regularly run hundreds of gigab
