
Linux Performs Poorly In Low RAM / Memory Pressure Situations On The Desktop (phoronix.com) 569

It's been a gripe for many running Linux on low-RAM systems: when the Linux desktop is under memory pressure, performance can be quite brutal, with the system barely responsive. The discussion over that behavior has been reignited this week. From a report: Developer Artem S Tashkinov took to the kernel mailing list over the weekend to express his frustration with the kernel's inability to handle low-memory situations in a graceful manner. Booting a system with just 4GB of RAM available, disabling SWAP to accelerate the impact/behavior, and launching a web browser and opening new web pages / tabs can, in a matter of minutes, bring the system to its knees.

Artem elaborated on the kernel mailing list, "Once you hit a situation when opening a new tab requires more RAM than is currently available, the system will stall hard. You will barely be able to move the mouse pointer. Your disk LED will be flashing incessantly (I'm not entirely sure why). You will not be able to run new applications or close currently running ones. This little crisis may continue for minutes or even longer. I think that's not how the system should behave in this situation. I believe something must be done about that to avoid this stall."
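A rough sketch of the scenario being described, for anyone who wants to reproduce it; the mem= boot parameter, swapoff, and free invocations below are illustrative, not steps taken from the post:

    # Cap usable RAM for one boot by adding "mem=4G" to the kernel command line,
    # then (as root) disable all swap so memory pressure builds immediately:
    swapoff -a

    # Open a browser and keep adding heavy tabs while watching available memory shrink:
    free -h -s 5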

  • In 2019 (Score:5, Interesting)

    by BytePusher ( 209961 ) on Tuesday August 06, 2019 @01:51PM (#59052072) Homepage
    Computers are faster than ever, but they can't keep up with the pace of "full stack" developers' bloat.
    • by hazem ( 472289 )

      I agree with that sentiment, especially all the bloat. But I'm not sure today's computers, at least laptops, really are faster than ever.

      I recently met with a friend who had just gotten a new Dell XPS, and we both installed Anylogic and tried out one of the sample simulations. We started running it at about the same time, but my 8-year-old Lenovo T430 finished the run in about 2/3 the time of the new XPS.

      Now of course there are plenty of differences... I'm running Linux, his had Windows 10. And it's pos

  • It's swapping (Score:2, Informative)

    by mknewman ( 557587 )
    That's why it gets orders of magnitude slower, and the disk starts going crazy. Solution: add more memory or do less in memory. There are all kinds of ways to reduce memory usage.
    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Tuesday August 06, 2019 @01:58PM (#59052134)
      Comment removed based on user account deletion
      • Re:It's swapping (Score:4, Informative)

        by Prien715 ( 251944 ) <agnosticpope@gmail. c o m> on Tuesday August 06, 2019 @02:17PM (#59052368) Journal

        Generally, when Linux slows to a crawl, the OS has a hard time distinguishing between the different tasks running through the desktop, which all need to communicate with the X11 library. Once one task goes rogue, its interaction with X means that other tasks' performance is going to be shoddy. There are two good solutions. First, you can hit Ctrl+Alt+F2 through F6 to get to a virtual terminal. Since it has no interaction with the X11 library, it's quite speedy once you are able to log in. A second solution is to SSH into the machine from another machine and the same speed and technical reasoning applies here as well.

        • Comment removed (Score:5, Interesting)

          by account_deleted ( 4530225 ) on Tuesday August 06, 2019 @02:43PM (#59052622)
          Comment removed based on user account deletion
          • (And either way, it's a horrible solution. "Dude! There's a bug in the kernel that means it goes to shit when it runs out of memory!" "OMG, are you an idiot? Just connect your phone to the Wifi, run ConnectBot, connect to your laptop's IP address - YOU DO REMEMBER THE IP ADDRESS RIGHT? - and type "killall 'Web Content' ; killall 'firefox'", duh!")

            It's friggin' embarrassing. I run a mainframe operating system on my primary desktop. An operating system that runs SAP instances and the national banks of G7 coun

        • Comment removed (Score:5, Interesting)

          by account_deleted ( 4530225 ) on Tuesday August 06, 2019 @02:46PM (#59052644)
          Comment removed based on user account deletion
        • Alas, that works well only with vgacon -- both nouveau and amdgpu take a long time to switch consoles during a swappeathon. And it's worse on newer machines: on a 6 core box, it was a matter of like 20 seconds, on my current 64-way desktop I pretty much need to ssh or serial in. (Obviously assuming your load swaps on all threads.)

        • A second solution is to SSH into the machine from another machine and the same speed and technical reasoning applies here as well.

          Got it. Keep a second computer around to be able to use it to fix your Linux desktop when it goes haywire.

      • A terminal is not an OS level function. And even if it were, the window containing the terminal (assuming GUI) is not.

        The console - no GUI - should remain available but I haven't tried it in a while.

      • It wasn't a problem before Wayland. Kill-X works fine and quickly.
    • linux 'desktop' is a fucking pig, at this point!

      I am one of those that runs a window mgr (fvwm1) and no desktop, no drag-drop, no trashcan icon, no gnome and no tens of procs just there to 'support' the stupid desktop paradigm.

      go back to the old days and things are still pretty good. you can run linux well in 4gb of ram; for a long time, my system had only 2gb and things were still fine.

      tl;dr: dump the desktop and go lean and mean

    • Re:It's swapping

      Is it?
      "If booting a system with just 4GB of RAM available, disabling SWAP to accelerate the impact/behavior ..."

    • Re:It's swapping (Score:4, Insightful)

      by goose-incarnated ( 1145029 ) on Tuesday August 06, 2019 @04:13PM (#59053334) Journal

      That's why it gets orders of magnitude slower, and the disk starts going crazy. Solution: add more memory or do less in memory. There are all kinds of ways to reduce memory usage.

      I kinda agree. I do my primary development on a 2GB RAM Linux VM, running WindowMaker (16 desktops) and various small apps (lots of terminals, a calculator now and then, PDF reader, word processor, etc).

      Using the browser with 12 tabs (Mostly Jira, bitbucket and gmail) moves my RAM usage from +-600MB to 1.8GB (+ 300MB swap). If I instead start up IntelliJ or Eclipse (for those rare occasions when I do something in Java) I fill up the RAM and swap goes up to 1.5GB.

      For most people, it's only the browser that uses memory uncontrollably. As an example, close your browser completely, then reopen with only a single tab that goes to slashdot - my experiment on this resulted in the browser using around 400MB of RAM with only slashdot open. Each time I refreshed the page after login, it used another 50MB or so of RAM.

      For around a few MB of actual non-advertisement content, the browser will grab a couple hundred MB of RAM and never let it go again! Honestly, I don't even blame the websites at this point - the browser should be able to cut off unwanted content at the knees... but of course, they don't.

      For long-lived tabs that haven't been active for a significant time, the browser should flush that crap out to disk (they keep it all in RAM, images and apparently video too!).

      For tabs that go over some threshold (calculated as a percentage of all tabs open, maybe) they should stop all network traffic (put a button saying "This site is sending 200MB, press to continue receiving this data").

      For javascript, why is there a sandbox that cannot constrain the memory being used?

      Why can't the browsers shut off the auto-fucking-playing ads? At least provide me with a button that makes *all* video content downloaded only on request.

      If browsers made the toxic sites perform poorly so that the rest of the system stays responsive, sites would very soon get the fucking message and change their ways. There are only two competing browsers on the desktop. If one of them enforced good behaviour from websites, the other would soon follow due to a loss of users.

      Unfortunately, of the two competing browsers one is unashamedly produced as an ad delivery mechanism with actual content being an afterthought, while the other is busy self-fellating to a new programming language that won't improve things one bit anyway.

      The world needs a new browser (well, a rendering engine, anyway). It will take Mozilla too long to collapse under their own gravity, and once they do there is no promise that from the ashes a slimmer browser will emerge.

  • Try running Solaris under memory pressure and you can see what "running poorly" actually means. The fact of the matter is that memory pressure will always cost you a lot of performance and responsiveness, there is no way around that. The one thing that could actually be improved is making mouse and keyboard more real-time, but even that will not help much. Other than that, get more memory. And maybe avoid memory-hungry applications. Also, disabling swap? Seriously? Swap is what keeps the system alive under

  • Increase /proc/sys/vm/watermark_scale_factor from the default of 10 to 500. (Others have reported it working at 200; I tested with 100 and that wasn't enough, but 500 works great.)

    The system will then be able to swap cleanly under memory pressure without stalling.
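    For reference, a sketch of changing that tunable at runtime and persisting it; the value 500 is the one reported above, and the sysctl.d file name is just an illustration:

    cat /proc/sys/vm/watermark_scale_factor     # check the current value (default 10)
    sysctl -w vm.watermark_scale_factor=500     # raise it on the running system (as root)
    echo "vm.watermark_scale_factor = 500" >> /etc/sysctl.d/99-watermark.conf   # persist across reboots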

      The system will then be able to swap cleanly under memory pressure without stalling

      They turned off swap in this test because they are looking to make the balancing within the memory model make better guesses. Yes, you can thrash all day long; that's beside the point of these tests.

      • by Rashkae ( 59673 )

        The LKML Post here: https://lkml.org/lkml/2019/8/4... [lkml.org]

        It doesn't say anything about disabling swap.

        I know well the problem that is being described here, and did much testing to find a solution that works well (with current kernels, not hacking the kernel code itself).

  • Am I getting old? (Score:5, Interesting)

    by Dartz-IRL ( 1640117 ) on Tuesday August 06, 2019 @01:59PM (#59052154)

    'Just' 4GB of RAM

    It wasn't that long ago that I had Xubuntu humming away on a Pentium M with 256 megabytes.

    Was it that long ago? When was 7.10 released?

    Where does it all go?

    • Slackware 14.2 performs just fine running XFCE or Enlightenment on a single core laptop maxed out at two gigs of RAM.
    • by jandrese ( 485 )
      For one, web browsers decided that every tab needs to be its own process.

      I have one desktop that is a Raspberry Pi Model B (the original), and it's pretty clear how performance will nosedive if you open more than 1 or maybe 2 browser tabs. It's actually kind of usable with just one tab though. Enough to do google searches and read forum replies to help solve problems in other areas. The other thing that kills the box is big compiles, especially for programs like Wireshark where some of the .c files w
    • Was it that long ago? When was 7.10 released?

      I think it was in 2009. Anyway, one laptop I got in 2009 had 2GB RAM and Win7. Ran decently. Now, it can barely run a web browser, though old versions of Gimp, InkScape and Audacity run decently.

      In recent years, I've been using it for on location sound recording, running a 2012 version of Audacity with an old Teac 4 channel, USB sound interface and an older Crown quad microphone pre-amp stuffed in an aluminum attache case. (The laptop fits in the lid of the case. To setup, take out the laptop, plug in power an

  • Well, don't do that..

    For Pete's sake, who disables the swap space? There is a REASON we have that area on disk, just like Windows has the page file on disk. It is there to temporarily store memory pages that are not part of a process that's scheduled to run. So things like initialization code, which ends up in your process's memory space, can be swapped out to make room for more pages. This is true on Windows AND Linux.

    By the way.. This isn't a Linux only problem. If you disabled the windows page file spa

    • by UnknownSoldier ( 67820 ) on Tuesday August 06, 2019 @02:26PM (#59052490)

      > For Pete's sake, who disables the swap space

      Do you even understand what swap is used for and why it was designed???

      To SIMULATE more RAM than physically exists. If you have enough RAM you won't even be swapping in the FIRST place.

      > I've never done it, but I'm just guessing

      If you don't know what the fuck you are talking about then why are you giving advice???

      My Win 7 dev/gaming box has 32GB RAM and has been running with no swap for 10 years. Why? Because:

      1. I **don't** want Windows shortening the life of my mechanical drive or SSDs, and
      2. I want the app to crash instantly if there isn't enough free memory, instead of Windows swapping for 5 minutes while it figures out "Gee, there isn't enough of a working set of memory -- let's constantly thrash while I evict and reload from/to disk/memory."

      Just because YOU require swap doesn't imply everyone has the same use case.

      > Add more memory,

      And how exactly do you do that on embedded devices or where the RAM is physically soldered in??

      Found the noob who thinks "just throw more money/hardware at the problem" is the only solution. **Facepalm**

      Instead of treating the SYMPTOM, how about addressing the CAUSE of WHY you need to swap in the first place.


    • For Pete's sake, who disables the swap space?

      Because they're testing the kernel's memory model. If swap were on, then they'd have to get through a lot of thrashing, which would start bringing disk IO into the picture. They're specifically trying to test the kernel's balancing model for active and inactive pages, which hasn't been the greatest. Long story short, they did this to test something very specific in the kernel.

      At least your Linux box stays running until the program soaking up all your RAM hits malloc and gets nothing back and crashes the process (assuming somebody is actually *checking* that malloc was successful...)

      I give a pass to legacy code, and to very, very specific exceptions as well, but no one should be using malloc anymore.

    • Always disable swap. Helps to detect if there's stuff running that should not be. Also saves wear and tear on both hard-drive and SSD storage.

      Since the problem is pretty much down to web browsers and bloated desktop environments, getting rid of those two should mean never needing swap.

      It would be good for cpu makers to put out chips that don't have the capability to use swap space as virtual ram. They'd probably be more energy efficient, and save a bit of die space, while allowing for faster overall performan

    • For Pete's sake, who disables the swap space?

      It doesn't matter, because the problem exists even with swap on. The point in disabling swap in this case is simply to make it easier to replicate the scenario where memory pressure is high. You can encounter the exact same problem with 32GB of RAM and 2TB of swap space -- it just takes forever to load enough data to get the level of memory pressure needed to exemplify the problem.

      Yaz

  • Do you want a system that's zippy until it's almost 100% resource-constrained, then it falls off a cliff and becomes barely usable or worse?

    Or do you want one that's zippy until it's, say, 80% resource-constrained then it slowly gets worse and worse until it's barely-usable or worse when it's almost 100% resource-constrained?

    That's actually a legitimate question with no one right answer: For some use cases, you WANT "give me all you got until you ain't got no more" and for others, you WANT graceful degrada

  • The issue here isn't that the kernel starts swapping like mad to keep up with memory requests that exceed the amount of system memory; the real issue is that you have an application that behaves badly. The application in question, of course, is none other than a browser, which in recent years has morphed into a miniature operating system of its own. The proper solution, of course, is to build pages on demand rather than trying to cache them all in RAM.

    This is of course a complete non-story.

    • by slack_justyb ( 862874 ) on Tuesday August 06, 2019 @02:40PM (#59052592)

      The application in question of course is no other than a browser

      That's why the person decided to use that as part of their test. The issue is to test the kernel's ability to handle active/inactive page balancing, and a good way to quickly get there is to run something that does exactly what you indicate a browser is doing. A web browser (Chrome/Firefox) was specifically selected to get the person there reasonably quickly. It's not that they wanted to browse the web; that's beside the point. The point was to fill up the kernel's page lists as fast as possible and then see what the kernel would do in that situation, hence the boot setting of 4GB of memory and swap off.

      The kernel's memory management model isn't exactly the world's greatest and they're testing ways and methods to make it better. So any piece of software that would create loads of pages that the memory model would need to keep up with is a good piece of software to use to put some stress on the system. This is the LKML, not 1800-HELP-MY-KERNEL. In short, they're specifically selecting things to test something. Not moaning about how their web browser can't browse Facebook.

      The proper solution of course is to build pages on demand rather than trying to cache them all in RAM

      And if it did do that, then it wouldn't be very useful for them to use such a browser in this kind of testing, and thus they would have selected some other piece of software that would create a ton of memory pages. The issue being discussed isn't browsers; why would kernel programmers care a flip about a browser?

  • by Larsen E Whipsnade ( 4686581 ) on Tuesday August 06, 2019 @02:04PM (#59052220)
    (which I've seen for years on Debian)

    In /etc/security/limits.conf:
    myself hard nproc 500
    myself hard rss 1222000

    Adjust to taste. This way a tab will crash and that's all.

    But really, this should be fixed at system level, perhaps in the kernel. We shouldn't have to fiddle with settings. Ideally, nothing in user space should have the ability to render the entire system unresponsive. That's denial of service.
    • by Ken_g6 ( 775014 )

      Nice try.

      man limits.conf | grep -A2 rss
                            rss
                                    maximum resident set size (KB) (Ignored in Linux 2.4.30 and
                                    higher)
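      Since the rss limit is ignored on modern kernels, one alternative is a cgroup memory cap on the browser; this sketch assumes systemd with cgroup v2, which isn't something suggested in the thread, and the 2G value is illustrative:

      # Launch the browser in a transient scope with a hard memory ceiling; when the
      # limit is hit, reclaim and the OOM killer act inside that scope instead of
      # taking the whole session down with it:
      systemd-run --user --scope -p MemoryMax=2G firefox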

  • by slack_justyb ( 862874 ) on Tuesday August 06, 2019 @02:26PM (#59052480)

    I'm sure many will get on here and discuss the bloat of recent GUI toolkits in the Linux world. However, what Artem S Tashkinov is speaking about isn't, "Geez these toolkits are shit." This has been an issue with Linux for quite some time now. The issue is memory management and more specifically, active/inactive list balancing. There's an intro on the matter over at LWN here [lwn.net] that can sort of get you up to speed on the situation. It's from 2012, so things have been implemented to some extent and ideas have changed about how to truly "solve this" since then. So a good history lesson.

    The whole 4GB and swap off is a setup to see how long it takes before the kernel doesn't know what the eff to do with the increasing number of pages that are active and inactive. Using the browser and GUI is a good and quick way to make the kernel go into stress. Obviously, if swap was on, then the kernel could just simply use that and thrashing would ensue. That would kind of defeat the entire test though. 4GB is there to ensure the list of active and inactive is good and long, something the kernel has had issues with.

    The current idea for a solution is to use the kernel's PSI (Pressure Stall Information) model, which is exposed via /proc/pressure; that information would be used by the kernel to make better decisions, as well as give userland information it could potentially use. There's already information in the kernel about refaults, which Johannes Weiner, who has worked on this for some time now, speaks about here on the LKML. [lkml.org] However, PSI is currently optional within the kernel (it defaults to N in the config), and the kernel itself doesn't contain the framework to make that data usable (that is, the kernel can totally see that information but doesn't do anything with it in the memory management model).
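    For those who want to look at the PSI data being discussed, a minimal sketch (it assumes a kernel built with CONFIG_PSI=y; the 25% threshold is an arbitrary illustration):

    # "some" = at least one task stalled on memory, "full" = all non-idle tasks stalled,
    # averaged over 10/60/300-second windows:
    cat /proc/pressure/memory

    # Crude userland watchdog: warn when the 10-second "some" average tops 25%:
    while sleep 5; do
        avg10=$(awk '/^some/ { sub(/avg10=/, "", $2); print $2 }' /proc/pressure/memory)
        awk -v a="$avg10" 'BEGIN { exit !(a > 25) }' && echo "memory pressure high: ${avg10}%"
    done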

    Other vendors of Linux have written userland software that reads /proc/pressure and does something useful with that information. Android has the lmkd daemon. Facebook has an open source daemon called oomd [fb.com]. The idea would be to do something similar to these two examples, but in the kernel.

    And with that, you are now caught up on what is actually being tested here in the kernel. You can keep having your "GUI is bloated shit" conversations, but at least now you all know what the actual topic being discussed here is.

  • Using this machine: https://bryanquigley.com/posts... [bryanquigley.com]

    With zram enabled (it's the only "swap") and a full Gnome-based Ubuntu desktop (Wayland) with Firefox (7 tabs) and Chrome (3 tabs) running.

    Maybe just enable a small zram by default?
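    For anyone who wants to try that, a rough sketch of setting up a small zram swap device by hand (the size and priority are illustrative; many distributions ship a package such as zram-config or zram-generator that does the same thing):

    modprobe zram                       # load the compressed-RAM block driver
    zramctl --find --size 1G            # allocate a device, e.g. /dev/zram0
    mkswap /dev/zram0
    swapon --priority 100 /dev/zram0    # prefer it over any disk-backed swap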

    • by Ken_g6 ( 775014 )

      Interesting! It looks like if you insist on entirely disabling swap like in TFA, zram is the way to go. On the other hand, if you work in the real world and leave swap on, zswap [wikipedia.org] might be better.
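      A sketch of the zswap route, assuming a kernel built with zswap support and an existing disk-backed swap device (the pool percentage is illustrative):

      echo 1 > /sys/module/zswap/parameters/enabled             # turn zswap on for the running kernel (as root)
      echo 20 > /sys/module/zswap/parameters/max_pool_percent   # cap the compressed pool at 20% of RAM
      # Or enable it permanently via the kernel command line: zswap.enabled=1 zswap.max_pool_percent=20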

  • I've been more than happy to put up with any problems encountered using Linux. The price of M$ software is just too high.

  • In 2039 this article will be repeated about how their computer with “only” 64GB ram is slowing to a crawl. Electron and Chromium using programmers get employed in top jobs while programmers of real languages are on the dole.
  • Electron app developers said 64GB is more memory than anyone will ever need on a computer!
  • That's why I've learned to use ionice -c3 on every disk-intensive process where possible.
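    For reference, a couple of usage examples (the tar job and the PID are placeholders; the idle class only has an effect with an I/O scheduler that honors priorities, such as BFQ):

    ionice -c3 tar czf /tmp/home-backup.tar.gz /home   # start a disk-heavy job in the idle I/O class
    ionice -c3 -p 12345                                # move an already-running PID into the idle class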

  • by geekmux ( 1040042 ) on Tuesday August 06, 2019 @03:00PM (#59052758)

    The Year of the Linux Desktop, actually happened in 2015.

    The reason we didn't notice, is obvious now.

    It's still buffering...

  • by chrylis ( 262281 ) on Tuesday August 06, 2019 @06:12PM (#59054052)

    Addressing a number of comments in this thread, as an emergency workaround, use SysRq + F. This invokes the OOM killer explicitly long before the kernel would have done it itself and is often preferable to doing a hard reboot.
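    For anyone trying this, a minimal sketch (a sysrq value of 1 enables all functions; distributions often ship a more restrictive bitmask by default):

    echo 1 > /proc/sys/kernel/sysrq     # allow all SysRq functions (as root)
    # From the keyboard: Alt + SysRq + F. Without a working keyboard (e.g. over SSH):
    echo f > /proc/sysrq-trigger        # invoke the OOM killer once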
