I remember a while back we were talking about how Linux swaps terribly (I think it was @Thanatermesis who called the code garbage).
I found this old message on the BGLUG mailing list and thought it might be interesting. NOTE: according to the emailer, the observations here were made on kernel 4.6 (it's a message from July/August 2019). It may not apply to later versions, and I can't really test right now (very, very busy), so let me know how it works out for you.
One of the problems I've observed with recent Linux kernels is the
difficulty the system has allocating large amounts of memory when
memory usage is already high. The system starts reading swap like crazy
(on an SSD, swap reads at over 400MB/s), and the system becomes unusable.
In a test on an 8GB system with firefox running several open tabs,
starting a 4GB virtual machine made swap usage grow to 155MB, but then
the system got stuck reading from the swap drive at 400MB/s for over 2
hours... (I don't know how long this would have continued; I force-quit
firefox to free RAM and liberate the system. Swap never grew past 155MB
during this time.)
This problem only shows up if you're using more memory than you have
RAM, and many people now probably just buy more memory if they encounter
this kind of situation. I, however, do not feel that a system becoming
unusable for going a few hundred MB over its RAM makes sense on modern
demand-paged systems.
At first, I experimented with the /proc/sys/vm/swappiness variable.
Setting this to 100 helped prevent the situation from happening in my
regular workload (before I introduced a virtualized Windows 10). This
would force the kernel to keep the cache size at about 1.5GB, which left
enough available RAM for new allocations by all my programs.
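For reference, swappiness can be inspected and adjusted through /proc (or the sysctl command); here's a minimal sketch in Python, assuming a Linux system (writing the value requires root):

```python
from pathlib import Path

SWAPPINESS = Path("/proc/sys/vm/swappiness")

# Read the current value (most distros default to 60).
current = int(SWAPPINESS.read_text().strip())
print(f"vm.swappiness = {current}")

# Raising it to 100 needs root; uncomment to apply for the current boot:
# SWAPPINESS.write_text("100\n")
# To persist across reboots, put "vm.swappiness = 100" in /etc/sysctl.conf
# (or a file under /etc/sysctl.d/) and run `sysctl --system`.
```

The same can be done from a shell with `sysctl -w vm.swappiness=100`.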
However, it turns out that the variable I needed to fix was
Default value here is 10, and it goes up to 1000. After some
experimentation, I figured out that 500 is the value I need to keep my
system from this bad behaviour. Now if I try to allocate a large chunk
of memory, the kernel is able to swap out enough on the fly for the
system to stay functional and stable. (I was actually able to test with
dozens of open firefox YouTube instances and a 4GB virtual machine;
total memory usage climbed to well over 11GB on an 8GB system.
The swapping was crazy, and keeping it going forever would probably
be death to an SSD, but unlike before, the system did not become
unusable, stuck in a perpetual swap-reading loop.)
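If you want to reproduce this kind of pressure yourself, one way (a hypothetical test harness, not from the original message) is to allocate a buffer and touch every page, so the kernel has to actually back it with RAM or swap rather than leaving it as untouched zero pages. A Python sketch:

```python
import mmap

PAGE = mmap.PAGESIZE  # typically 4096 bytes

def allocate_and_touch(n_bytes: int) -> bytearray:
    """Allocate n_bytes and write one byte per page so every page is committed."""
    buf = bytearray(n_bytes)
    for off in range(0, n_bytes, PAGE):
        buf[off] = 1  # touching the page forces the kernel to commit it
    return buf

if __name__ == "__main__":
    # For a real pressure test, pass something larger than your free RAM
    # (e.g. 4 * 1024**3); here we demonstrate with a harmless 64MB.
    chunk = allocate_and_touch(64 * 1024 * 1024)
    print(f"touched {len(chunk) // PAGE} pages")
```

Run with a size above free RAM while watching `vmstat 1`: with the tunables at their defaults you should see the sustained swap-in storm described above, and with them raised, steady swap-out instead.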