On 09/08/2019 09:57, Michal Hocko wrote:
> We already do have a reserve (min_free_kbytes). That gives kswapd some room to perform reclaim in the background without obvious latencies to allocating tasks (well, the CPU is still being used, so there is still some effect).
I tried this option in the past. Unfortunately, it didn't prevent freezes. My understanding is that this option reserves some amount of memory so that it is not swapped out, but it does not prevent the kernel from evicting all pages from the cache when more memory is needed.
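For reference, the reserve and the per-zone watermarks derived from it can be inspected from userspace. Below is a minimal sketch, assuming a Linux /proc filesystem: vm.min_free_kbytes feeds the per-zone "min" watermark, kswapd wakes below "low" and sleeps again at "high", and allocations that fall below "min" enter direct reclaim.

    #!/usr/bin/env python3
    # Minimal sketch: print vm.min_free_kbytes and the per-zone
    # watermarks kswapd balances against. Assumes a Linux /proc
    # filesystem; /proc/zoneinfo values are in 4 KiB pages.

    def read_min_free_kbytes():
        with open("/proc/sys/vm/min_free_kbytes") as f:
            return int(f.read())

    def read_zone_watermarks():
        zones, zone = {}, None
        with open("/proc/zoneinfo") as f:
            for line in f:
                words = line.split()
                if line.startswith("Node"):
                    zone = " ".join(words)  # e.g. "Node 0, zone Normal"
                    zones[zone] = {}
                elif words and words[0] in ("min", "low", "high"):
                    zones[zone][words[0]] = int(words[1])
        return zones

    print("vm.min_free_kbytes =", read_min_free_kbytes())
    for zone, wm in read_zone_watermarks().items():
        print(zone, wm)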
> Kswapd tries to keep a balance and keep free memory low, but still with some room to satisfy an immediate memory demand. Once kswapd doesn't catch up with the memory demand we dive into direct reclaim, and that is where people usually see latencies coming from.
Reclaiming memory is fine, of course, but not all the way down to 0 caches. No caches means all executable pages and ro pages (e.g. fonts) are evicted from memory and have to be constantly reloaded on every user action, all while competing with the tasks that are using up all the memory. This happens with or without swap, although swap does spread this issue out in time a bit.
> The main problem here is that it is hard to tell from a single allocation latency that we have a bigger problem. As already said, the usual thrashing scenario doesn't show a problem during reclaim because pages can be freed up very efficiently. The problem is that they are refaulted very quickly, so we are effectively rotating the working set like crazy. Compare that to a normal used-once streaming IO workload, which generates a lot of page cache that can be recycled at a similar pace, but whose working set doesn't get freed. Free memory figures will look very similar in both cases.
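That difference is observable from userspace: under thrashing, most reclaimed pages come straight back as refaults, while streaming IO reclaims at a similar rate with few refaults. A minimal sketch, assuming a kernel that exposes "workingset_refault" in /proc/vmstat (newer kernels split it into _anon/_file variants):

    #!/usr/bin/env python3
    # Minimal sketch: sample /proc/vmstat twice and compare reclaim
    # (pgsteal_*) against refault (workingset_refault*) rates.
    import time

    def vmstat():
        with open("/proc/vmstat") as f:
            return {k: int(v) for k, v in (line.split() for line in f)}

    before = vmstat()
    time.sleep(10)
    after = vmstat()

    stolen = sum(after[k] - before[k] for k in after
                 if k.startswith("pgsteal"))
    refaults = sum(after[k] - before[k] for k in after
                   if k.startswith("workingset_refault"))

    # Heavy reclaim with few refaults looks like used-once streaming IO;
    # heavy reclaim where the pages come straight back is thrashing.
    print(f"pages reclaimed: {stolen}, refaulted: {refaults} (over 10s)")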
Thank you for the explanation. It is indeed a difficult problem - some cached pages (streaming IO) will likely not be needed again and should be discarded asap, while others (like mmapped executable/ro pages of UI utilities) will cause thrashing when evicted under high memory pressure. Another aspect is that PSI is probably not the best measure for detecting imminent thrashing. However, if it can at least detect a freeze that has already occurred and force the OOM killer, that is still a lot better than a dead system, which is the current user experience.
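A minimal sketch of that fallback - poll PSI and force the OOM killer once memory stalls dominate - assuming CONFIG_PSI (kernel >= 4.20), root privileges, sysrq enabled, and a purely illustrative threshold:

    #!/usr/bin/env python3
    # Minimal sketch: force the OOM killer when PSI reports the system
    # is stalled on memory almost all of the time. Assumes CONFIG_PSI
    # (kernel >= 4.20), root, and sysrq enabled; the threshold is an
    # arbitrary example, not a tuned value.
    import time

    FULL_AVG10_THRESHOLD = 90.0  # % of time all tasks stalled on memory

    def memory_full_avg10():
        with open("/proc/pressure/memory") as f:
            for line in f:  # "full avg10=... avg60=... avg300=... total=..."
                if line.startswith("full"):
                    return float(line.split()[1].split("=")[1])
        return 0.0

    while True:
        if memory_full_avg10() > FULL_AVG10_THRESHOLD:
            with open("/proc/sysrq-trigger", "w") as f:
                f.write("f")  # SysRq 'f': invoke the OOM killer once
            time.sleep(60)    # give the kill time to take effect
        time.sleep(1)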
> Good that earlyoom works for you.
I am giving it as an example of a heuristic that seems to work very well for me. Something to look into. And yes, I wouldn't mind having such a mechanism built into the kernel.
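For context, the heuristic earlyoom implements is roughly: act when both available memory and free swap fall below a threshold, and kill the process the kernel itself scores as the worst offender. A rough sketch (the 10% threshold mirrors earlyoom's default; the rest is simplified):

    #!/usr/bin/env python3
    # Rough sketch of an earlyoom-style heuristic: when both available
    # RAM and free swap drop below a threshold, kill the process with
    # the highest kernel-computed oom_score. Simplified; the real
    # earlyoom also escalates SIGTERM to SIGKILL, rate-limits, etc.
    import os, signal, time

    THRESHOLD_PCT = 10  # earlyoom's default threshold

    def meminfo():
        with open("/proc/meminfo") as f:
            return {line.split(":")[0]: int(line.split()[1]) for line in f}

    def worst_offender():
        best_pid, best_score = None, -1
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/oom_score") as f:
                    score = int(f.read())
            except OSError:
                continue  # process exited, or no permission
            if score > best_score:
                best_pid, best_score = int(pid), score
        return best_pid

    while True:
        m = meminfo()
        mem_pct = 100 * m["MemAvailable"] / m["MemTotal"]
        swap_pct = 100 * m["SwapFree"] / m["SwapTotal"] if m["SwapTotal"] else 0
        if mem_pct < THRESHOLD_PCT and swap_pct < THRESHOLD_PCT:
            pid = worst_offender()
            if pid:
                os.kill(pid, signal.SIGTERM)
            time.sleep(10)
        time.sleep(1)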
> All I am saying is that this is not a generally applicable heuristic, because we do care about a larger variety of workloads. I should probably emphasise that the OOM killer is there as a _last resort_ handbrake for when something goes terribly wrong. It operates at times when any user intervention would be really hard, because the system lacks the resources for the user to do anything.
It is indeed a last resort solution - without it the system is unusable. Still, accuracy matters, because killing the wrong task does not fix the problem (the task hogging memory is still running) and may break the system anyway if something important is killed instead.
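One existing knob for improving that accuracy is /proc/<pid>/oom_score_adj, which biases the kernel's victim selection. A minimal sketch (the PIDs and roles below are made-up examples):

    #!/usr/bin/env python3
    # Minimal sketch: bias OOM victim selection via oom_score_adj
    # (-1000 exempts a process entirely, +1000 makes it the preferred
    # victim). The PIDs below are made-up examples.
    def set_oom_score_adj(pid, adj):
        with open(f"/proc/{pid}/oom_score_adj", "w") as f:
            f.write(str(adj))

    set_oom_score_adj(1234, -1000)  # e.g. the display server: never kill
    set_oom_score_adj(5678, 500)    # e.g. a known memory hog: prefer it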
[...]
> This is useful feedback! What was your workload? Which kernel version?
I tested it by running a Python script that processes a large amount of data in memory (it needs around 15GB of RAM). I normally run 2 instances of that script in parallel, but for testing I started 4 of them. I sometimes experience the same issue when using multiple ordinary memory-intensive desktop applications in the manner described in the first post, but that is harder to reproduce because of the user input needed.
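Not the original script, but a sketch of that kind of reproducer - allocate roughly 15GB, fault every page in, and keep the working set hot (running several instances in parallel overcommits the machine):

    #!/usr/bin/env python3
    # Sketch of the kind of reproducer described above: allocate about
    # 15 GB, fault every page in, then keep touching it so the working
    # set cannot be dropped. Sizes are illustrative, not the original.
    CHUNK = 100 * 1024 * 1024   # 100 MiB per allocation
    TARGET = 15 * 1024 ** 3     # ~15 GB resident
    PAGE = 4096

    chunks, total = [], 0
    while total < TARGET:
        buf = bytearray(CHUNK)
        for i in range(0, len(buf), PAGE):
            buf[i] = 1          # fault each page in
        chunks.append(buf)
        total += CHUNK

    while True:                 # keep the pages hot
        for buf in chunks:
            for i in range(0, len(buf), PAGE):
                buf[i] ^= 1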
[ 0.000000] Linux version 5.0.0-21-generic (buildd@lgw01-amd64-036) (gcc version 8.3.0 (Ubuntu 8.3.0-6ubuntu1)) #22-Ubuntu SMP Tue Jul 2 13:27:33 UTC 2019 (Ubuntu 5.0.0-21.22-generic 5.0.15)
AMD CPU with 4 cores, 8 threads. AMDGPU graphics stack.

Best regards,
ndrw