On Wed, Feb 19, 2020 at 11:13:21AM -0800, Dave Hansen wrote:
> On 2/19/20 10:25 AM, Sultan Alsawaf wrote:
> > Keeping kswapd running when all the failed allocations that invoked it
> > are satisfied incurs a high overhead due to unnecessary page eviction
> > and writeback, as well as spurious VM pressure events to various
> > registered shrinkers. When kswapd doesn't need to work to make an
> > allocation succeed anymore, stop it prematurely to save resources.
>
> But kswapd isn't just to provide memory to waiters. It also serves to
> get free memory back up to the high watermark. This seems like it might
> result in more frequent allocation stalls and kswapd wakeups, which
> consumes extra resources.
>
> I guess I'd wonder what positive effects you have observed as a result
> of this patch and whether you've gone looking for any negative effects.

This patch essentially stops kswapd from going overboard after a failed
allocation fires it up. Otherwise, when memory pressure is really high,
kswapd just chomps through CPU time freeing pages nonstop when it isn't
needed.

On a constrained system I tested (mem=2G), this patch had the positive
effect of improving overall responsiveness under high memory pressure.

On systems with more memory that I tested (>=4G), kswapd becomes more
expensive to run at its higher scan depths, so stopping kswapd prematurely
when there aren't any memory allocations waiting for it prevents it from
reaching the *really* expensive scan depths and burning through even more
resources.

Combine a large amount of memory with a slow CPU, and the current
problematic behavior of kswapd under high memory pressure really shows.
My personal test scenario for this was an arm64 CPU with a variable
amount of memory (up to 4G RAM + 2G swap).

Sultan
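
For reference, here is a rough sketch of the idea described above. This is
illustrative only, not the code from the patch; the counter and helper
names below are made up:

#include <linux/atomic.h>
#include <linux/types.h>

/*
 * Hypothetical count of failed allocations that woke kswapd and are
 * still waiting for their allocation to succeed.
 */
static atomic_long_t kswapd_waiters = ATOMIC_LONG_INIT(0);

/* Allocation slowpath: register interest around waking up kswapd. */
static void kswapd_waiter_enter(void)
{
	atomic_long_inc(&kswapd_waiters);
}

/* Allocation slowpath: the allocation succeeded (or finally failed). */
static void kswapd_waiter_exit(void)
{
	atomic_long_dec(&kswapd_waiters);
}

/*
 * Checked by kswapd between reclaim iterations: if nobody is waiting on
 * us anymore, stop early instead of reclaiming all the way up to the
 * high watermark.
 */
static bool kswapd_should_stop_early(void)
{
	return atomic_long_read(&kswapd_waiters) == 0;
}

With something along those lines, kswapd behaves as it does today while at
least one allocation is still stuck, and only backs off once every
allocation it was woken for has been satisfied.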