On Tue, 12 May 2020 09:26:34 +0200 Michal Hocko wrote:
> On Mon 11-05-20 15:55:16, Jakub Kicinski wrote:
> > Use swap.high when deciding if swap is full.
>
> Please be more specific why.

How about:

    Use swap.high when deciding if swap is full to influence ongoing
    swap reclaim in a best effort manner.

> > Perform reclaim and count memory over high events.
>
> Please expand on this and explain how this is working and why the
> semantic is subtly different from MEMCG_HIGH. I suspect the reason
> is that there is no reclaim for the swap so you are only emitting an
> event on the memcg which is actually throttled. This is in line with
> memory.high but the difference is that we do reclaim each memcg subtree
> in the high limit excess. That means that the counter tells us how many
> times the specific memcg was in excess which would be impossible with
> your implementation.

Right, with memory all cgroups over high get penalized with the extra
reclaim work. For swap we just have the delay, so the event is
associated with the worst offender; anything lower didn't really
matter. But it's easy enough to change if you prefer. Otherwise I'll
just add this to the commit message:

    Count swap over high events. Note that unlike memory over high
    events, we only count them for the worst offender. This is because
    the delay penalties for both swap and memory over high are not
    cumulative, i.e. we use the max delay.

> I would also suggest to explain or ideally even separate the swap
> penalty scaling logic to a separate patch. What kind of data is it
> based on?

It's a hard thing to get production data for since, as we mentioned,
we don't expect the limit to be hit. It was more a process of
experimentation and finding a gradual slope that "felt right"...
Is there a more scientific process we can follow here?

We want the delay to be small for the first few pages over high and
then grow to make sure we stop the task from going too far over. The
square function works pretty well IMHO.