On Thu 23-06-22 10:26:11, Yosry Ahmed wrote:
> On Thu, Jun 23, 2022 at 10:04 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> >
> > On Thu 23-06-22 09:42:43, Shakeel Butt wrote:
> > > On Thu, Jun 23, 2022 at 9:37 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> > > >
> > > > On Thu 23-06-22 09:22:35, Yosry Ahmed wrote:
> > > > > On Thu, Jun 23, 2022 at 2:43 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> > > > > >
> > > > > > On Thu 23-06-22 01:35:59, Yosry Ahmed wrote:
> > > > [...]
> > > > > > > In our internal version of memory.reclaim that we recently upstreamed,
> > > > > > > we do not account vmpressure during proactive reclaim (similar to how
> > > > > > > psi is handled upstream). We want to make sure this behavior also
> > > > > > > exists in the upstream version so that consolidating them does not
> > > > > > > break our users who rely on vmpressure and will start seeing increased
> > > > > > > pressure due to proactive reclaim.
> > > > > >
> > > > > > These are good reasons to have this patch in your tree. But why is this
> > > > > > patch beneficial for the upstream kernel? It clearly adds some code and
> > > > > > some special casing which will add a maintenance overhead.
> > > > >
> > > > > It is not just Google; any existing vmpressure users will start seeing
> > > > > false pressure notifications with memory.reclaim. The main goal of the
> > > > > patch is to make sure memory.reclaim does not break pre-existing users
> > > > > of vmpressure, and doing it in a way that is consistent with psi makes
> > > > > sense.
> > > >
> > > > memory.reclaim is a v2-only feature which doesn't have a vmpressure
> > > > interface. So I do not see how pre-existing users of the upstream kernel
> > > > can see any breakage.
> > > >
> > > Please note that vmpressure is still being used in v2 by the
> > > networking layer (see mem_cgroup_under_socket_pressure()) for
> > > detecting memory pressure.
> >
> > I have missed this. It is hidden quite well. I thought that v2 was
> > completely vmpressure free. I have to admit that the effect of
> > mem_cgroup_under_socket_pressure is not really clear to me, not to
> > mention whether it should or shouldn't be triggered for user-triggered
> > memory reclaim. So this would really need some explanation.
>
> vmpressure was tied into socket pressure by 8e8ae645249b ("mm:
> memcontrol: hook up vmpressure to socket pressure"). A quick look at
> the commit log and the code suggests that this is used all over the
> socket and tcp code to throttle the memory consumption of the
> networking layer if we are under pressure.
>
> However, for proactive reclaim like memory.reclaim, the target is to
> probe the memcg for cold memory. Reclaiming such memory should not
> have a visible effect on the workload performance. I don't think that
> any network throttling side effects are correct here.

Please describe the user visible effects of this change. IIUC this is
changing the vmpressure semantics for pre-existing users (e.g. v1 when
setting the hard limit) and it really should be explained why this is
good for them after all these years. I do not see any actual bug being
described explicitly, so please make sure this is all properly
documented.

-- 
Michal Hocko
SUSE Labs
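
As a purely illustrative sketch of the semantics under discussion (not
the actual kernel code; every name below is invented for the example),
a small userspace model shows what "do not account vmpressure during
proactive reclaim" means in practice: reclaim triggered by
memory.reclaim skips the vmpressure bookkeeping, so it never raises the
socket-pressure state, while limit-driven reclaim still feeds it. This
mirrors how PSI is already skipped for proactive reclaim upstream.

/* Toy userspace model of the proposed behaviour; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct memcg_model {
	unsigned long scanned;
	unsigned long reclaimed;
	/* models the state consulted by mem_cgroup_under_socket_pressure() */
	bool socket_pressure;
};

/* Record reclaim activity; poor reclaim efficiency raises socket pressure. */
static void vmpressure_account(struct memcg_model *memcg,
			       unsigned long scanned, unsigned long reclaimed)
{
	memcg->scanned += scanned;
	memcg->reclaimed += reclaimed;
	if (reclaimed * 2 < scanned)
		memcg->socket_pressure = true;
}

/* Proposed behaviour: proactive (user-requested) reclaim does not feed
 * vmpressure, mirroring how PSI is already skipped for memory.reclaim. */
static void reclaim(struct memcg_model *memcg, unsigned long scanned,
		    unsigned long reclaimed, bool proactive)
{
	if (!proactive)
		vmpressure_account(memcg, scanned, reclaimed);
}

int main(void)
{
	struct memcg_model memcg = { 0 };

	reclaim(&memcg, 1000, 100, true);	/* memory.reclaim probe */
	printf("after proactive reclaim: socket_pressure=%d\n",
	       memcg.socket_pressure);

	reclaim(&memcg, 1000, 100, false);	/* limit-driven reclaim */
	printf("after limit reclaim:     socket_pressure=%d\n",
	       memcg.socket_pressure);
	return 0;
}

Compiled as plain C, this prints socket_pressure=0 after the proactive
pass and socket_pressure=1 after the limit-driven pass, which is the
behaviour difference the patch is arguing for.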