On Fri, Apr 17, 2020 at 9:03 AM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>
> On Fri 17-04-20 22:43:43, Alex Shi wrote:
> > This patch folds the MEMCG_SWAP feature into the kernel as a default
> > feature. That requires a short memcg id for each page. As Johannes
> > mentioned:
> >
> > "the overhead of tracking is tiny - 512k per G of swap (0.04%)."
> >
> > So every swapped-out page can be tracked to its memcg id.
>
> I am perfectly OK with dropping the CONFIG_MEMCG_SWAP. The code that is
> guarded by it is negligible and the resulting code is much easier to
> read so no objection on that front. I just do not really see any real
> reason to flip the default for cgroup v1. Why do we want/need that?
>

Yes, the changelog is lacking the motivation for this change. This was
proposed by Johannes and I was actually expecting the patch from him.
The motivation is to make things simpler for per-memcg LRU locking and
workingset for anon memory (Johannes has described these really well,
let me find the email). If we keep the differentiation between cgroup
v1 and v2, then there is actually no point to this cleanup, as per-memcg
LRU locking and anon workingset still have to handle the
!do_swap_account case.

Shakeel
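
[Editor's note: a quick back-of-the-envelope check of the overhead figure
Johannes cites. This is a minimal sketch, assuming 4 KiB pages and a
2-byte id per swap slot (mm/swap_cgroup.c stores an unsigned short per
entry); it is an illustration, not code from the patch.]

```c
/* Illustration only: rough overhead of per-slot memcg id tracking.
 * Assumes 4 KiB pages and a 2-byte id per swap slot. */
#include <stdio.h>

int main(void)
{
	const unsigned long swap_bytes = 1UL << 30;               /* 1 GiB of swap */
	const unsigned long page_size  = 4096;                    /* 4 KiB pages */
	const unsigned long id_size    = sizeof(unsigned short);  /* 2-byte id */

	unsigned long slots    = swap_bytes / page_size;          /* 262144 slots */
	unsigned long overhead = slots * id_size;                 /* 512 KiB */

	/* Prints ~512 KiB per GiB of swap, i.e. roughly the 0.04-0.05%
	 * figure quoted in the thread. */
	printf("tracking overhead: %lu KiB per GiB of swap (%.2f%%)\n",
	       overhead >> 10, 100.0 * overhead / swap_bytes);
	return 0;
}
```

A 2-byte id is sufficient here because memcg ids themselves are stored as
an unsigned short in the kernel, so the per-slot record never needs to be
wider than that.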