On Mon, Oct 07, 2019 at 04:57:15PM +0200, Vlastimil Babka wrote:
> On 10/5/19 12:11 AM, Roman Gushchin wrote:
> >
> > One possible approach to this problem is to switch inodes associated
> > with dying wbs to the root wb. Switching is a best-effort operation
> > which can fail silently, so unfortunately we can't run once over a
> > list of associated inodes (even if we had such a list). So we really
> > have to scan all inodes.
> >
> > In the proposed patch I schedule a work on each memory cgroup
> > deletion, which is probably too often. Alternatively, we can do it
> > periodically under some conditions (e.g. when the number of dying
> > memory cgroups is larger than X). So it's basically a gc run.
> >
> > I wonder if there are any better ideas?
>
> I don't know this area, so this will likely be easily shown impossible,
> but perhaps it's useful to do that explicitly.
>
> What if instead of reparenting each inode, we "reparent" the wb?

That seems like a questionable idea, at least at the offlining moment.
Dirty memory left behind by a cgroup should be written back using the
corresponding limits, and reparenting can easily break them. Also,
it's not clear to me how we'd reparent the dirty stats.

> But I see it's not a small object either. Could we then add some bias
> to the inode switching conditions so that anyone else touching the
> inode from a dead wb would get it immediately?

You mean touching it for writing? That's doable, but it doesn't cover
the case where there are only readers, and that case is quite common.

> And what would happen if we reused the reparented wbs for newly
> created cgroups? Would it "punish" them for the old inodes?

No idea, to be honest.

Thank you!
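
P.S. To make the gc idea quoted above a bit more concrete, here is a
rough sketch of the kind of worker I have in mind. It's only a sketch,
not the actual patch: the names (offline_cgwbs, cgwb_track_offline,
cleanup_offline_cgwbs_workfn, cleanup_offline_cgwb, the offline_node
field, nr_dying_cgwbs, DYING_CGWBS_THRESHOLD) are all made up for
illustration, the threshold is arbitrary, and refcounting against wb
release is elided.

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/atomic.h>
#include <linux/backing-dev-defs.h>

#define DYING_CGWBS_THRESHOLD	128	/* arbitrary gc trigger point */

static void cleanup_offline_cgwbs_workfn(struct work_struct *work);

static LIST_HEAD(offline_cgwbs);
static DEFINE_SPINLOCK(offline_cgwbs_lock);
static DECLARE_WORK(cleanup_offline_cgwbs_work, cleanup_offline_cgwbs_workfn);
static atomic_t nr_dying_cgwbs = ATOMIC_INIT(0);

/* Called when a memcg writeback structure goes offline. */
static void cgwb_track_offline(struct bdi_writeback *wb)
{
	spin_lock_irq(&offline_cgwbs_lock);
	list_add(&wb->offline_node, &offline_cgwbs);	/* hypothetical field */
	spin_unlock_irq(&offline_cgwbs_lock);

	/*
	 * Gate the gc on a threshold instead of scheduling it on every
	 * single memory cgroup deletion.
	 */
	if (atomic_inc_return(&nr_dying_cgwbs) >= DYING_CGWBS_THRESHOLD)
		schedule_work(&cleanup_offline_cgwbs_work);
}

static void cleanup_offline_cgwbs_workfn(struct work_struct *work)
{
	struct bdi_writeback *wb;

	spin_lock_irq(&offline_cgwbs_lock);
	while (!list_empty(&offline_cgwbs)) {
		wb = list_first_entry(&offline_cgwbs, struct bdi_writeback,
				      offline_node);
		list_del_init(&wb->offline_node);
		atomic_dec(&nr_dying_cgwbs);
		spin_unlock_irq(&offline_cgwbs_lock);

		/*
		 * Best effort: try to switch the inodes still attached
		 * to this dying wb over to the root wb.  Switching can
		 * fail silently, so a real version would have to revisit
		 * survivors on a later pass.  (Details elided.)
		 */
		cleanup_offline_cgwb(wb);	/* hypothetical helper */

		spin_lock_irq(&offline_cgwbs_lock);
	}
	spin_unlock_irq(&offline_cgwbs_lock);
}

The only point of the sketch is the shape: the per-inode switching
stays best effort exactly as in the proposed patch, but the work item
fires once per batch of dying cgroups rather than on every deletion.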