Jason Gunthorpe <jgg@xxxxxxxxxx> writes:

>> the owner and pinner disagreeing with each other. At least
>> conceptually, the solution is rather straight-forward - whoever pins
>> a page should also claim the ownership of it.
>
> If the answer is pinner is owner, then multi-pinners must mean
> multi-owner too. We probably can't block multi-pinner without causing
> uAPI problems.

It seems the problem is how to track multiple pinners of the page. At
the moment memcg ownership is stored in folio->memcg_data, which
basically points to the owning struct mem_cgroup *. For pinning, this
series already introduces this data structure:

struct vm_account {
        struct task_struct *task;
        struct mm_struct *mm;
        struct user_struct *user;
        enum vm_account_flags flags;
};

We could modify it to something like:

struct vm_account {
        struct list_head possible_pinners;
        struct mem_cgroup *memcg;
        [...]
};

When a page is pinned, the first pinner takes ownership and stores its
memcg there, updating memcg_data to point to it. This would require a
new page_memcg_data_flags, but I think we have one left. Subsequent
pinners create a vm_account and add it to the pinners list.

When a driver unpins a page we scan the pinners list and assign
ownership to the next driver pinning the page, updating memcg_data and
removing that vm_account from the list.

The problem with this approach is that each pinner (ie. each struct
vm_account) may cover a different subset of pages. Drivers have to
store a list of pinned pages somewhere, so we could either query
drivers or store the list of pinned pages in the vm_account. That
seems like a fair bit of overhead though, and would make unpinning
expensive as we'd have to traverse several lists.

We'd also have to ensure possible owners have enough free memory in
the owning memcg to accept having the page charged when another pinner
unpins. That could be done by reserving space during pinning.
And of course it only works for pin_user_pages() - other users don't
always have a list of pages conveniently available, although I suppose
they could walk the rmap, but again that adds overhead. So I'm not sure
it's a great solution, but I figured I'd leave it here in case it
triggers other ideas.

> You are not wrong on any of these remarks, but this loses sight of
> the point - it is to take the existing broken RLIMIT scheme and make
> it incrementally better by being the same broken scheme just with
> cgroups.

Right. RLIMIT_MEMLOCK is pretty broken because most uses enforce it
against a specific task, so it can be easily bypassed. The aim here was
to make it at least possible to enforce a meaningful limit.

> If we eventually fix everything so memcg can do multi-pinners/owners
> then would it be reasonable to phase out the new pincg at that time?
>
> Jason