On Thu, Feb 23, 2023 at 10:11 AM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
>
> On Thu, Feb 23, 2023 at 10:03:50AM -0800, Yosry Ahmed wrote:
> > On Thu, Feb 23, 2023 at 9:28 AM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
> > >
> > > On Thu, Feb 23, 2023 at 09:18:23AM -0800, T.J. Mercier wrote:
> > >
> > > > > Solving that problem means figuring out when every cgroup stops using
> > > > > the memory - pinning or not. That seems to be very costly.
> > > > >
> > > > This is the current behavior of accounting for memfds, and I suspect
> > > > any kind of shared memory.
> > > >
> > > > If cgroup A creates a memfd, maps and faults in pages, shares the
> > > > memfd with cgroup B and then A unmaps and closes the memfd, then
> > > > cgroup A is still charged for the pages it faulted in.
> > >
> > > As we discussed, as long as the memory is swappable then eventually
> > > memory pressure on cgroup A will evict the memfd pages and then cgroup
> > > B will swap it in and be charged for it.
> >
> > I am not familiar with memfd, but based on
> > mem_cgroup_swapin_charge_folio() it seems like if cgroup B swapped in
> > the pages they will remain charged to cgroup A, unless cgroup A is
> > removed/offlined. Am I missing something?
>
> Ah, I don't know, Tejun said:
>
> "but it can converge when page usage transfers across cgroups
> if needed."
>
> Which I assumed was swap related but I don't know how convergence
> works.

I believe that's the case for file-backed pages, but I do not believe
it's the case for swap-backed pages.

>
> Jason
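
For anyone else following along, this is the swapin charge path I was
referring to. Below is a rough paraphrase of mem_cgroup_swapin_charge_folio()
as I read it in mm/memcontrol.c (from memory, so the helper names and exact
details may differ slightly from the current source):

        int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
                                           gfp_t gfp, swp_entry_t entry)
        {
                struct mem_cgroup *memcg;
                unsigned short id;
                int ret;

                if (mem_cgroup_disabled())
                        return 0;

                /* The swap cgroup map records who was charged at swap-out time. */
                id = lookup_swap_cgroup_id(entry);
                rcu_read_lock();
                memcg = mem_cgroup_from_id(id);

                /*
                 * Only if that original memcg is gone or offline do we fall
                 * back to the mm doing the swap-in (cgroup B in the example).
                 */
                if (!memcg || !css_tryget_online(&memcg->css))
                        memcg = get_mem_cgroup_from_mm(mm);
                rcu_read_unlock();

                ret = charge_memcg(folio, memcg, gfp);

                css_put(&memcg->css);
                return ret;
        }

So as far as I can tell, the folio is charged back to whichever memcg was
recorded for the swap entry at swap-out time (cgroup A), and we only charge
the faulting cgroup (B) if A has been removed/offlined. File-backed pages are
different: once A's charge is reclaimed with the pagecache, a later fault by
B reads the page back in and charges it to B, so usage can converge there.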