On Tue, Apr 18, 2023 at 11:19:40AM -0400, Peter Xu wrote:
> On Mon, Apr 17, 2023 at 03:47:57PM -0700, Suren Baghdasaryan wrote:
> > On Mon, Apr 17, 2023 at 2:26 PM Peter Xu <peterx@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Apr 14, 2023 at 05:08:18PM -0700, Suren Baghdasaryan wrote:
> > > > @@ -5223,8 +5230,8 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
> > > >                  if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
> > > >                          mem_cgroup_oom_synchronize(false);
> > > >          }
> > > > -
> > > > -        mm_account_fault(regs, address, flags, ret);
> > > > +out:
> > > > +        mm_account_fault(mm, regs, address, flags, ret);
> > >
> > > Ah, one more question.. can this cached mm race with a destroying mm
> > > (just like the vma race we wanted to avoid)?  Still, the question only
> > > applies to the COMPLETE case, when the mmap read lock can be released.
> > > Thanks,
> >
> > I believe that is impossible because whoever is calling the page fault
> > handler has stabilized the mm by getting a refcount.
>
> Do you have a hint on where that refcount is taken?

... when we called clone()?  A thread by definition has a reference to
its own mm.

> Btw, it's definitely not a question solely for this patch but a more
> general question about the page fault path.  It's just that when I went
> looking for the refcount boost (which I also expect to exist somewhere)
> I didn't see it in the current path (e.g. do_user_addr_fault() for
> x86_64).
>
> I also had a quick look at do_exit(), but I didn't see where we wait for
> all the threads to stop before the mm gets recycled, either.
>
> I had a feeling that I must have missed something, but I just want to
> make sure that's the case.
>
> --
> Peter Xu
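
For anyone else who wants to trace it: the reference in question is
mm->mm_users.  Below is a heavily simplified sketch of that lifetime; it
paraphrases the copy_mm()/exit_mm() paths rather than quoting the real
kernel/fork.c and kernel/exit.c code, and the *_sketch function names are
mine, not the kernel's:

/*
 * Illustrative sketch only: locking, error handling and the
 * mmgrab()/mmdrop() (mm_count) side are omitted.
 */

/* clone(CLONE_VM): the new thread shares its parent's mm and pins it. */
static int copy_mm_sketch(unsigned long clone_flags, struct task_struct *tsk)
{
        struct mm_struct *oldmm = current->mm;

        if (clone_flags & CLONE_VM) {
                mmget(oldmm);           /* bump mm->mm_users */
                tsk->mm = oldmm;
                return 0;
        }

        /* !CLONE_VM: dup_mm() builds a private copy with mm_users == 1 */
        return 0;
}

/*
 * do_exit() -> exit_mm(): where the exiting thread drops its reference,
 * and by then it can no longer take a page fault in this mm.
 */
static void exit_mm_sketch(void)
{
        struct mm_struct *mm = current->mm;

        current->mm = NULL;
        mmput(mm);      /* mm_users drops; the last mmput() runs
                           __mmput() -> exit_mmap() and only then is
                           the address space torn down */
}

So any thread taking a page fault in its own mm still holds one of those
mm_users references, and the final mmput() -> __mmput() -> exit_mmap()
teardown cannot start until that thread has passed through exit_mm() and
can no longer fault, regardless of whether the mmap read lock was dropped
early in the COMPLETE case.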