* Peter Zijlstra <peterz@xxxxxxxxxxxxx> [241218 05:06]:
> On Wed, Dec 18, 2024 at 10:41:04AM +0100, Peter Zijlstra wrote:
> > On Tue, Dec 17, 2024 at 08:27:46AM -0800, Suren Baghdasaryan wrote:
> >
> > > > So I just replied there, and no, I don't think it makes sense. Just put
> > > > the kmem_cache_free() in vma_refcount_put(), to be done on 0.
> > >
> > > That's very appealing indeed and makes things much simpler. The
> > > problem I see with that is the case when we detach a vma from the tree
> > > to isolate it, then do some cleanup and only then free it. That's done
> > > in vms_gather_munmap_vmas() here:
> > > https://elixir.bootlin.com/linux/v6.12.5/source/mm/vma.c#L1240 and we
> > > even might reattach detached vmas back:
> > > https://elixir.bootlin.com/linux/v6.12.5/source/mm/vma.c#L1312. IOW,
> > > detached state is not final and we can't destroy the object that
> > > reached this state.
> >
> > Urgh, so that's the munmap() path, but arguably when that fails, the
> > map stays in place.
> >
> > I think this means you're marking detached too soon; you should only
> > mark detached once you reach the point of no return.
> >
> > That said, once you've reached the point of no return; and are about to
> > go remove the page-tables, you very much want to ensure a lack of
> > concurrency.
> >
> > So perhaps waiting for out-standing readers at this point isn't crazy.
> >
> > Also, I'm having a very hard time reading this maple tree stuff :/
> > Afaict vms_gather_munmap_vmas() only adds the VMAs to be removed to a
> > second tree, it does not in fact unlink them from the mm yet.

Yes, that's correct. I tried to make this clear with a gather/complete
naming like other areas of the mm. I hope that helped.

Also, the comments for the function state that's what's going on:

 * vms_gather_munmap_vmas() - Put all VMAs within a range into a maple tree
 * for removal at a later date. Handles splitting first and last if necessary
 * and marking the vmas as isolated.

... might be worth updating with new information.

> >
> > AFAICT it's vma_iter_clear_gfp() that actually wipes the vmas from the
> > mm -- and that being able to fail is mind boggling and I suppose is what
> > gives rise to much of this insanity :/

This is also correct. The maple tree is a b-tree variant that has
internal nodes. When you write to it, including nulls, they are tracked
and may need to allocate. This is a cost for rcu lookups; we will use
the same or less memory in the end but must maintain a consistent view
of the ranges.

But to put this into perspective, we get 16 nodes per 4k page, most
writes will use 1 or 3 of these from a kmem_cache, so we are talking
about a very unlikely possibility. Except when syzbot decides to fail
random allocations.

We could preallocate for the write, but this section of the code is
GFP_KERNEL, so we don't. Preallocation is an option to simplify the
failure path though... which is what you did below.

> >
> > Anyway, I would expect remove_vma() to be the one that marks it detached
> > (it's already unreachable through vma_lookup() at this point) and there
> > you should wait for concurrent readers to bugger off.
>
> Also, I think vma_start_write() in that gather look is too early, you're
> not actually going to change the VMA yet -- with obvious exception of
> the split cases.

The split needs to start the write on the vma to avoid anyone reading
it while it's being altered.

>
> That too should probably come after you've passes all the fail/unwind
> spots.

Do you mean the split?
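For reference, the loop in question looks roughly like this (a trimmed
sketch of vms_gather_munmap_vmas() from the link above, with the
permission checks, error unwinding and accounting left out), so it's
clear which split and which vma_start_write() we are talking about:

        /* Sketch only - simplified, not the exact source. */
        if (vms->start > vms->vma->vm_start) {
                /* The unmap range starts inside this vma: split it first. */
                error = __split_vma(vms->vmi, vms->vma, vms->start, 1);
                if (error)
                        goto start_split_failed;
        }

        next = vms->vma;
        do {
                if (next->vm_end > vms->end) {
                        /* The unmap range ends inside this vma: split it too. */
                        error = __split_vma(vms->vmi, next, vms->end, 0);
                        if (error)
                                goto end_split_failed;
                }
                vma_start_write(next);          /* the call being discussed */
                mas_set(mas_detach, vms->vma_count++);
                error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
                if (error)
                        goto munmap_gather_failed;
                vma_mark_detached(next, true);
                /* nr_pages/locked_vm accounting elided */
        } for_each_vma_range(*(vms->vmi), next, vms->end);

Both __split_vma() calls are fail points, which is what makes
reordering any of this tricky.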
I'd like to move the split later as well.. tracking that is a pain and
may need an extra vma for when one vma is split twice before removing
the middle part. Actually, I think we need to allocate two (or at least
one) vmas in this case and just pass one through to unmap (written only
to the mas_detach tree?). It would be nice to find a way to NOT need to
do that even.. I had tried to use a vma on the stack years ago, which
didn't work out.

>
> Something like so perhaps? (yeah, I know, I wrecked a bunch)
>
> diff --git a/mm/vma.c b/mm/vma.c
> index 8e31b7e25aeb..45d43adcbb36 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -1173,6 +1173,11 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
>          struct vm_area_struct *vma;
>          struct mm_struct *mm;

        mas_set(mas_detach, 0);

> +        mas_for_each(mas_detach, vma, ULONG_MAX) {
> +                vma_start_write(next);
> +                vma_mark_detached(next, true);
> +        }
> +
>          mm = current->mm;
>          mm->map_count -= vms->vma_count;
>          mm->locked_vm -= vms->locked_vm;
> @@ -1219,9 +1224,6 @@ static void reattach_vmas(struct ma_state *mas_detach)
>          struct vm_area_struct *vma;
>
>          mas_set(mas_detach, 0);

Drop the mas_set here.

> -        mas_for_each(mas_detach, vma, ULONG_MAX)
> -                vma_mark_detached(vma, false);
> -
>          __mt_destroy(mas_detach->tree);
>  }
>
> @@ -1289,13 +1291,11 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
>                          if (error)
>                                  goto end_split_failed;
>                  }
> -                vma_start_write(next);
>                  mas_set(mas_detach, vms->vma_count++);
>                  error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
>                  if (error)
>                          goto munmap_gather_failed;
>
> -                vma_mark_detached(next, true);
>                  nrpages = vma_pages(next);
>
>                  vms->nr_pages += nrpages;
> @@ -1431,14 +1431,17 @@ int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>          struct vma_munmap_struct vms;
>          int error;
>

The preallocation needs to know the range being stored to know what's
going to happen.

        vma_iter_config(vmi, start, end);

> +        error = mas_preallocate(vmi->mas);

We haven't had a need to have a vma iterator preallocate for storing a
null, but we can add one for this.

> +        if (error)
> +                goto gather_failed;
> +
>          init_vma_munmap(&vms, vmi, vma, start, end, uf, unlock);
>          error = vms_gather_munmap_vmas(&vms, &mas_detach);
>          if (error)
>                  goto gather_failed;
>

Drop this stuff.

>          error = vma_iter_clear_gfp(vmi, start, end, GFP_KERNEL);
> -        if (error)
> -                goto clear_tree_failed;
> +        VM_WARN_ON(error);

Do this instead:

        vma_iter_config(vmi, start, end);
        vma_iter_clear(vmi);

>
>          /* Point of no return */
>          vms_complete_munmap_vmas(&vms, &mas_detach);
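Putting those suggestions together, the munmap path ends up ordered
something like the below. This is only a sketch of the ordering, not a
tested patch, and vma_iter_prealloc(vmi, NULL) is standing in for
whatever helper we end up adding for preallocating the null store:

        /*
         * Sketch: preallocate the maple nodes needed to wipe [start, end)
         * while failure is still recoverable, then do the store itself
         * after the point of no return, where it can no longer fail.
         */
        vma_iter_config(vmi, start, end);
        error = vma_iter_prealloc(vmi, NULL);  /* stand-in for the new helper */
        if (error)
                goto gather_failed;

        init_vma_munmap(&vms, vmi, vma, start, end, uf, unlock);
        error = vms_gather_munmap_vmas(&vms, &mas_detach);
        if (error)
                goto gather_failed;

        /* Point of no return */
        vma_iter_config(vmi, start, end);       /* vma_iter_clear() takes no range */
        vma_iter_clear(vmi);

        vms_complete_munmap_vmas(&vms, &mas_detach);

The failure paths that run after the preallocation would also need a
vma_iter_free() (mas_destroy()) so any unused preallocated nodes go
back to the kmem_cache.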