On Fri, Nov 22, 2024 at 4:41 PM Alice Ryhl <aliceryhl@xxxxxxxxxx> wrote:
> This adds a type called VmAreaRef which is used when referencing a vma
> that you have read access to. Here, read access means that you hold
> either the mmap read lock or the vma read lock (or stronger).
>
> Additionally, a vma_lookup method is added to the mmap read guard, which
> enables you to obtain a &VmAreaRef in safe Rust code.
>
> This patch only provides a way to lock the mmap read lock, but a
> follow-up patch also provides a way to just lock the vma read lock.
>
> Acked-by: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx> (for mm bits)
> Signed-off-by: Alice Ryhl <aliceryhl@xxxxxxxxxx>

Reviewed-by: Jann Horn <jannh@xxxxxxxxxx>

with one comment:

> +    /// Zap pages in the given page range.
> +    ///
> +    /// This clears page table mappings for the range at the leaf level, leaving all other page
> +    /// tables intact, and freeing any memory referenced by the VMA in this range. That is,
> +    /// anonymous memory is completely freed, file-backed memory has its reference count on
> +    /// page cache folios dropped, and any dirty data will still be written back to disk as usual.
> +    #[inline]
> +    pub fn zap_page_range_single(&self, address: usize, size: usize) {
> +        // SAFETY: By the type invariants, the caller has read access to this VMA, which is
> +        // sufficient for this method call. This method has no requirements on the vma flags.
> +        // Any value of `address` and `size` is allowed.

If we really want to allow any address and size, we might want to add
an early bailout in zap_page_range_single(). The comment on top of
zap_page_range_single() currently says "The range must fit into one
VMA", and it looks like by the point we reach a bailout, we could
already have gone through an interval tree walk via
mmu_notifier_invalidate_range_start() ->
__mmu_notifier_invalidate_range_start() -> mn_itree_invalidate() for a
range that ends before it starts; I don't know how safe that is.
> +        unsafe {
> +            bindings::zap_page_range_single(
> +                self.as_ptr(),
> +                address as _,
> +                size as _,
> +                core::ptr::null_mut(),
> +            )
> +        };
> +    }
> +}
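To make the suggested early bailout concrete, here is a rough userspace sketch of the kind of range-sanity check meant above; it is not the actual kernel code, and the helper name zap_range_is_valid() is made up for illustration. The idea is to reject empty ranges and ranges whose end wraps around before anything (such as the mmu-notifier interval tree walk) sees them:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical range-sanity check, modeling an early bailout in
 * zap_page_range_single(): reject degenerate ranges before any
 * mmu-notifier invalidation or page table walk runs on them.
 */
static bool zap_range_is_valid(uintptr_t address, uintptr_t size)
{
	/* An empty range has nothing to zap. */
	if (size == 0)
		return false;
	/*
	 * If address + size wraps, the range "ends before it starts",
	 * which is exactly the case the interval tree walk should
	 * never be handed.
	 */
	if (address + size < address)
		return false;
	return true;
}
```

With such a check at the top of zap_page_range_single(), the Rust-side SAFETY comment's claim that any value of `address` and `size` is allowed would hold without relying on the behavior of downstream walks for inverted ranges.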