On Tue 03-01-23 16:27:32, Mike Kravetz wrote:
> zap_page_range was originally designed to unmap pages within an address
> range that could span multiple vmas.  While working on [1], it was
> discovered that all callers of zap_page_range pass a range entirely within
> a single vma.  In addition, the mmu notification call within
> zap_page_range does not correctly handle ranges that span multiple vmas.
> When crossing a vma boundary, a new mmu_notifier_range_init/end call pair
> with the new vma should be made.
>
> Instead of fixing zap_page_range, do the following:
> - Create a new routine zap_vma_pages() that will remove all pages within
>   the passed vma.  Most users of zap_page_range pass the entire vma and
>   can use this new routine.
> - For callers of zap_page_range not passing the entire vma, instead call
>   zap_page_range_single().
> - Remove zap_page_range.
>
> [1] https://lore.kernel.org/linux-mm/20221114235507.294320-2-mike.kravetz@xxxxxxxxxx/
>
> Suggested-by: Peter Xu <peterx@xxxxxxxxxx>
> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>

This looks even better than the previous version.

Acked-by: Michal Hocko <mhocko@xxxxxxxx>

One minor nit below.

[...]

> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index ad608ef2a243..ffa36cfe5884 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -2713,7 +2713,7 @@ void folio_account_cleaned(struct folio *folio, struct bdi_writeback *wb)
>   *
>   * The caller must hold lock_page_memcg().  Most callers have the folio
>   * locked.  A few have the folio blocked from truncation through other
> - * means (eg zap_page_range() has it mapped and is holding the page table
> + * means (eg zap_vma_pages() has it mapped and is holding the page table
>   * lock).  This can also be called from mark_buffer_dirty(), which I
>   * cannot prove is always protected against truncate.

Strictly speaking, this should say unmap_page_range().
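
Btw. for anyone following along without the series at hand: given the
description above, zap_vma_pages() is presumably just a thin inline
wrapper that hands the vma's full range to zap_page_range_single().
A minimal sketch of that idea (the exact definition lives in the patch
and may differ):

static inline void zap_vma_pages(struct vm_area_struct *vma)
{
	/*
	 * Unmap every page covered by the vma. Passing NULL for
	 * zap_details means no filtering - zap the whole range.
	 */
	zap_page_range_single(vma, vma->vm_start,
			      vma->vm_end - vma->vm_start, NULL);
}

Since the range is entirely within a single vma by construction, the
mmu_notifier_range_init/end pairing issue described above cannot arise
for callers of this helper.
-- 
Michal Hocko
SUSE Labs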