On 11/02/22 15:24, Peter Xu wrote:
> On Sun, Oct 30, 2022 at 06:44:10PM -0700, Mike Kravetz wrote:
> > On 10/30/22 11:52, Nadav Amit wrote:
> > > On Oct 30, 2022, at 11:43 AM, Peter Xu <peterx@xxxxxxxxxx> wrote:
> > > >
> > > > The loop comes from 7e027b14d53e ("vm: simplify unmap_vmas() calling
> > > > convention", 2012-05-06), where zap_page_range() was used to replace a
> > > > call to unmap_vmas() because the patch wanted to eliminate the zap
> > > > details pointer for unmap_vmas(), which makes sense.
> > > >
> > > > I didn't check the old code, but from what I can tell (and also as Mike
> > > > pointed out) I don't think zap_page_range() in the latest code base is
> > > > ever used on multiple vmas at all.  Otherwise the mmu notifier is
> > > > already broken - see mmu_notifier_range_init(), where the vma pointer
> > > > is also part of the notification.
> > > >
> > > > Perhaps we should just remove the loop?
> > >
> > > There is already zap_page_range_single() that does exactly that.  Just
> > > need to export it.
> >
> > I was thinking that zap_page_range() should perform a notification call
> > for each vma within the loop.  Something like this?
>
> I'm boldly guessing what Nadav suggested was using zap_page_range_single()
> and exporting it for MADV_DONTNEED.  Hopefully that's also the easiest for
> stable?

I started making this change, then noticed that zap_vma_ptes() just calls
zap_page_range_single(), and it is already exported.  That may be a better
fit, since exporting zap_page_range_single() would require a wrapper: I do
not think we want to export struct zap_details as well.

In any case, we still need to add the adjust_range_if_pmd_sharing_possible()
call to zap_page_range_single().

> For the long term, I really think we should just get rid of the loop..

Yes.  It will look a little strange if adjust_range_if_pmd_sharing_possible()
is added to zap_page_range_single() but not to zap_page_range().  And,
properly adding it to zap_page_range() means rewriting the routine, as I did
here:

https://lore.kernel.org/linux-mm/20221102013100.455139-1-mike.kravetz@xxxxxxxxxx/

--
Mike Kravetz
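
For reference, the zap_page_range() structure under discussion looks roughly
like this (paraphrased from a v6.1-rc era mm/memory.c; trimmed, not
verbatim).  The notifier range is initialized once, against the first vma,
while the loop may walk onto later vmas, which is the inconsistency Peter
points at:

void zap_page_range(struct vm_area_struct *vma, unsigned long start,
		unsigned long size)
{
	unsigned long end = start + size;
	struct mmu_notifier_range range;
	struct mmu_gather tlb;
	MA_STATE(mas, &vma->vm_mm->mm_mt, vma->vm_end, vma->vm_end);

	lru_add_drain();
	/* One notification for the whole span, tied to the first vma... */
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
				start, end);
	tlb_gather_mmu(&tlb, vma->vm_mm);
	update_hiwater_rss(vma->vm_mm);
	mmu_notifier_invalidate_range_start(&range);
	do {
		/* ...but the loop can continue into subsequent vmas. */
		unmap_single_vma(&tlb, vma, start, end, NULL);
	} while ((vma = mas_find(&mas, end - 1)) != NULL);
	mmu_notifier_invalidate_range_end(&range);
	tlb_finish_mmu(&tlb);
}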
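
Likewise, the zap_vma_ptes() mentioned above is already a thin, exported
wrapper around zap_page_range_single() (again paraphrased from the same
mm/memory.c).  Note the VM_PFNMAP check, since its expected callers are
drivers:

void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
		unsigned long size)
{
	/* Refuses anything but driver-style VM_PFNMAP mappings. */
	if (!range_in_vma(vma, address, address + size) ||
			!(vma->vm_flags & VM_PFNMAP))
		return;

	/* NULL zap_details, so the struct never crosses the export. */
	zap_page_range_single(vma, address, size, NULL);
}
EXPORT_SYMBOL_GPL(zap_vma_ptes);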
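
Finally, a minimal, untested sketch of the zap_page_range_single() change
discussed above: widen only the mmu notifier range when hugetlb pmd sharing
is possible, while still unmapping just the requested span:

static void zap_page_range_single(struct vm_area_struct *vma,
		unsigned long address, unsigned long size,
		struct zap_details *details)
{
	const unsigned long end = address + size;
	struct mmu_notifier_range range;
	struct mmu_gather tlb;

	lru_add_drain();
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
				address, end);
	/* Sketch: expand the notification if a shared pmd may be unshared. */
	if (is_vm_hugetlb_page(vma))
		adjust_range_if_pmd_sharing_possible(vma, &range.start,
						     &range.end);
	tlb_gather_mmu(&tlb, vma->vm_mm);
	update_hiwater_rss(vma->vm_mm);
	mmu_notifier_invalidate_range_start(&range);
	/* Unmap only [address, end); range may have been expanded above. */
	unmap_single_vma(&tlb, vma, address, end, details);
	mmu_notifier_invalidate_range_end(&range);
	tlb_finish_mmu(&tlb);
}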