On Wed 10-02-21 17:57:29, Michal Hocko wrote:
> On Wed 10-02-21 16:18:50, Vlastimil Babka wrote:
[...]
> > And the munlock (munlock_vma_pages_range()) is slow, because it uses
> > follow_page_mask() in a loop incrementing addresses by PAGE_SIZE, so that's
> > always traversing all levels of page tables from scratch. Funnily enough,
> > speeding this up was my first linux-mm series years ago. But the speedup only
> > works if ptes are present, which is not the case for unpopulated PROT_NONE
> > areas. That use case was unexpected back then. We should probably convert this
> > code to a proper page table walk. If there are large areas with unpopulated pmd
> > entries (or even higher levels) we would traverse them very quickly.
>
> Yes, this is a good idea. I suspect it will be a little bit tricky without
> duplicating a large part of the gup page table walker.

Thinking about it some more, unmap_page_range would be a better model for
this operation.

--
Michal Hocko
SUSE Labs
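
For illustration, a rough sketch of what the pmd and pte levels of such a
walk could look like when modeled on unmap_page_range()/zap_pmd_range().
This is not a proposed patch: the names munlock_pmd_range() and
munlock_pte_range() are made up for this sketch, the pgd/p4d/pud levels
(which would follow the same pattern) are omitted, and THP handling plus
the pagevec batching that mm/mlock.c does today are elided.

/* Sketch only, written as if it lived in mm/mlock.c. */

static void munlock_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
			      unsigned long addr, unsigned long end)
{
	spinlock_t *ptl;
	pte_t *start_pte, *pte;

	start_pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
	pte = start_pte;
	do {
		struct page *page;

		if (!pte_present(*pte))
			continue;
		page = vm_normal_page(vma, addr, *pte);
		if (!page)
			continue;
		/*
		 * The per-page munlock work (TestClearPageMlocked(),
		 * LRU isolation, NR_MLOCK accounting) would be batched
		 * into a pagevec here, much like __munlock_pagevec_fill()
		 * does today, and processed after the pte lock is dropped.
		 */
	} while (pte++, addr += PAGE_SIZE, addr != end);
	pte_unmap_unlock(start_pte, ptl);
}

static void munlock_pmd_range(struct vm_area_struct *vma, pud_t *pud,
			      unsigned long addr, unsigned long end)
{
	unsigned long next;
	pmd_t *pmd;

	pmd = pmd_offset(pud, addr);
	do {
		next = pmd_addr_end(addr, end);
		/* THP pmds need the same special casing as zap_pmd_range() */
		if (pmd_none_or_clear_bad(pmd))
			continue;	/* whole pmd unpopulated, skip it in one go */
		munlock_pte_range(vma, pmd, addr, next);
	} while (pmd++, addr = next, addr != end);
}

The point is that pmd_addr_end() plus pmd_none_or_clear_bad() let the walk
skip an entire unpopulated pmd in one step, instead of the 512 per-page
follow_page_mask() lookups the current loop does for the same range on
x86-64.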