On Mon, 19 Aug 2013 14:23:42 +0200 Vlastimil Babka <vbabka@xxxxxxx> wrote:

> Currently munlock_vma_pages_range() calls follow_page_mask() to obtain each
> struct page. This entails a repeated full page table translation and a page
> table lock taken for each page separately.
>
> This patch attempts to avoid the costly follow_page_mask() where possible, by
> iterating over ptes within a single pmd under a single page table lock. The
> first pte is obtained by get_locked_pte() for the non-THP page acquired by
> the initial follow_page_mask(). The latter function is also used as a
> fallback in case a simple pte_present() and vm_normal_page() are not
> sufficient to obtain the struct page.

Patch #7 appears to provide significant performance gains, but unlike the
other patches in the series, its improvement wasn't individually described
in the changelog.
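
To illustrate the approach being described, here is a rough sketch of such a
pte walk. This is my own simplified reconstruction, not the code from the
patch: the function name __munlock_pte_walk is made up, and the zone and THP
checks the real code would need are omitted.

#include <linux/mm.h>
#include <linux/pagevec.h>

static unsigned long __munlock_pte_walk(struct pagevec *pvec,
		struct vm_area_struct *vma,
		unsigned long start, unsigned long end)
{
	pte_t *pte;
	spinlock_t *ptl;

	/*
	 * One page table lock for the whole pmd, instead of a full
	 * translation plus a lock/unlock cycle per page as in
	 * follow_page_mask().  The pte at 'start' is known to exist:
	 * the caller just pinned its page via follow_page_mask().
	 */
	pte = get_locked_pte(vma->vm_mm, start, &ptl);
	/* never walk past the page table covered by this lock */
	end = pmd_addr_end(start, end);

	/* the page at 'start' itself was already handled by the caller */
	start += PAGE_SIZE;
	while (start < end) {
		struct page *page = NULL;

		pte++;
		if (pte_present(*pte))
			page = vm_normal_page(vma, start, *pte);
		/*
		 * Anything unusual (swapped out, special mapping, ...):
		 * stop here and let the caller fall back to
		 * follow_page_mask() for this address.
		 */
		if (!page)
			break;

		get_page(page);
		start += PAGE_SIZE;
		if (pagevec_add(pvec, page) == 0)
			break;	/* pagevec full; caller drains it */
	}
	pte_unmap_unlock(pte, ptl);

	return start;	/* first address not covered by the pagevec */
}

Bounding the walk with pmd_addr_end() is what makes the single lock valid:
all ptes of one pmd live in the same page table page and are guarded by the
same lock.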