On Thu, Feb 23, 2012 at 5:06 AM, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> Perhaps add a little comment to this explaining what's going on?
>
>
> It would be sufficient to do
>
>	if (ref_page)
>		break;
>
> This is more efficient, and doesn't make people worry about whether
> this value of `page' is the same as the one which
> pte_page(huge_ptep_get()) earlier returned.
>

Hi Andrew,

It is re-prepared below.

===cut here===
From: Hillf Danton <dhillf@xxxxxxxxx>
Subject: [PATCH] mm: hugetlb: bail out unmapping after serving reference page

When unmapping a given VM range, we can bail out once the supplied
reference page has been unmapped, which is a minor optimization.

Signed-off-by: Hillf Danton <dhillf@xxxxxxxxx>
---

--- a/mm/hugetlb.c	Wed Feb 22 19:34:12 2012
+++ b/mm/hugetlb.c	Thu Feb 23 20:13:06 2012
@@ -2280,6 +2280,10 @@ void __unmap_hugepage_range(struct vm_ar
 		if (pte_dirty(pte))
 			set_page_dirty(page);
 		list_add(&page->lru, &page_list);
+
+		/* Bail out after unmapping reference page if supplied */
+		if (ref_page)
+			break;
 	}
 	spin_unlock(&mm->page_table_lock);
 	flush_tlb_range(vma, start, end);
--

> Why do we evaluate `page' twice inside that loop anyway?  And why do we
> check for huge_pte_none() twice?  It looks all messed up.
>

and a follow-up cleanup is also attached.

Thanks
Hillf

===cut here===
From: Hillf Danton <dhillf@xxxxxxxxx>
Subject: [PATCH] mm: hugetlb: cleanup duplicated code in unmapping vm range

When unmapping a given VM range, a couple of operations, such as
pte_page() and huge_pte_none(), are duplicated inside the loop, so
clean them up by compacting them into a single place.

Signed-off-by: Hillf Danton <dhillf@xxxxxxxxx>
---

--- a/mm/hugetlb.c	Thu Feb 23 20:13:06 2012
+++ b/mm/hugetlb.c	Thu Feb 23 20:30:16 2012
@@ -2245,16 +2245,23 @@ void __unmap_hugepage_range(struct vm_ar
 		if (huge_pmd_unshare(mm, &address, ptep))
 			continue;
 
+		pte = huge_ptep_get(ptep);
+		if (huge_pte_none(pte))
+			continue;
+
+		/*
+		 * HWPoisoned hugepage is already unmapped and dropped reference
+		 */
+		if (unlikely(is_hugetlb_entry_hwpoisoned(pte)))
+			continue;
+
+		page = pte_page(pte);
 		/*
 		 * If a reference page is supplied, it is because a specific
 		 * page is being unmapped, not a range. Ensure the page we
 		 * are about to unmap is the actual page of interest.
 		 */
 		if (ref_page) {
-			pte = huge_ptep_get(ptep);
-			if (huge_pte_none(pte))
-				continue;
-			page = pte_page(pte);
 			if (page != ref_page)
 				continue;
 
@@ -2267,16 +2274,6 @@ void __unmap_hugepage_range(struct vm_ar
 		}
 
 		pte = huge_ptep_get_and_clear(mm, address, ptep);
-		if (huge_pte_none(pte))
-			continue;
-
-		/*
-		 * HWPoisoned hugepage is already unmapped and dropped reference
-		 */
-		if (unlikely(is_hugetlb_entry_hwpoisoned(pte)))
-			continue;
-
-		page = pte_page(pte);
 		if (pte_dirty(pte))
 			set_page_dirty(page);
 		list_add(&page->lru, &page_list);
--
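
Not part of either patch, just for review convenience: below is roughly what
the body of the unmap loop in __unmap_hugepage_range() looks like with both
changes applied. It is a sketch reconstructed from the hunks above; the loop
header and the reservation handling inside the ref_page branch are outside
the diff context, so they are paraphrased here and may not match the tree
exactly.

	for (address = start; address < end; address += sz) {
		ptep = huge_pte_offset(mm, address);
		if (!ptep)
			continue;

		if (huge_pmd_unshare(mm, &address, ptep))
			continue;

		/* One read of the pte now serves all of the checks below */
		pte = huge_ptep_get(ptep);
		if (huge_pte_none(pte))
			continue;

		/*
		 * HWPoisoned hugepage is already unmapped and dropped reference
		 */
		if (unlikely(is_hugetlb_entry_hwpoisoned(pte)))
			continue;

		/* pte_page() is evaluated once, for both the ref_page and range paths */
		page = pte_page(pte);

		/*
		 * If a reference page is supplied, only that specific page
		 * is being unmapped, not the whole range.
		 */
		if (ref_page) {
			if (page != ref_page)
				continue;
			/* ... reservation handling unchanged from the original ... */
		}

		pte = huge_ptep_get_and_clear(mm, address, ptep);
		if (pte_dirty(pte))
			set_page_dirty(page);
		list_add(&page->lru, &page_list);

		/* Bail out after unmapping reference page if supplied */
		if (ref_page)
			break;
	}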