queue_pages_pmd_range() checks pmd_huge() to find hugepages, but this
check assumes the pmd is in the normal format, so it does not work on a
migration entry, whose format is like that of a swap entry. We can
distinguish the two with the present bit, so we need to check it before
checking pmd_huge(). Otherwise, pmd_huge() can wrongly return false for
a hugepage, and the behavior is unpredictable.

This patch is against mmotm-2013-08-27.

Signed-off-by: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
---
 mm/mempolicy.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 64d00c4..0472964 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -553,6 +553,8 @@ static inline int queue_pages_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 	pmd = pmd_offset(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
+		if (!pmd_present(*pmd))
+			continue;
 		if (pmd_huge(*pmd) && is_vm_hugetlb_page(vma)) {
 			queue_pages_hugetlb_pmd_range(vma, pmd, nodes,
 						flags, private);
--
1.8.3.1
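
For readers unfamiliar with the two entry formats, here is a minimal
userspace sketch of the failure mode. All names and bit positions below
are invented for illustration and do not match any real architecture's
pmd layout; the point is only that a format-dependent check like
pmd_huge() can misread a non-present, swap-format entry unless the
present bit is tested first.

/*
 * Toy model, NOT kernel code: the FAKE_* bit positions are invented.
 * A migration entry reuses the pmd as a swap-style entry with the
 * present bit clear, so a check that assumes the normal pmd format
 * can return a wrong answer for it.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FAKE_PRESENT (1u << 0)	/* hypothetical "present" bit */
#define FAKE_HUGE    (1u << 7)	/* hypothetical "huge page" bit */

typedef uint32_t fake_pmd_t;

static bool fake_pmd_present(fake_pmd_t pmd) { return pmd & FAKE_PRESENT; }
static bool fake_pmd_huge(fake_pmd_t pmd)    { return pmd & FAKE_HUGE; }

int main(void)
{
	/*
	 * A hugepage under migration: the entry is rewritten in
	 * swap-entry format with the present bit clear, and the bit
	 * that fake_pmd_huge() tests now carries unrelated payload
	 * (here it happens to be zero).
	 */
	fake_pmd_t migrating = 42u << 8;	/* no FAKE_PRESENT, no FAKE_HUGE */

	/* Buggy order: a format check on a non-present entry misfires. */
	printf("buggy: huge=%d (wrong for a migrating hugepage)\n",
	       fake_pmd_huge(migrating));

	/* Fixed order, as in the patch: test the present bit first. */
	if (!fake_pmd_present(migrating))
		printf("fixed: not present -> skip format checks\n");
	else if (fake_pmd_huge(migrating))
		printf("fixed: huge page\n");

	return 0;
}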