Dave Jiang reported that he was seeing oopses when running on NUMA
systems with default_hugepagesz=1G.  I traced the issue down to
migrate_page_copy() trying to use the same code for hugetlb pages and
transparent hugepages.  It should not have been trying to pass THP
pages in there: with default_hugepagesz=1G no 2MB hstate is ever
registered, so page_hstate() on a 2MB THP returns NULL and
pages_per_huge_page() oopses dereferencing it.

So, add some VM_BUG_ON()s for the next hapless VM developer that
tries the same thing.

---

 linux.git-davehans/include/linux/hugetlb.h |    1 +
 linux.git-davehans/mm/hugetlb.c            |    1 +
 2 files changed, 2 insertions(+)

diff -puN include/linux/hugetlb.h~bug-not-hugetlbfs-in-copy_huge_page include/linux/hugetlb.h
--- linux.git/include/linux/hugetlb.h~bug-not-hugetlbfs-in-copy_huge_page	2013-10-28 15:06:12.888828815 -0700
+++ linux.git-davehans/include/linux/hugetlb.h	2013-10-28 15:06:12.893829038 -0700
@@ -355,6 +355,7 @@ static inline pte_t arch_make_huge_pte(p
 
 static inline struct hstate *page_hstate(struct page *page)
 {
+	VM_BUG_ON(!PageHuge(page));
 	return size_to_hstate(PAGE_SIZE << compound_order(page));
 }
 
diff -puN mm/hugetlb.c~bug-not-hugetlbfs-in-copy_huge_page mm/hugetlb.c
--- linux.git/mm/hugetlb.c~bug-not-hugetlbfs-in-copy_huge_page	2013-10-28 15:06:12.890828904 -0700
+++ linux.git-davehans/mm/hugetlb.c	2013-10-28 15:06:12.894829082 -0700
@@ -498,6 +498,7 @@ void copy_huge_page(struct page *dst, st
 	int i;
 	struct hstate *h = page_hstate(src);
 
+	VM_BUG_ON(!h);
 	if (unlikely(pages_per_huge_page(h) > MAX_ORDER_NR_PAGES)) {
 		copy_gigantic_page(dst, src);
 		return;
_
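For readers who want the shape of the split these checks enforce, here
is a minimal sketch (mine, not the code from this series) of a
migration-side copy helper that dispatches on PageHuge(), so only
genuine hugetlbfs pages ever touch the hstate machinery and THP sizes
come from the compound order instead.  copy_migration_page() is a
hypothetical name, and it assumes copy_gigantic_page() is visible here
(in mainline it is static to mm/hugetlb.c).

#include <linux/mm.h>
#include <linux/hugetlb.h>
#include <linux/huge_mm.h>
#include <linux/highmem.h>
#include <linux/sched.h>

/*
 * Sketch only: confine the hstate lookup to real hugetlbfs pages.
 * THP must not go anywhere near page_hstate().
 */
static void copy_migration_page(struct page *dst, struct page *src)
{
	int i;
	int nr_pages;

	if (PageHuge(src)) {
		/* hugetlbfs page: an hstate is guaranteed to exist */
		struct hstate *h = page_hstate(src);

		nr_pages = pages_per_huge_page(h);
		if (unlikely(nr_pages > MAX_ORDER_NR_PAGES)) {
			/* gigantic pages need special struct-page walking */
			copy_gigantic_page(dst, src);
			return;
		}
	} else {
		/*
		 * THP: a 2MB THP has no hstate on a box booted with
		 * only default_hugepagesz=1G, so size the copy from
		 * the compound order instead.
		 */
		VM_BUG_ON(!PageTransHuge(src));
		nr_pages = hpage_nr_pages(src);
	}

	for (i = 0; i < nr_pages; i++) {
		cond_resched();
		copy_highpage(dst + i, src + i);
	}
}

Note that VM_BUG_ON() compiles away without CONFIG_DEBUG_VM=y; with it
set, the two new checks in the patch turn the latent NULL dereference
into an immediate, attributable BUG at the point of misuse instead of
an oops several frames later.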