The patch titled
     Subject: mm/khugepaged: fix the xas_create_range() error path
has been added to the -mm tree.  Its filename is
     mm-khugepaged-fix-the-xas_create_range-error-path.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-khugepaged-fix-the-xas_create_range-error-path.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-khugepaged-fix-the-xas_create_range-error-path.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Hugh Dickins <hughd@xxxxxxxxxx>
Subject: mm/khugepaged: fix the xas_create_range() error path

collapse_shmem()'s xas_nomem() is very unlikely to fail, but it is
rightly given a failure path, so move the whole xas_create_range() block
up before __SetPageLocked(new_page): so that it does not need to
remember to unlock_page(new_page).  Add the missing
mem_cgroup_cancel_charge(), and set (currently unused) result to
SCAN_FAIL rather than SCAN_SUCCEED.

Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1811261531200.2275@eggly.anvils
Fixes: 77da9389b9d5 ("mm: Convert collapse_shmem to XArray")
Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Jerome Glisse <jglisse@xxxxxxxxxx>
Cc: Konstantin Khlebnikov <khlebnikov@xxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

--- a/mm/khugepaged.c~mm-khugepaged-fix-the-xas_create_range-error-path
+++ a/mm/khugepaged.c
@@ -1329,6 +1329,20 @@ static void collapse_shmem(struct mm_str
 		goto out;
 	}
 
+	/* This will be less messy when we use multi-index entries */
+	do {
+		xas_lock_irq(&xas);
+		xas_create_range(&xas);
+		if (!xas_error(&xas))
+			break;
+		xas_unlock_irq(&xas);
+		if (!xas_nomem(&xas, GFP_KERNEL)) {
+			mem_cgroup_cancel_charge(new_page, memcg, true);
+			result = SCAN_FAIL;
+			goto out;
+		}
+	} while (1);
+
 	__SetPageLocked(new_page);
 	__SetPageSwapBacked(new_page);
 	new_page->index = start;
@@ -1340,17 +1354,6 @@ static void collapse_shmem(struct mm_str
 	 * be able to map it or use it in another way until we unlock it.
 	 */
 
-	/* This will be less messy when we use multi-index entries */
-	do {
-		xas_lock_irq(&xas);
-		xas_create_range(&xas);
-		if (!xas_error(&xas))
-			break;
-		xas_unlock_irq(&xas);
-		if (!xas_nomem(&xas, GFP_KERNEL))
-			goto out;
-	} while (1);
-
 	xas_set(&xas, start);
 	for (index = start; index < end; index++) {
 		struct page *page = xas_next(&xas);
_

Patches currently in -mm which might be from hughd@xxxxxxxxxx are

mm-huge_memory-rename-freeze_page-to-unmap_page.patch
mm-huge_memory-splitting-set-mappingindex-before-unfreeze.patch
mm-huge_memory-fix-lockdep-complaint-on-32-bit-i_size_read.patch
mm-khugepaged-collapse_shmem-stop-if-punched-or-truncated.patch
mm-khugepaged-fix-crashes-due-to-misaccounted-holes.patch
mm-khugepaged-collapse_shmem-remember-to-clear-holes.patch
mm-khugepaged-minor-reorderings-in-collapse_shmem.patch
mm-khugepaged-collapse_shmem-without-freezing-new_page.patch
mm-khugepaged-collapse_shmem-do-not-crash-on-compound.patch
mm-khugepaged-fix-the-xas_create_range-error-path.patch
mm-put_and_wait_on_page_locked-while-page-is-migrated.patch
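For readers who want the shape of the fix in isolation, the sketch below is a
hypothetical, self-contained userspace analogue of the ordering the patch
establishes; it is not kernel code.  The helper names create_range() and
find_more_memory() and the pthread mutex are invented stand-ins for
xas_create_range()/xas_nomem() and __SetPageLocked() respectively, and the
printed "charge" messages stand in for the memcg charge and
mem_cgroup_cancel_charge().  The point it models is the one the changelog
makes: the step that can fail, together with its retry loop, runs before the
lock is taken, so the error path only has to cancel the charge and never has
to remember to unlock anything.

/*
 * Hypothetical userspace analogue of the error-path ordering above.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t page_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for xas_create_range(): pretend the first try runs out of memory. */
static bool create_range(int attempt)
{
	return attempt > 0;
}

/* Stand-in for xas_nomem(): pretend more memory can be found exactly once. */
static bool find_more_memory(int attempt)
{
	return attempt == 0;
}

int main(void)
{
	int attempt = 0;

	puts("charge taken");			/* memcg charge analogue */

	/* Retry loop shaped like the xas_create_range()/xas_nomem() one. */
	while (!create_range(attempt)) {
		if (!find_more_memory(attempt)) {
			puts("charge cancelled");	/* cancel-charge analogue */
			return EXIT_FAILURE;		/* result = SCAN_FAIL analogue */
		}
		attempt++;
	}

	/*
	 * Only now is the "page" locked: no failure above ever needs to
	 * remember to unlock it, which is the point of moving the block.
	 */
	pthread_mutex_lock(&page_lock);
	puts("new page locked; collapse continues");
	pthread_mutex_unlock(&page_lock);
	return EXIT_SUCCESS;
}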