There are two situations:

1) After retaking the mmap lock, the next VMA may have expanded
   downwards, moving hstart below the saved scan address.

2) After khugepaged sleeps and starts again, it will pick up the
   starting address from the global struct khugepaged_scan, and hence
   will pick up the same VMA as in the previous cycle.

In both cases, khugepaged_scan.address > hstart, and the address may
not be aligned to the order we are scanning for. Therefore, explicitly
align the address to that order. Previously this was not a problem
since the address was always PMD-aligned.

Signed-off-by: Dev Jain <dev.jain@xxxxxxx>
---
 mm/khugepaged.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index e1c2c5b89f6d..7c9a758f6817 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2722,6 +2722,9 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 		hstart = round_up(vma->vm_start, PAGE_SIZE << order);
 		if (khugepaged_scan.address < hstart)
 			khugepaged_scan.address = hstart;
+		else
+			khugepaged_scan.address = round_down(khugepaged_scan.address, PAGE_SIZE << order);
+		VM_BUG_ON(khugepaged_scan.address & ((PAGE_SIZE << order) - 1));
 
 		while (khugepaged_scan.address < hend) {
-- 
2.30.2
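
For reference, a minimal userspace sketch (not kernel code) of the
alignment logic above. PAGE_SIZE, round_up() and round_down() here are
stand-in reimplementations of the kernel macros for power-of-two
alignments, and the order and addresses are made-up values chosen to
exercise the else branch, i.e. a saved scan address already past hstart:

#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Stand-ins for the kernel helpers; valid for power-of-two 'a'. */
#define round_up(x, a)   (((x) + ((a) - 1)) & ~((a) - 1))
#define round_down(x, a) ((x) & ~((a) - 1))

int main(void)
{
	unsigned long order = 4;                    /* hypothetical: scanning for order-4 (64K) */
	unsigned long vm_start = 0x7f0000003000UL;  /* hypothetical VMA start */
	unsigned long address = 0x7f0000012000UL;   /* resumed scan address, unaligned to 64K */

	unsigned long hstart = round_up(vm_start, PAGE_SIZE << order);

	/* Mirrors the patched logic in khugepaged_scan_mm_slot(). */
	if (address < hstart)
		address = hstart;
	else
		address = round_down(address, PAGE_SIZE << order);

	printf("hstart  = %#lx\n", hstart);
	printf("address = %#lx (aligned: %s)\n", address,
	       (address & ((PAGE_SIZE << order) - 1)) ? "no" : "yes");
	return 0;
}

With these values the sketch rounds the resumed address down to a 64K
boundary, which is exactly the invariant the new VM_BUG_ON() asserts;
without the else branch the address would stay misaligned.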