On 2022/3/11 12:51, Anshuman Khandual wrote:
> Hi Miaohe,
>
> On 3/10/22 18:42, Miaohe Lin wrote:
>> We can pass FOLL_GET | FOLL_DUMP to follow_page directly to simplify
>> the code a bit.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
>> ---
>>  mm/huge_memory.c | 4 +---
>>  1 file changed, 1 insertion(+), 3 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 3557aabe86fe..418d077da246 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -2838,7 +2838,6 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>>  	 */
>>  	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
>>  		struct vm_area_struct *vma = find_vma(mm, addr);
>> -		unsigned int follflags;
>>  		struct page *page;
>>
>>  		if (!vma || addr < vma->vm_start)
>> @@ -2851,8 +2850,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>>  		}
>>
>>  		/* FOLL_DUMP to ignore special (like zero) pages */
>> -		follflags = FOLL_GET | FOLL_DUMP;
>> -		page = follow_page(vma, addr, follflags);
>> +		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
>>
>>  		if (IS_ERR(page))
>>  			continue;
>
> LGTM, but there is another similar instance in add_page_for_migration()
> inside mm/migrate.c, requiring this exact clean up.
>

Thanks for the comment. That similar case is handled in my previous patch
series[1], which is aimed at migration cleanup and fixup. It might be more
suitable to do that clean up in that specialized series?

[1]: https://lore.kernel.org/linux-mm/20220304093409.25829-4-linmiaohe@xxxxxxxxxx/

> Hence with that change in place.
>
> Reviewed-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>

Thanks again.