On Tue, Jul 07, 2020 at 09:38:56AM +0800, Wei Yang wrote:
> On Mon, Jul 06, 2020 at 01:07:29PM +0300, Kirill A. Shutemov wrote:
> >On Fri, Jun 26, 2020 at 09:52:15PM +0800, Wei Yang wrote:
> >> Page tables are moved at PMD granularity. This requires that both the
> >> source and destination ranges meet the alignment requirement.
> >>
> >> The current code works because move_huge_pmd() and move_normal_pmd()
> >> check old_addr and new_addr again, and we fall back to move_ptes()
> >> if either of them is not aligned.
> >>
> >> Instead of calculating the extent separately, it is better to
> >> calculate it in one place, so we know when it is not necessary to try
> >> moving a whole PMD. This also makes the logic a little clearer.
> >>
> >> Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxxxxxxxxxx>
> >> Tested-by: Dmitry Osipenko <digetx@xxxxxxxxx>
> >> ---
> >>  mm/mremap.c | 6 +++---
> >>  1 file changed, 3 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/mm/mremap.c b/mm/mremap.c
> >> index de27b12c8a5a..a30b3e86cc99 100644
> >> --- a/mm/mremap.c
> >> +++ b/mm/mremap.c
> >> @@ -258,6 +258,9 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> >>  		extent = next - old_addr;
> >>  		if (extent > old_end - old_addr)
> >>  			extent = old_end - old_addr;
> >> +		next = (new_addr + PMD_SIZE) & PMD_MASK;
> >
> >Please use round_up() for both 'next' calculations.
> >
> 
> I took another close look at this, and it seems this is not a good
> suggestion.
> 
>     round_up(new_addr, PMD_SIZE)
> 
> would be new_addr when new_addr is already PMD_SIZE aligned, which is
> not what we expect.

Maybe round_down(new_addr + PMD_SIZE, PMD_SIZE)?

-- 
 Kirill A. Shutemov