Qi Zheng <qi.zheng@xxxxxxxxx> writes:

> On 2023/5/22 13:05, Hugh Dickins wrote:
>> hmm_vma_walk_pmd() is called through mm_walk, but already has a goto
>> again loop of its own, so take part in that if pte_offset_map() fails.
>>
>> Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
>> ---
>>  mm/hmm.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/mm/hmm.c b/mm/hmm.c
>> index e23043345615..b1a9159d7c92 100644
>> --- a/mm/hmm.c
>> +++ b/mm/hmm.c
>> @@ -381,6 +381,8 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
>>  	}
>>
>>  	ptep = pte_offset_map(pmdp, addr);
>> +	if (!ptep)
>> +		goto again;
>>  	for (; addr < end; addr += PAGE_SIZE, ptep++, hmm_pfns++) {
>>  		int r;
>>
>
> I haven't read the entire patch set yet, but taking a note here.
> hmm_vma_handle_pte() will unmap the pte and then call
> migration_entry_wait() to remap it; that remap may fail, so we need
> to handle this case like below:

I don't see a problem here. Sure, hmm_vma_handle_pte() might return
-EBUSY, but that will get returned up to hmm_range_fault(), which will
retry the whole thing again and presumably fail when looking at the
PMD.

> diff --git a/mm/hmm.c b/mm/hmm.c
> index 6a151c09de5e..eb726ff0981c 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -276,7 +276,8 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
>  	if (is_migration_entry(entry)) {
>  		pte_unmap(ptep);
>  		hmm_vma_walk->last = addr;
> -		migration_entry_wait(walk->mm, pmdp, addr);
> +		if (!migration_entry_wait(walk->mm, pmdp, addr))
> +			return -EAGAIN;
>  		return -EBUSY;
>  	}
>
> @@ -386,6 +387,8 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
>
>  		r = hmm_vma_handle_pte(walk, addr, end, pmdp, ptep,
> 				       hmm_pfns);
>  		if (r) {
> +			if (r == -EAGAIN)
> +				goto again;
>  			/* hmm_vma_handle_pte() did pte_unmap() */
>  			return r;
>  		}
>
> Of course, migration_entry_wait() also needs to be modified.