On Fri, Aug 24, 2018 at 08:05:46PM -0400, Zi Yan wrote:
> Hi Jérôme,
>
> On 24 Aug 2018, at 15:25, jglisse@xxxxxxxxxx wrote:
>
> > From: Jérôme Glisse <jglisse@xxxxxxxxxx>
> >
> > Before this patch a migration pmd entry (!pmd_present()) would have
> > been treated as a bad entry (pmd_bad() returns true on a migration
> > pmd entry). The outcome was that the device driver would believe
> > that the range covered by the pmd was bad and would either SIGBUS
> > or simply kill all the device's threads (each device driver decides
> > how to react when the device tries to access a poisonous or invalid
> > range of memory).
> >
> > This patch explicitly handles the case of migration pmd entries,
> > which are non-present pmd entries, and either waits for the
> > migration to finish or reports an empty range (when the device is
> > just trying to pre-fill a range of virtual addresses and thus does
> > not want to wait or trigger a page fault).
> >
> > Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
> > Signed-off-by: Jérôme Glisse <jglisse@xxxxxxxxxx>
> > Cc: Ralph Campbell <rcampbell@xxxxxxxxxx>
> > Cc: John Hubbard <jhubbard@xxxxxxxxxx>
> > Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> > ---
> >  mm/hmm.c | 45 +++++++++++++++++++++++++++++++++++++++------
> >  1 file changed, 39 insertions(+), 6 deletions(-)
> >
> > diff --git a/mm/hmm.c b/mm/hmm.c
> > index a16678d08127..659efc9aada6 100644
> > --- a/mm/hmm.c
> > +++ b/mm/hmm.c
> > @@ -577,22 +577,47 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
> >  {
> >  	struct hmm_vma_walk *hmm_vma_walk = walk->private;
> >  	struct hmm_range *range = hmm_vma_walk->range;
> > +	struct vm_area_struct *vma = walk->vma;
> >  	uint64_t *pfns = range->pfns;
> >  	unsigned long addr = start, i;
> >  	pte_t *ptep;
> > +	pmd_t pmd;
> >
> > -	i = (addr - range->start) >> PAGE_SHIFT;
> >
> >  again:
> > -	if (pmd_none(*pmdp))
> > +	pmd = READ_ONCE(*pmdp);
> > +	if (pmd_none(pmd))
> >  		return hmm_vma_walk_hole(start, end, walk);
> >
> > -	if (pmd_huge(*pmdp) && (range->vma->vm_flags & VM_HUGETLB))
> > +	if (pmd_huge(pmd) && (range->vma->vm_flags & VM_HUGETLB))
> >  		return hmm_pfns_bad(start, end, walk);
> >
> > -	if (pmd_devmap(*pmdp) || pmd_trans_huge(*pmdp)) {
> > -		pmd_t pmd;
> > +	if (!pmd_present(pmd)) {
> > +		swp_entry_t entry = pmd_to_swp_entry(pmd);
> > +
> > +		if (is_migration_entry(entry)) {
>
> I think you should check thp_migration_supported() here, since PMD
> migration is only enabled on x86_64 systems. Other architectures
> should treat PMD migration entries as bad.

You are right. Andrew, do you want me to repost, or can you edit the
above if to:

	if (thp_migration_supported() && is_migration_entry(entry)) {

Cheers,
Jérôme
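
P.S. For completeness, the whole !pmd_present() branch could then look
roughly like the sketch below. This is only an illustration, not the
exact body of the patch (the quoted hunk above is truncated); it
assumes the hmm_vma_walk->fault flag from the current mm/hmm.c and
pmd_migration_entry_wait() from <linux/swapops.h>:

	if (!pmd_present(pmd)) {
		swp_entry_t entry = pmd_to_swp_entry(pmd);

		/*
		 * Sketch only: PMD migration entries can only exist
		 * when the architecture supports THP migration (x86_64
		 * today), so gate on thp_migration_supported(); on any
		 * other architecture a non-present pmd is a bad range.
		 */
		if (thp_migration_supported() && is_migration_entry(entry)) {
			/*
			 * Either report the range as empty when the
			 * caller is only pre-filling and does not want
			 * to fault, or wait for the migration to finish
			 * and retry the walk from the again label.
			 */
			if (!hmm_vma_walk->fault)
				return hmm_vma_walk_hole(start, end, walk);

			pmd_migration_entry_wait(vma->vm_mm, pmdp);
			goto again;
		}

		return hmm_pfns_bad(start, end, walk);
	}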