I don't understand this code, so I can't really review it, but:

On 07/29, Song Liu wrote:
>
> This patch introduces a new foll_flag: FOLL_SPLIT_PMD. As the name
> suggests, FOLL_SPLIT_PMD splits the huge pmd for the given mm_struct;
> the underlying huge page stays as-is.
>
> FOLL_SPLIT_PMD is useful for cases where we need to use regular pages,
> but would like to switch back to the huge page and huge pmd later. One
> such example is uprobe. The following patches use FOLL_SPLIT_PMD in
> uprobe.

So after the next patch we have a single user of FOLL_SPLIT_PMD (uprobes)
and a single user of FOLL_SPLIT: arch/s390/mm/gmap.c:thp_split_mm().

Hmm.

> @@ -399,7 +399,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
>  		spin_unlock(ptl);
>  		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
>  	}
> -	if (flags & FOLL_SPLIT) {
> +	if (flags & (FOLL_SPLIT | FOLL_SPLIT_PMD)) {
>  		int ret;
>  		page = pmd_page(*pmd);
>  		if (is_huge_zero_page(page)) {
> @@ -408,7 +408,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
>  			split_huge_pmd(vma, pmd, address);
>  			if (pmd_trans_unstable(pmd))
>  				ret = -EBUSY;
> -		} else {
> +		} else if (flags & FOLL_SPLIT) {
>  			if (unlikely(!try_get_page(page))) {
>  				spin_unlock(ptl);
>  				return ERR_PTR(-ENOMEM);
> @@ -420,6 +420,10 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
>  			put_page(page);
>  			if (pmd_none(*pmd))
>  				return no_page_table(vma, flags);
> +		} else {	/* flags & FOLL_SPLIT_PMD */
> +			spin_unlock(ptl);
> +			split_huge_pmd(vma, pmd, address);
> +			ret = pte_alloc(mm, pmd);

I fail to understand why this new branch differs from the
is_huge_zero_page() case above.

Anyway, "ret = pte_alloc(mm, pmd)" can't be correct. If __pte_alloc()
fails, pte_alloc() will return 1, not an errno. ERR_PTR(1) is not
recognised by IS_ERR(), so this will fool the IS_ERR(page) check in
__get_user_pages() and the caller will treat a bogus pointer as a
valid page.

Oleg.
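
P.S. I am not saying this is the right fix, but a minimal, untested
sketch of what I mean: translate the pte_alloc() return value into a
real errno before it is fed to ERR_PTR(). This assumes -ENOMEM is the
error you actually want to report here:

	} else {	/* flags & FOLL_SPLIT_PMD */
		spin_unlock(ptl);
		split_huge_pmd(vma, pmd, address);
		/*
		 * pte_alloc() returns 1 on failure, not an errno;
		 * ERR_PTR(1) does not satisfy IS_ERR(), so the failure
		 * would go unnoticed. Convert it to -ENOMEM explicitly
		 * so the ERR_PTR(ret) path reports a real error.
		 */
		ret = pte_alloc(mm, pmd) ? -ENOMEM : 0;
	}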