On 04/23/2015 11:03 PM, Kirill A. Shutemov wrote:
> With new refcounting we are going to see THP tail pages mapped with PTE.
> Generic fast GUP relies on page_cache_get_speculative() to obtain a
> reference on the page. page_cache_get_speculative() always fails on tail
> pages, because ->_count on tail pages is always zero.
>
> Let's handle tail pages in gup_pte_range().
>
> The new split_huge_page() will rely on migration entries to freeze the
> page's counts. Rechecking the PTE value after
> page_cache_get_speculative() on the head page should be enough to
> serialize against split.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> Tested-by: Sasha Levin <sasha.levin@xxxxxxxxxx>

Acked-by: Jerome Marchand <jmarchan@xxxxxxxxxx>

> ---
>  mm/gup.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index ebdb39b3e820..eaeeae15006b 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1051,7 +1051,7 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>  		 * for an example see gup_get_pte in arch/x86/mm/gup.c
>  		 */
>  		pte_t pte = READ_ONCE(*ptep);
> -		struct page *page;
> +		struct page *head, *page;
>
>  		/*
>  		 * Similar to the PMD case below, NUMA hinting must take slow
> @@ -1063,15 +1063,17 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>
>  		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
>  		page = pte_page(pte);
> +		head = compound_head(page);
>
> -		if (!page_cache_get_speculative(page))
> +		if (!page_cache_get_speculative(head))
>  			goto pte_unmap;
>
>  		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
> -			put_page(page);
> +			put_page(head);
>  			goto pte_unmap;
>  		}
>
> +		VM_BUG_ON_PAGE(compound_head(page) != head, page);
>  		pages[*nr] = page;
>  		(*nr)++;
>
Attachment:
signature.asc
Description: OpenPGP digital signature