The patch titled
     Subject: mm-thp-handle-page-cache-thp-correctly-in-pagetranscompoundmap-v4
has been removed from the -mm tree.  Its filename was
     mm-thp-handle-page-cache-thp-correctly-in-pagetranscompoundmap-v4.patch

This patch was dropped because it was folded into mm-thp-handle-page-cache-thp-correctly-in-pagetranscompoundmap.patch

------------------------------------------------------
From: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
Subject: mm-thp-handle-page-cache-thp-correctly-in-pagetranscompoundmap-v4

Link: http://lkml.kernel.org/r/1571865575-42913-1-git-send-email-yang.shi@xxxxxxxxxxxxxxxxx
Fixes: dd78fedde4b9 ("rmap: support file thp")
Signed-off-by: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
Reported-by: Gang Deng <gavin.dg@xxxxxxxxxxxxxxxxx>
Tested-by: Gang Deng <gavin.dg@xxxxxxxxxxxxxxxxx>
Suggested-by: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/mm.h         |    5 --
 include/linux/mm_types.h   |    5 ++
 include/linux/page-flags.h |   70 ++++++++++++++++++-----------
 3 files changed, 42 insertions(+), 38 deletions(-)

--- a/include/linux/mm.h~mm-thp-handle-page-cache-thp-correctly-in-pagetranscompoundmap-v4
+++ a/include/linux/mm.h
@@ -695,11 +695,6 @@ static inline void *kvcalloc(size_t n, s
 
 extern void kvfree(const void *addr);
 
-static inline atomic_t *compound_mapcount_ptr(struct page *page)
-{
-	return &page[1].compound_mapcount;
-}
-
 static inline int compound_mapcount(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
--- a/include/linux/mm_types.h~mm-thp-handle-page-cache-thp-correctly-in-pagetranscompoundmap-v4
+++ a/include/linux/mm_types.h
@@ -221,6 +221,11 @@ struct page {
 #endif
 } _struct_page_alignment;
 
+static inline atomic_t *compound_mapcount_ptr(struct page *page)
+{
+	return &page[1].compound_mapcount;
+}
+
 /*
  * Used for sizing the vmemmap region on some architectures
  */
--- a/include/linux/page-flags.h~mm-thp-handle-page-cache-thp-correctly-in-pagetranscompoundmap-v4
+++ a/include/linux/page-flags.h
@@ -610,6 +610,43 @@ static inline int PageTransCompound(stru
 }
 
 /*
+ * PageTransCompoundMap is the same as PageTransCompound, but it also
+ * guarantees the primary MMU has the entire compound page mapped
+ * through pmd_trans_huge, which in turn guarantees the secondary MMUs
+ * can also map the entire compound page. This allows the secondary
+ * MMUs to call get_user_pages() only once for each compound page and
+ * to immediately map the entire compound page with a single secondary
+ * MMU fault. If there will be a pmd split later, the secondary MMUs
+ * will get an update through the MMU notifier invalidation through
+ * split_huge_pmd().
+ *
+ * Unlike PageTransCompound, this is safe to be called only while
+ * split_huge_pmd() cannot run from under us, like if protected by the
+ * MMU notifier, otherwise it may result in page->_mapcount check false
+ * positives.
+ *
+ * We have to treat page cache THP differently since every subpage of it
+ * would get _mapcount inc'ed once it is PMD mapped.  But, it may be PTE
+ * mapped in the current process so comparing subpage's _mapcount to
+ * compound_mapcount to filter out PTE mapped case.
+ */
+static inline int PageTransCompoundMap(struct page *page)
+{
+	struct page *head;
+
+	if (!PageTransCompound(page))
+		return 0;
+
+	if (PageAnon(page))
+		return atomic_read(&page->_mapcount) < 0;
+
+	head = compound_head(page);
+	/* File THP is PMD mapped and not PTE mapped */
+	return atomic_read(&page->_mapcount) ==
+	       atomic_read(compound_mapcount_ptr(head));
+}
+
+/*
  * PageTransTail returns true for both transparent huge pages
  * and hugetlbfs pages, so it should only be called when it's known
  * that hugetlbfs pages aren't involved.
@@ -660,39 +697,6 @@ static inline int TestClearPageDoubleMap
 	return test_and_clear_bit(PG_double_map, &page[1].flags);
 }
 
-/*
- * PageTransCompoundMap is the same as PageTransCompound, but it also
- * guarantees the primary MMU has the entire compound page mapped
- * through pmd_trans_huge, which in turn guarantees the secondary MMUs
- * can also map the entire compound page. This allows the secondary
- * MMUs to call get_user_pages() only once for each compound page and
- * to immediately map the entire compound page with a single secondary
- * MMU fault. If there will be a pmd split later, the secondary MMUs
- * will get an update through the MMU notifier invalidation through
- * split_huge_pmd().
- *
- * Unlike PageTransCompound, this is safe to be called only while
- * split_huge_pmd() cannot run from under us, like if protected by the
- * MMU notifier, otherwise it may result in page->_mapcount check false
- * positives.
- *
- * We have to treat page cache THP differently since every subpage of it
- * would get _mapcount inc'ed once it is PMD mapped. But, it may be PTE
- * mapped in the current process so checking PageDoubleMap flag to rule
- * this out.
- */
-static inline int PageTransCompoundMap(struct page *page)
-{
-	bool pmd_mapped;
-
-	if (PageAnon(page))
-		pmd_mapped = atomic_read(&page->_mapcount) < 0;
-	else
-		pmd_mapped = atomic_read(&page->_mapcount) >= 0 &&
-			!PageDoubleMap(compound_head(page));
-
-	return PageTransCompound(page) && pmd_mapped;
-}
 
 #else
 TESTPAGEFLAG_FALSE(TransHuge)
 TESTPAGEFLAG_FALSE(TransCompound)
_

Patches currently in -mm which might be from yang.shi@xxxxxxxxxxxxxxxxx are

mm-thp-handle-page-cache-thp-correctly-in-pagetranscompoundmap.patch
mm-mempolicy-fix-the-wrong-return-value-and-potential-pages-leak-of-mbind.patch
mm-rmap-use-vm_bug_on-in-__page_check_anon_rmap.patch
mm-vmscan-remove-unused-scan_control-parameter-from-pageout.patch
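
For readers following along outside the kernel tree, the check the v4 patch introduces can be simulated in plain userspace C. Everything below (`fake_page`, `trans_compound_map`, and the plain-int fields) is an illustrative stand-in, not the kernel API: real code reads `page->_mapcount` and `page[1].compound_mapcount` through `atomic_read()` and must hold off `split_huge_pmd()` as the comment above describes.

```c
#include <stdbool.h>

/* Stand-in for struct page; raw mapcount values start at -1 (unmapped). */
struct fake_page {
	int mapcount;          /* simulates page->_mapcount */
	int compound_mapcount; /* simulates page[1].compound_mapcount on the head */
	bool anon;
};

static bool trans_compound_map(const struct fake_page *head,
			       const struct fake_page *subpage)
{
	/* Anon THP: a PTE map of the subpage raises _mapcount to >= 0,
	 * so a negative value means it is only PMD mapped. */
	if (subpage->anon)
		return subpage->mapcount < 0;

	/* File THP: a PMD map bumps every subpage's _mapcount once, so a
	 * purely PMD-mapped subpage has _mapcount equal to the head's
	 * compound_mapcount; an additional PTE map makes it larger. */
	return subpage->mapcount == head->compound_mapcount;
}
```

The comparison in the file-THP branch is the heart of the fix: the old code consulted PageDoubleMap, which misses the case where only this process has the page PTE mapped, whereas comparing the subpage's `_mapcount` against the head's `compound_mapcount` directly distinguishes PMD-only mapping from a mixed PMD+PTE mapping.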