The patch titled
     Subject: secretmem: optimize page_is_secretmem()
has been removed from the -mm tree.  Its filename was
     mm-introduce-memfd_secret-system-call-to-create-secret-memory-areas-fix-3.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Mike Rapoport <rppt@xxxxxxxxxxxxx>
Subject: secretmem: optimize page_is_secretmem()

Kernel test robot reported a -4.2% regression of will-it-scale.per_thread_ops
due to commit "mm: introduce memfd_secret system call to create "secret"
memory areas".

The perf profile of the test indicated that the regression is caused by
page_is_secretmem() called from gup_pte_range() (inlined by gup_pgd_range):

     27.76 +2.5      30.23        perf-profile.children.cycles-pp.gup_pgd_range
      0.00 +3.2       3.19 ±  2%  perf-profile.children.cycles-pp.page_mapping
      0.00 +3.7       3.66 ±  2%  perf-profile.children.cycles-pp.page_is_secretmem

Further analysis showed that the slowdown happens because neither
page_is_secretmem() nor page_mapping() is inlined and, moreover, the multiple
page flag checks in page_mapping() involve calling compound_head() several
times for the same page.

Make page_is_secretmem() inline and replace page_mapping() with page flag
checks that do not imply page-to-head conversion.

Link: https://lkml.kernel.org/r/20210420150049.14031-3-rppt@xxxxxxxxxx
Signed-off-by: Mike Rapoport <rppt@xxxxxxxxxxxxx>
Reported-by: kernel test robot <oliver.sang@xxxxxxxxx>
Cc: Hagen Paul Pfeifer <hagen@xxxxxxxx>
Cc: Alexander Viro <viro@xxxxxxxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Arnd Bergmann <arnd@xxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Christopher Lameter <cl@xxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Elena Reshetova <elena.reshetova@xxxxxxxxx>
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: James Bottomley <jejb@xxxxxxxxxxxxx>
Cc: "Kirill A. Shutemov" <kirill@xxxxxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Mark Rutland <mark.rutland@xxxxxxx>
Cc: Michael Kerrisk <mtk.manpages@xxxxxxxxx>
Cc: Palmer Dabbelt <palmer@xxxxxxxxxxx>
Cc: Palmer Dabbelt <palmerdabbelt@xxxxxxxxxx>
Cc: Paul Walmsley <paul.walmsley@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Rick Edgecombe <rick.p.edgecombe@xxxxxxxxx>
Cc: Roman Gushchin <guro@xxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Cc: Shuah Khan <shuah@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Tycho Andersen <tycho@xxxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/secretmem.h |   26 +++++++++++++++++++++++++-
 mm/secretmem.c            |   12 +-----------
 2 files changed, 26 insertions(+), 12 deletions(-)

--- a/include/linux/secretmem.h~mm-introduce-memfd_secret-system-call-to-create-secret-memory-areas-fix-3
+++ a/include/linux/secretmem.h
@@ -4,8 +4,32 @@

 #ifdef CONFIG_SECRETMEM

+extern const struct address_space_operations secretmem_aops;
+
+static inline bool page_is_secretmem(struct page *page)
+{
+	struct address_space *mapping;
+
+	/*
+	 * Using page_mapping() is quite slow because of the actual call
+	 * instruction and repeated compound_head(page) inside the
+	 * page_mapping() function.
+	 * We know that secretmem pages are not compound and LRU so we can
+	 * save a couple of cycles here.
+	 */
+	if (PageCompound(page) || !PageLRU(page))
+		return false;
+
+	mapping = (struct address_space *)
+		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
+
+	if (mapping != page->mapping)
+		return false;
+
+	return page->mapping->a_ops == &secretmem_aops;
+}
+
 bool vma_is_secretmem(struct vm_area_struct *vma);
-bool page_is_secretmem(struct page *page);

 #else

--- a/mm/secretmem.c~mm-introduce-memfd_secret-system-call-to-create-secret-memory-areas-fix-3
+++ a/mm/secretmem.c
@@ -137,22 +137,12 @@ static void secretmem_freepage(struct pa
 	clear_highpage(page);
 }

-static const struct address_space_operations secretmem_aops = {
+const struct address_space_operations secretmem_aops = {
 	.freepage	= secretmem_freepage,
 	.migratepage	= secretmem_migratepage,
 	.isolate_page	= secretmem_isolate_page,
 };

-bool page_is_secretmem(struct page *page)
-{
-	struct address_space *mapping = page_mapping(page);
-
-	if (!mapping)
-		return false;
-
-	return mapping->a_ops == &secretmem_aops;
-}
-
 static struct vfsmount *secretmem_mnt;

 static struct file *secretmem_file_create(unsigned long flags)
_

Patches currently in -mm which might be from rppt@xxxxxxxxxxxxx are

mm-mmzoneh-simplify-is_highmem_idx.patch
docs-procrst-meminfo-briefly-describe-gaps-in-memory-accounting.patch
include-linux-mmzoneh-add-documentation-for-pfn_valid.patch
memblock-update-initialization-of-reserved-pages.patch
arm64-decouple-check-whether-pfn-is-in-linear-map-from-pfn_valid.patch
arm64-drop-pfn_valid_within-and-simplify-pfn_valid.patch
mmap-make-mlock_future_check-global.patch
riscv-kconfig-make-direct-map-manipulation-options-depend-on-mmu.patch
set_memory-allow-set_direct_map__noflush-for-multiple-pages.patch
set_memory-allow-querying-whether-set_direct_map_-is-actually-enabled.patch
mm-introduce-memfd_secret-system-call-to-create-secret-memory-areas.patch
pm-hibernate-disable-when-there-are-active-secretmem-users.patch
arch-mm-wire-up-memfd_secret-system-call-where-relevant.patch
secretmem-test-add-basic-selftest-for-memfd_secret2.patch
arch-mm-wire-up-memfd_secret-system-call-where-relevant-fix.patch
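
For readers unfamiliar with the call site named in the perf profile above, the
sketch below shows, in simplified form, how a gup_pte_range()-style fast-path
loop consults page_is_secretmem() for every PTE it walks.  This is an
illustrative sketch only, not the real mm/gup.c code: the actual fast GUP path
also handles pgmap references, write permission, page refcount pinning and PTE
unmapping, and the helper name gup_pte_range_sketch() with its reduced
signature is made up for this example.

#include <linux/mm.h>
#include <linux/secretmem.h>

/*
 * Illustrative sketch only (not the real mm/gup.c implementation): a
 * simplified fast-GUP style PTE walk.  page_is_secretmem() runs once per
 * PTE, which is why an out-of-line implementation doing the full
 * page_mapping() work showed up in the will-it-scale profile above.
 */
static int gup_pte_range_sketch(pte_t *ptep, unsigned long addr,
				unsigned long end, struct page **pages,
				int *nr)
{
	do {
		pte_t pte = READ_ONCE(*ptep);
		struct page *page;

		if (!pte_present(pte))
			return 0;

		page = pte_page(pte);

		/*
		 * Secretmem pages are removed from the kernel direct map,
		 * so the lockless fast path bails out and lets the slow
		 * path deal with them.
		 */
		if (page_is_secretmem(page))
			return 0;

		pages[(*nr)++] = page;
	} while (ptep++, addr += PAGE_SIZE, addr != end);

	return 1;
}

With the inlined page_is_secretmem() above, this per-PTE check reduces to two
page flag tests and a pointer comparison instead of a function call into
page_mapping() with its repeated compound_head() work.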