On Mon, Apr 19, 2021 at 11:15:02AM +0200, David Hildenbrand wrote:
> On 19.04.21 10:42, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@xxxxxxxxxxxxx>
> > 
> > Kernel test robot reported a -4.2% regression of will-it-scale.per_thread_ops
> > due to commit "mm: introduce memfd_secret system call to create "secret"
> > memory areas".
> > 
> > The perf profile of the test indicated that the regression is caused by
> > page_is_secretmem() called from gup_pte_range() (inlined by gup_pgd_range):
> > 
> >   27.76 +2.5  30.23       perf-profile.children.cycles-pp.gup_pgd_range
> >    0.00 +3.2   3.19 ± 2%  perf-profile.children.cycles-pp.page_mapping
> >    0.00 +3.7   3.66 ± 2%  perf-profile.children.cycles-pp.page_is_secretmem
> > 
> > Further analysis showed that the slowdown happens because neither
> > page_is_secretmem() nor page_mapping() is inline and, moreover, the
> > multiple page flag checks in page_mapping() involve calling
> > compound_head() several times for the same page.
> > 
> > Make page_is_secretmem() inline and replace page_mapping() with page flag
> > checks that do not imply page-to-head conversion.
> > 
> > Reported-by: kernel test robot <oliver.sang@xxxxxxxxx>
> > Signed-off-by: Mike Rapoport <rppt@xxxxxxxxxxxxx>
> > ---
> > 
> > @Andrew,
> > The patch is vs v5.12-rc7-mmots-2021-04-15-16-28; I'd appreciate it if it
> > could be added as a fixup to the memfd_secret series.
> > 
> >  include/linux/secretmem.h | 26 +++++++++++++++++++++++++-
> >  mm/secretmem.c            | 12 +-----------
> >  2 files changed, 26 insertions(+), 12 deletions(-)
> > 
> > diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
> > index 907a6734059c..b842b38cbeb1 100644
> > --- a/include/linux/secretmem.h
> > +++ b/include/linux/secretmem.h
> > @@ -4,8 +4,32 @@
> >  #ifdef CONFIG_SECRETMEM
> > +extern const struct address_space_operations secretmem_aops;
> > +
> > +static inline bool page_is_secretmem(struct page *page)
> > +{
> > +	struct address_space *mapping;
> > +
> > +	/*
> > +	 * Using page_mapping() is quite slow because of the actual call
> > +	 * instruction and repeated compound_head(page) inside the
> > +	 * page_mapping() function.
> > +	 * We know that secretmem pages are not compound and LRU so we can
> > +	 * save a couple of cycles here.
> > +	 */
> > +	if (PageCompound(page) || !PageLRU(page))
> > +		return false;
> 
> I'd assume secretmem pages are rare in basically every setup out there. So
> maybe throwing in a couple of likely()/unlikely() might make sense.

I'd say we could do unlikely(page_is_secretmem()) at the call sites. Here,
inside the helper, I can hardly estimate which pages are going to be checked.

> > +
> > +	mapping = (struct address_space *)
> > +		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
> > +
> 
> Not sure if open-coding page_mapping is really a good idea here -- or even
> necessary after the fast path above is in place. Anyhow, just my 2 cents.

Well, most of the -4.2% performance regression kbuild reported was due to
the repeated compound_head(page) calls in page_mapping(), so the whole point
of this patch is to avoid calling page_mapping().

> The idea of the patch makes sense to me.

-- 
Sincerely yours,
Mike.
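
A minimal, self-contained sketch of the call-site hint discussed above,
re-creating the kernel's likely()/unlikely() macros in user space so it can
be compiled and run as-is. The pfn-based page_is_secretmem() stand-in and
the scan loop are illustrative assumptions, not the kernel implementation:

#include <stdbool.h>
#include <stdio.h>

/* Kernel-style branch hints, re-created here for a user-space demo. */
#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)

/* Stand-in predicate: false for almost every "page", rare by construction. */
static inline bool page_is_secretmem(unsigned long pfn)
{
	return pfn % 100000 == 0;
}

int main(void)
{
	unsigned long pfn, secret = 0;

	/*
	 * Caller-side hint: the compiler lays out the loop so the
	 * not-taken branch is the fall-through, fast path.
	 */
	for (pfn = 1; pfn <= 1000000; pfn++)
		if (unlikely(page_is_secretmem(pfn)))
			secret++;

	printf("%lu secretmem pages seen\n", secret);
	return 0;
}

The hint only tells the compiler which way the test is expected to go; when
secretmem pages really are rare, the common case stays on the straight-line
path, which is the point of annotating the call sites rather than the helper.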