On Tue, 17 Sep 2024 19:36:47 +0800 kernel test robot <lkp@xxxxxxxxx> wrote:

> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
> head:   4f3e012d4cfd1d9bf837870c961f462ca9f23ebe
> commit: 8b07e88e36961c4785dd13dbdbb5d7977b458940 [12061/12283] mm: shmem: fix khugepaged activation policy for shmem
> config: loongarch-randconfig-s042-20221209 (https://download.01.org/0day-ci/archive/20240917/202409171905.9gqKNeeL-lkp@xxxxxxxxx/config)
> compiler: loongarch64-linux-gcc (GCC) 13.3.0
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240917/202409171905.9gqKNeeL-lkp@xxxxxxxxx/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@xxxxxxxxx>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202409171905.9gqKNeeL-lkp@xxxxxxxxx/
>
> All errors (new ones prefixed by >>):
>
>    loongarch64-linux-ld: mm/khugepaged.o: in function `hugepage_pmd_enabled':
> >> mm/khugepaged.c:435:(.text+0x2338): undefined reference to `shmem_hpage_pmd_enabled'

Thanks.  I think this:

From: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Subject: mm-shmem-fix-khugepaged-activation-policy-for-shmem-fix
Date: Tue Sep 17 05:06:48 AM PDT 2024

fix build with CONFIG_SHMEM=n

Reported-by: kernel test robot <lkp@xxxxxxxxx>
Closes: https://lore.kernel.org/oe-kbuild-all/202409171905.9gqKNeeL-lkp@xxxxxxxxx/
Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/shmem_fs.h |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

--- a/include/linux/shmem_fs.h~mm-shmem-fix-khugepaged-activation-policy-for-shmem-fix
+++ a/include/linux/shmem_fs.h
@@ -114,7 +114,6 @@ int shmem_unuse(unsigned int type);
 unsigned long shmem_allowable_huge_orders(struct inode *inode,
 				struct vm_area_struct *vma, pgoff_t index,
 				loff_t write_end, bool shmem_huge_force);
-bool shmem_hpage_pmd_enabled(void);
 #else
 static inline unsigned long shmem_allowable_huge_orders(struct inode *inode,
 				struct vm_area_struct *vma, pgoff_t index,
@@ -123,6 +122,11 @@ static inline unsigned long shmem_allowa
 	return 0;
 }
 
+#endif
+
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && defined(CONFIG_SHMEM)
+bool shmem_hpage_pmd_enabled(void);
+#else
 static inline bool shmem_hpage_pmd_enabled(void)
 {
 	return false;
_
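
To make the mechanics concrete, here is a small standalone sketch of the
guard pattern the fix relies on.  This is illustrative userspace code, not
kernel code: CONFIG_TRANSPARENT_HUGEPAGE and CONFIG_SHMEM are plain -D
macros here, and a hypothetical provider.c stands in for mm/shmem.c.  When
the shmem macro is absent, callers only ever see the static inline stub,
the call folds to false, and the resulting object file carries no external
reference for the linker to trip over.

/* demo.c - sketch of the header guard pattern.  Build the stub path with:
 *   cc -DCONFIG_TRANSPARENT_HUGEPAGE -o demo demo.c
 * or the "both enabled" path, with a separate provider.c supplying
 * shmem_hpage_pmd_enabled():
 *   cc -DCONFIG_TRANSPARENT_HUGEPAGE -DCONFIG_SHMEM -o demo demo.c provider.c
 */
#include <stdbool.h>
#include <stdio.h>

#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && defined(CONFIG_SHMEM)
/* real symbol expected from another object, like mm/shmem.c */
bool shmem_hpage_pmd_enabled(void);
#else
/* stub: no shmem means no shmem PMD hugepages, and no external reference */
static inline bool shmem_hpage_pmd_enabled(void)
{
	return false;
}
#endif

/* stand-in for hugepage_pmd_enabled() in mm/khugepaged.c */
static bool hugepage_pmd_enabled(void)
{
	return shmem_hpage_pmd_enabled();
}

int main(void)
{
	printf("shmem PMD hugepages enabled: %d\n", hugepage_pmd_enabled());
	return 0;
}

Keeping the guard in the header means the caller in mm/khugepaged.c needs
no config knowledge of its own and stays unchanged across configurations.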