On Fri, Feb 28, 2020 at 8:38 AM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
>
> On Thu, Feb 27, 2020 at 09:47:14PM -0800, Arjun Roy wrote:
> > diff --git a/mm/memory.c b/mm/memory.c
> > index d6f834f7d145..47b28fcc73c2 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -1460,18 +1460,6 @@ static int insert_page_into_pte_locked(struct mm_struct *mm, pte_t *pte,
> >  	return 0;
> >  }
> >
> > -static int insert_page_in_batch_locked(struct mm_struct *mm, pmd_t *pmd,
> > -			unsigned long addr, struct page *page, pgprot_t prot)
> > -{
> > -	int err;
> > -
> > -	if (!page_count(page))
> > -		return -EINVAL;
> > -	err = validate_page_before_insert(page);
> > -	return err ? err : insert_page_into_pte_locked(
> > -		mm, pte_offset_map(pmd, addr), addr, page, prot);
> > -}
> > -
> >  /*
> >   * This is the old fallback for page remapping.
> >   *
> > @@ -1500,8 +1488,21 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
> >  	return retval;
> >  }
> >
> > +#ifdef pte_index
>
> It seems a bit weird like this, don't we usually do this kind of stuff
> with some CONFIG_ARCH_HAS_XX thing?
>
> IMHO all arches should implement pte_index as a static inline; that
> has been the general direction lately.

Based on a comment from Stephen Rothwell, we found out that "static
inline" definitions of pte_index are only used in tile and x86. That's
why Arjun opted for this method, to keep the patch series that fixes
the build breakage small.

Thanks,
Soheil

> Jason
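
To make the technique concrete, below is a minimal standalone sketch
(plain userspace C, not the actual mm/memory.c patch) of the
"#ifdef pte_index" gating being discussed: the batched path is
compiled only if the preprocessor can see a pte_index macro, so no
new CONFIG_ARCH_HAS_* symbol is needed, and an architecture that
provides pte_index only as a static inline function silently gets
the fallback. All names and function bodies here are illustrative
assumptions, not kernel code.

#include <stdio.h>

/* Stand-in for an arch header. Because pte_index is a macro here,
 * the #ifdef below sees it. An arch that defined pte_index as a
 * static inline function instead would leave this undefined, and
 * only the fallback path would be compiled. */
#define pte_index(addr) (((addr) >> 12) & 511)

#ifdef pte_index
/* "Batched" path: compiled only when pte_index is a macro. */
static void insert_pages(unsigned long addr, int nr)
{
	printf("batched insert: pte index %lu, %d pages\n",
	       pte_index(addr), nr);
}
#else
/* Generic fallback: one page at a time. */
static void insert_pages(unsigned long addr, int nr)
{
	int i;

	for (i = 0; i < nr; i++)
		printf("single insert at %#lx\n", addr + i * 4096UL);
}
#endif

int main(void)
{
	insert_pages(0x400000UL, 3);
	return 0;
}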