Re: [PATCH v5 1/4] mm: arch: remove indirection level in alloc_zeroed_user_highpage_movable()

Hi Peter,

Thank you for the patch! Here is something to improve:

[auto build test ERROR on arm64/for-next/core]
[also build test ERROR on m68knommu/for-next s390/features tip/x86/core tip/perf/core linus/master v5.13-rc4 next-20210601]
[cannot apply to hnaz-linux-mm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
When submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Peter-Collingbourne/arm64-improve-efficiency-of-setting-tags-for-user-pages/20210602-035317
base:   https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
config: arm64-allyesconfig (attached as .config)
compiler: aarch64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/1344809b8a7ee8c81147702ffae35c577aab33ba
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Peter-Collingbourne/arm64-improve-efficiency-of-setting-tags-for-user-pages/20210602-035317
        git checkout 1344809b8a7ee8c81147702ffae35c577aab33ba
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arm64 

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@xxxxxxxxx>

Note: the linux-review/Peter-Collingbourne/arm64-improve-efficiency-of-setting-tags-for-user-pages/20210602-035317 HEAD ead2e307c4f44ebc1cfe727a2bfc28ceec0bc4e9 builds fine.
      The error only affects intermediate commits, which hurts bisectability.

All errors (new ones prefixed by >>):

   mm/memory.c: In function 'wp_page_copy':
>> mm/memory.c:2892:26: error: macro "alloc_zeroed_user_highpage_movable" requires 3 arguments, but only 2 given
    2892 |              vmf->address);
         |                          ^
   In file included from include/linux/shm.h:6,
                    from include/linux/sched.h:16,
                    from include/linux/hardirq.h:9,
                    from include/linux/interrupt.h:11,
                    from include/linux/kernel_stat.h:9,
                    from mm/memory.c:42:
   arch/arm64/include/asm/page.h:31: note: macro "alloc_zeroed_user_highpage_movable" defined here
      31 | #define alloc_zeroed_user_highpage_movable(movableflags, vma, vaddr) \
         | 
>> mm/memory.c:2891:14: error: 'alloc_zeroed_user_highpage_movable' undeclared (first use in this function)
    2891 |   new_page = alloc_zeroed_user_highpage_movable(vma,
         |              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   mm/memory.c:2891:14: note: each undeclared identifier is reported only once for each function it appears in
   mm/memory.c: In function 'do_anonymous_page':
   mm/memory.c:3589:61: error: macro "alloc_zeroed_user_highpage_movable" requires 3 arguments, but only 2 given
    3589 |  page = alloc_zeroed_user_highpage_movable(vma, vmf->address);
         |                                                             ^
   In file included from include/linux/shm.h:6,
                    from include/linux/sched.h:16,
                    from include/linux/hardirq.h:9,
                    from include/linux/interrupt.h:11,
                    from include/linux/kernel_stat.h:9,
                    from mm/memory.c:42:
   arch/arm64/include/asm/page.h:31: note: macro "alloc_zeroed_user_highpage_movable" defined here
      31 | #define alloc_zeroed_user_highpage_movable(movableflags, vma, vaddr) \
         | 
   mm/memory.c:3589:9: error: 'alloc_zeroed_user_highpage_movable' undeclared (first use in this function)
    3589 |  page = alloc_zeroed_user_highpage_movable(vma, vmf->address);
         |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


vim +/alloc_zeroed_user_highpage_movable +2892 mm/memory.c

4e047f89777122 Shachar Raindel    2015-04-14  2860  
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2861  /*
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2862   * Handle the case of a page which we actually need to copy to a new page.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2863   *
c1e8d7c6a7a682 Michel Lespinasse  2020-06-08  2864   * Called with mmap_lock locked and the old page referenced, but
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2865   * without the ptl held.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2866   *
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2867   * High level logic flow:
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2868   *
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2869   * - Allocate a page, copy the content of the old page to the new one.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2870   * - Handle book keeping and accounting - cgroups, mmu-notifiers, etc.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2871   * - Take the PTL. If the pte changed, bail out and release the allocated page
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2872   * - If the pte is still the way we remember it, update the page table and all
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2873   *   relevant references. This includes dropping the reference the page-table
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2874   *   held to the old page, as well as updating the rmap.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2875   * - In any case, unlock the PTL and drop the reference we took to the old page.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2876   */
2b7403035459c7 Souptick Joarder   2018-08-23  2877  static vm_fault_t wp_page_copy(struct vm_fault *vmf)
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2878  {
82b0f8c39a3869 Jan Kara           2016-12-14  2879  	struct vm_area_struct *vma = vmf->vma;
bae473a423f65e Kirill A. Shutemov 2016-07-26  2880  	struct mm_struct *mm = vma->vm_mm;
a41b70d6dfc28b Jan Kara           2016-12-14  2881  	struct page *old_page = vmf->page;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2882  	struct page *new_page = NULL;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2883  	pte_t entry;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2884  	int page_copied = 0;
ac46d4f3c43241 Jérôme Glisse      2018-12-28  2885  	struct mmu_notifier_range range;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2886  
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2887  	if (unlikely(anon_vma_prepare(vma)))
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2888  		goto oom;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2889  
2994302bc8a171 Jan Kara           2016-12-14  2890  	if (is_zero_pfn(pte_pfn(vmf->orig_pte))) {
82b0f8c39a3869 Jan Kara           2016-12-14 @2891  		new_page = alloc_zeroed_user_highpage_movable(vma,
82b0f8c39a3869 Jan Kara           2016-12-14 @2892  							      vmf->address);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2893  		if (!new_page)
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2894  			goto oom;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2895  	} else {
bae473a423f65e Kirill A. Shutemov 2016-07-26  2896  		new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma,
82b0f8c39a3869 Jan Kara           2016-12-14  2897  				vmf->address);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2898  		if (!new_page)
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2899  			goto oom;
83d116c53058d5 Jia He             2019-10-11  2900  
83d116c53058d5 Jia He             2019-10-11  2901  		if (!cow_user_page(new_page, old_page, vmf)) {
83d116c53058d5 Jia He             2019-10-11  2902  			/*
83d116c53058d5 Jia He             2019-10-11  2903  			 * COW failed, if the fault was solved by other,
83d116c53058d5 Jia He             2019-10-11  2904  			 * it's fine. If not, userspace would re-fault on
83d116c53058d5 Jia He             2019-10-11  2905  			 * the same address and we will handle the fault
83d116c53058d5 Jia He             2019-10-11  2906  			 * from the second attempt.
83d116c53058d5 Jia He             2019-10-11  2907  			 */
83d116c53058d5 Jia He             2019-10-11  2908  			put_page(new_page);
83d116c53058d5 Jia He             2019-10-11  2909  			if (old_page)
83d116c53058d5 Jia He             2019-10-11  2910  				put_page(old_page);
83d116c53058d5 Jia He             2019-10-11  2911  			return 0;
83d116c53058d5 Jia He             2019-10-11  2912  		}
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2913  	}
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2914  
d9eb1ea2bf8734 Johannes Weiner    2020-06-03  2915  	if (mem_cgroup_charge(new_page, mm, GFP_KERNEL))
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2916  		goto oom_free_new;
9d82c69438d0df Johannes Weiner    2020-06-03  2917  	cgroup_throttle_swaprate(new_page, GFP_KERNEL);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2918  
eb3c24f305e56c Mel Gorman         2015-06-24  2919  	__SetPageUptodate(new_page);
eb3c24f305e56c Mel Gorman         2015-06-24  2920  
7269f999934b28 Jérôme Glisse      2019-05-13  2921  	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm,
6f4f13e8d9e27c Jérôme Glisse      2019-05-13  2922  				vmf->address & PAGE_MASK,
ac46d4f3c43241 Jérôme Glisse      2018-12-28  2923  				(vmf->address & PAGE_MASK) + PAGE_SIZE);
ac46d4f3c43241 Jérôme Glisse      2018-12-28  2924  	mmu_notifier_invalidate_range_start(&range);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2925  
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2926  	/*
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2927  	 * Re-check the pte - we dropped the lock
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2928  	 */
82b0f8c39a3869 Jan Kara           2016-12-14  2929  	vmf->pte = pte_offset_map_lock(mm, vmf->pmd, vmf->address, &vmf->ptl);
2994302bc8a171 Jan Kara           2016-12-14  2930  	if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2931  		if (old_page) {
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2932  			if (!PageAnon(old_page)) {
eca56ff906bdd0 Jerome Marchand    2016-01-14  2933  				dec_mm_counter_fast(mm,
eca56ff906bdd0 Jerome Marchand    2016-01-14  2934  						mm_counter_file(old_page));
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2935  				inc_mm_counter_fast(mm, MM_ANONPAGES);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2936  			}
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2937  		} else {
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2938  			inc_mm_counter_fast(mm, MM_ANONPAGES);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2939  		}
2994302bc8a171 Jan Kara           2016-12-14  2940  		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2941  		entry = mk_pte(new_page, vma->vm_page_prot);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2942  		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
111fe7186b29d1 Nicholas Piggin    2020-12-29  2943  
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2944  		/*
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2945  		 * Clear the pte entry and flush it first, before updating the
111fe7186b29d1 Nicholas Piggin    2020-12-29  2946  		 * pte with the new entry, to keep TLBs on different CPUs in
111fe7186b29d1 Nicholas Piggin    2020-12-29  2947  		 * sync. This code used to set the new PTE then flush TLBs, but
111fe7186b29d1 Nicholas Piggin    2020-12-29  2948  		 * that left a window where the new PTE could be loaded into
111fe7186b29d1 Nicholas Piggin    2020-12-29  2949  		 * some TLBs while the old PTE remains in others.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2950  		 */
82b0f8c39a3869 Jan Kara           2016-12-14  2951  		ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
82b0f8c39a3869 Jan Kara           2016-12-14  2952  		page_add_new_anon_rmap(new_page, vma, vmf->address, false);
b518154e59aab3 Joonsoo Kim        2020-08-11  2953  		lru_cache_add_inactive_or_unevictable(new_page, vma);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2954  		/*
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2955  		 * We call the notify macro here because, when using secondary
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2956  		 * mmu page tables (such as kvm shadow page tables), we want the
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2957  		 * new page to be mapped directly into the secondary page table.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2958  		 */
82b0f8c39a3869 Jan Kara           2016-12-14  2959  		set_pte_at_notify(mm, vmf->address, vmf->pte, entry);
82b0f8c39a3869 Jan Kara           2016-12-14  2960  		update_mmu_cache(vma, vmf->address, vmf->pte);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2961  		if (old_page) {
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2962  			/*
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2963  			 * Only after switching the pte to the new page may
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2964  			 * we remove the mapcount here. Otherwise another
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2965  			 * process may come and find the rmap count decremented
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2966  			 * before the pte is switched to the new page, and
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2967  			 * "reuse" the old page writing into it while our pte
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2968  			 * here still points into it and can be read by other
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2969  			 * threads.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2970  			 *
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2971  			 * The critical issue is to order this
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2972  			 * page_remove_rmap with the ptp_clear_flush above.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2973  			 * Those stores are ordered by (if nothing else,)
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2974  			 * the barrier present in the atomic_add_negative
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2975  			 * in page_remove_rmap.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2976  			 *
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2977  			 * Then the TLB flush in ptep_clear_flush ensures that
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2978  			 * no process can access the old page before the
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2979  			 * decremented mapcount is visible. And the old page
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2980  			 * cannot be reused until after the decremented
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2981  			 * mapcount is visible. So transitively, TLBs to
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2982  			 * old page will be flushed before it can be reused.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2983  			 */
d281ee61451835 Kirill A. Shutemov 2016-01-15  2984  			page_remove_rmap(old_page, false);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2985  		}
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2986  
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2987  		/* Free the old page.. */
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2988  		new_page = old_page;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2989  		page_copied = 1;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2990  	} else {
7df676974359f9 Bibo Mao           2020-05-27  2991  		update_mmu_tlb(vma, vmf->address, vmf->pte);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2992  	}
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2993  
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2994  	if (new_page)
09cbfeaf1a5a67 Kirill A. Shutemov 2016-04-01  2995  		put_page(new_page);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2996  
82b0f8c39a3869 Jan Kara           2016-12-14  2997  	pte_unmap_unlock(vmf->pte, vmf->ptl);
4645b9fe84bf48 Jérôme Glisse      2017-11-15  2998  	/*
4645b9fe84bf48 Jérôme Glisse      2017-11-15  2999  	 * No need to double call mmu_notifier->invalidate_range() callback as
4645b9fe84bf48 Jérôme Glisse      2017-11-15  3000  	 * the above ptep_clear_flush_notify() did already call it.
4645b9fe84bf48 Jérôme Glisse      2017-11-15  3001  	 */
ac46d4f3c43241 Jérôme Glisse      2018-12-28  3002  	mmu_notifier_invalidate_range_only_end(&range);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3003  	if (old_page) {
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3004  		/*
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3005  		 * Don't let another task, with possibly unlocked vma,
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3006  		 * keep the mlocked page.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3007  		 */
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3008  		if (page_copied && (vma->vm_flags & VM_LOCKED)) {
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3009  			lock_page(old_page);	/* LRU manipulation */
e90309c9f7722d Kirill A. Shutemov 2016-01-15  3010  			if (PageMlocked(old_page))
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3011  				munlock_vma_page(old_page);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3012  			unlock_page(old_page);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3013  		}
09cbfeaf1a5a67 Kirill A. Shutemov 2016-04-01  3014  		put_page(old_page);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3015  	}
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3016  	return page_copied ? VM_FAULT_WRITE : 0;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3017  oom_free_new:
09cbfeaf1a5a67 Kirill A. Shutemov 2016-04-01  3018  	put_page(new_page);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3019  oom:
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3020  	if (old_page)
09cbfeaf1a5a67 Kirill A. Shutemov 2016-04-01  3021  		put_page(old_page);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3022  	return VM_FAULT_OOM;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3023  }
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3024  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@xxxxxxxxxxxx

Attachment: .config.gz
Description: application/gzip

