Re: [PATCH 2/2] mm: gup: do not call try_grab_folio() in slow path

On 6/4/24 7:57 PM, kernel test robot wrote:
Hi Yang,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Yang-Shi/mm-gup-do-not-call-try_grab_folio-in-slow-path/20240605-075027
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20240604234858.948986-2-yang%40os.amperecomputing.com
patch subject: [PATCH 2/2] mm: gup: do not call try_grab_folio() in slow path
config: openrisc-allnoconfig (https://download.01.org/0day-ci/archive/20240605/202406051039.9m00gwIx-lkp@xxxxxxxxx/config)
compiler: or1k-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240605/202406051039.9m00gwIx-lkp@xxxxxxxxx/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@xxxxxxxxx>
| Closes: https://lore.kernel.org/oe-kbuild-all/202406051039.9m00gwIx-lkp@xxxxxxxxx/

All warnings (new ones prefixed by >>):

mm/gup.c:131:22: warning: 'try_grab_folio_fast' defined but not used [-Wunused-function]
      131 | static struct folio *try_grab_folio_fast(struct page *page, int refs,
          |                      ^~~~~~~~~~~~~~~~~~~

Thanks for reporting the problem. It seems the try_grab_folio_fast() definition should be protected by CONFIG_HAVE_FAST_GUP; I will fix it in v2.
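
For reference, a rough sketch of the kind of guard I have in mind (not the actual v2 change): keep the definition inside a CONFIG_HAVE_FAST_GUP section, next to its fast-path callers, so configs without fast GUP (such as this openrisc-allnoconfig build) never compile the unused helper:

#ifdef CONFIG_HAVE_FAST_GUP
/*
 * Sketch only: when the helper is defined under CONFIG_HAVE_FAST_GUP,
 * configs that do not build the fast GUP path never see the function,
 * so -Wunused-function cannot fire on it.
 */
static struct folio *try_grab_folio_fast(struct page *page, int refs,
					 unsigned int flags)
{
	/* ... body unchanged, see the quoted context below ... */
}
#endif /* CONFIG_HAVE_FAST_GUP */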


vim +/try_grab_folio_fast +131 mm/gup.c

    101	
    102	/**
    103	 * try_grab_folio_fast() - Attempt to get or pin a folio in fast path.
    104	 * @page:  pointer to page to be grabbed
    105	 * @refs:  the value to (effectively) add to the folio's refcount
    106	 * @flags: gup flags: these are the FOLL_* flag values.
    107	 *
    108	 * "grab" names in this file mean, "look at flags to decide whether to use
    109	 * FOLL_PIN or FOLL_GET behavior, when incrementing the folio's refcount".
    110	 *
    111	 * Either FOLL_PIN or FOLL_GET (or neither) must be set, but not both at the
    112	 * same time. (That's true throughout the get_user_pages*() and
    113	 * pin_user_pages*() APIs.) Cases:
    114	 *
    115	 *    FOLL_GET: folio's refcount will be incremented by @refs.
    116	 *
    117	 *    FOLL_PIN on large folios: folio's refcount will be incremented by
    118	 *    @refs, and its pincount will be incremented by @refs.
    119	 *
    120	 *    FOLL_PIN on single-page folios: folio's refcount will be incremented by
    121	 *    @refs * GUP_PIN_COUNTING_BIAS.
    122	 *
    123	 * Return: The folio containing @page (with refcount appropriately
    124	 * incremented) for success, or NULL upon failure. If neither FOLL_GET
    125	 * nor FOLL_PIN was set, that's considered failure, and furthermore,
    126	 * a likely bug in the caller, so a warning is also emitted.
    127	 *
    128	 * It uses the add-ref-unless-zero primitive to elevate the folio refcount
    129	 * and must be called in the fast path only.
    130	 */
  > 131	static struct folio *try_grab_folio_fast(struct page *page, int refs,
    132						 unsigned int flags)
    133	{
    134		struct folio *folio;
    135	
    136		/* Warn if this is not called from the fast GUP path */
    137		VM_WARN_ON_ONCE(!irqs_disabled());
    138	
    139		if (WARN_ON_ONCE((flags & (FOLL_GET | FOLL_PIN)) == 0))
    140			return NULL;
    141	
    142		if (unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)))
    143			return NULL;
    144	
    145		if (flags & FOLL_GET)
    146			return try_get_folio(page, refs);
    147	
    148		/* FOLL_PIN is set */
    149	
    150		/*
    151		 * Don't take a pin on the zero page - it's not going anywhere
    152		 * and it is used in a *lot* of places.
    153		 */
    154		if (is_zero_page(page))
    155			return page_folio(page);
    156	
    157		folio = try_get_folio(page, refs);
    158		if (!folio)
    159			return NULL;
    160	
    161		/*
    162		 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
    163		 * right zone, so fail and let the caller fall back to the slow
    164		 * path.
    165		 */
    166		if (unlikely((flags & FOLL_LONGTERM) &&
    167			     !folio_is_longterm_pinnable(folio))) {
    168			if (!put_devmap_managed_folio_refs(folio, refs))
    169				folio_put_refs(folio, refs);
    170			return NULL;
    171		}
    172	
    173		/*
    174		 * When pinning a large folio, use an exact count to track it.
    175		 *
    176		 * However, be sure to *also* increment the normal folio
    177		 * refcount field at least once, so that the folio really
    178		 * is pinned.  That's why the refcount from the earlier
    179		 * try_get_folio() is left intact.
    180		 */
    181		if (folio_test_large(folio))
    182			atomic_add(refs, &folio->_pincount);
    183		else
    184			folio_ref_add(folio,
    185					refs * (GUP_PIN_COUNTING_BIAS - 1));
    186		/*
    187		 * Adjust the pincount before re-checking the PTE for changes.
    188		 * This is essentially a smp_mb() and is paired with a memory
    189		 * barrier in folio_try_share_anon_rmap_*().
    190		 */
    191		smp_mb__after_atomic();
    192	
    193		node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, refs);
    194	
    195		return folio;
    196	}
    197	
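
As a side note for reviewers, the pin accounting described in the kernel-doc above can be modelled with a small stand-alone program. The GUP_PIN_COUNTING_BIAS value of 1024 below is assumed to match include/linux/mm.h, and the deltas are only an illustration of that comment, not code from this series:

#include <assert.h>

/* Assumed to match GUP_PIN_COUNTING_BIAS in include/linux/mm.h (1 << 10). */
#define GUP_PIN_COUNTING_BIAS 1024

int main(void)
{
	int refs = 2;

	/* FOLL_GET: the folio refcount simply grows by refs. */
	int get_refcount_delta = refs;

	/* FOLL_PIN on a single-page folio: refcount grows by refs * BIAS. */
	int pin_small_refcount_delta = refs * GUP_PIN_COUNTING_BIAS;

	/*
	 * FOLL_PIN on a large folio: refcount grows by refs, and the
	 * separate _pincount field also grows by refs.
	 */
	int pin_large_refcount_delta = refs;
	int pin_large_pincount_delta = refs;

	assert(get_refcount_delta == 2);
	assert(pin_small_refcount_delta == 2048);
	assert(pin_large_refcount_delta == 2 && pin_large_pincount_delta == 2);
	return 0;
}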





