[akpm-mm:mm-unstable 74/199] mm/mempolicy.c:2223: warning: expecting prototype for alloc_pages_mpol_noprof(). Prototype was for alloc_pages_mpol() instead

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-unstable
head:   4e567abb6482f6228d23491a25b0d343350e51fe
commit: e1759b2193c7893c152134bfe4dd59cb4765d58c [74/199] mm: enable page allocation tagging
config: sparc-allmodconfig (https://download.01.org/0day-ci/archive/20240328/202403280323.SEPBf4pi-lkp@xxxxxxxxx/config)
compiler: sparc64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240328/202403280323.SEPBf4pi-lkp@xxxxxxxxx/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@xxxxxxxxx>
| Closes: https://lore.kernel.org/oe-kbuild-all/202403280323.SEPBf4pi-lkp@xxxxxxxxx/

All warnings (new ones prefixed by >>):

>> mm/mempolicy.c:2223: warning: expecting prototype for alloc_pages_mpol_noprof(). Prototype was for alloc_pages_mpol() instead
>> mm/mempolicy.c:2298: warning: expecting prototype for vma_alloc_folio_noprof(). Prototype was for vma_alloc_folio() instead
>> mm/mempolicy.c:2326: warning: expecting prototype for alloc_pages_noprof(). Prototype was for alloc_pages() instead

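A likely cause, assuming kernel-doc reports the prototype name with the "_noprof"
suffix stripped: the kernel-doc comments were renamed to the new *_noprof symbols
together with the functions, so the name in each comment no longer matches the
prototype name that kernel-doc extracts (alloc_pages_mpol(), vma_alloc_folio(),
alloc_pages()). If the alloc_hooks() wrapper macros remain the documented entry
points, one hypothetical way to silence the warnings (not confirmed as the applied
fix) would be to keep the comments naming the wrappers while the function
definitions keep the _noprof suffix, e.g.:

	/*
	 * Hypothetical adjustment, not the applied fix: name the wrapper in the
	 * kernel-doc comment so it matches the prototype name kernel-doc reports.
	 */
	/**
	 * alloc_pages_mpol - Allocate pages according to NUMA mempolicy.
	 * @gfp: GFP flags.
	 * @order: Order of the page allocation.
	 * @pol: Pointer to the NUMA mempolicy.
	 * @ilx: Index for interleave mempolicy (also distinguishes alloc_pages()).
	 * @nid: Preferred node (usually numa_node_id() but @mpol may override it).
	 *
	 * Return: The page on success or NULL if allocation fails.
	 */
	struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
			struct mempolicy *pol, pgoff_t ilx, int nid)

The same renaming would then apply to the vma_alloc_folio_noprof() and
alloc_pages_noprof() comments flagged at lines 2298 and 2326.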

vim +2223 mm/mempolicy.c

4c54d94908e089 Feng Tang               2021-09-02  2210  
^1da177e4c3f41 Linus Torvalds          2005-04-16  2211  /**
e1759b2193c789 Suren Baghdasaryan      2024-03-21  2212   * alloc_pages_mpol_noprof - Allocate pages according to NUMA mempolicy.
eb350739605107 Matthew Wilcox (Oracle  2021-04-29  2213)  * @gfp: GFP flags.
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2214   * @order: Order of the page allocation.
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2215   * @pol: Pointer to the NUMA mempolicy.
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2216   * @ilx: Index for interleave mempolicy (also distinguishes alloc_pages()).
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2217   * @nid: Preferred node (usually numa_node_id() but @mpol may override it).
eb350739605107 Matthew Wilcox (Oracle  2021-04-29  2218)  *
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2219   * Return: The page on success or NULL if allocation fails.
^1da177e4c3f41 Linus Torvalds          2005-04-16  2220   */
e1759b2193c789 Suren Baghdasaryan      2024-03-21  2221  struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2222  		struct mempolicy *pol, pgoff_t ilx, int nid)
^1da177e4c3f41 Linus Torvalds          2005-04-16 @2223  {
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2224  	nodemask_t *nodemask;
adf88aa8ea7ff1 Matthew Wilcox (Oracle  2022-05-12  2225) 	struct page *page;
adf88aa8ea7ff1 Matthew Wilcox (Oracle  2022-05-12  2226) 
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2227  	nodemask = policy_nodemask(gfp, pol, ilx, &nid);
4c54d94908e089 Feng Tang               2021-09-02  2228  
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2229  	if (pol->mode == MPOL_PREFERRED_MANY)
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2230  		return alloc_pages_preferred_many(gfp, order, nid, nodemask);
19deb7695e072d David Rientjes          2019-09-04  2231  
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2232  	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2233  	    /* filter "hugepage" allocation, unless from alloc_pages() */
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2234  	    order == HPAGE_PMD_ORDER && ilx != NO_INTERLEAVE_INDEX) {
19deb7695e072d David Rientjes          2019-09-04  2235  		/*
19deb7695e072d David Rientjes          2019-09-04  2236  		 * For hugepage allocation and non-interleave policy which
19deb7695e072d David Rientjes          2019-09-04  2237  		 * allows the current node (or other explicitly preferred
19deb7695e072d David Rientjes          2019-09-04  2238  		 * node) we only try to allocate from the current/preferred
19deb7695e072d David Rientjes          2019-09-04  2239  		 * node and don't fall back to other nodes, as the cost of
19deb7695e072d David Rientjes          2019-09-04  2240  		 * remote accesses would likely offset THP benefits.
19deb7695e072d David Rientjes          2019-09-04  2241  		 *
b27abaccf8e8b0 Dave Hansen             2021-09-02  2242  		 * If the policy is interleave or does not allow the current
19deb7695e072d David Rientjes          2019-09-04  2243  		 * node in its nodemask, we allocate the standard way.
19deb7695e072d David Rientjes          2019-09-04  2244  		 */
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2245  		if (pol->mode != MPOL_INTERLEAVE &&
fa3bea4e1f8202 Gregory Price           2024-02-02  2246  		    pol->mode != MPOL_WEIGHTED_INTERLEAVE &&
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2247  		    (!nodemask || node_isset(nid, *nodemask))) {
cc638f329ef605 Vlastimil Babka         2020-01-13  2248  			/*
cc638f329ef605 Vlastimil Babka         2020-01-13  2249  			 * First, try to allocate THP only on local node, but
cc638f329ef605 Vlastimil Babka         2020-01-13  2250  			 * don't reclaim unnecessarily, just compact.
cc638f329ef605 Vlastimil Babka         2020-01-13  2251  			 */
e1759b2193c789 Suren Baghdasaryan      2024-03-21  2252  			page = __alloc_pages_node_noprof(nid,
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2253  				gfp | __GFP_THISNODE | __GFP_NORETRY, order);
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2254  			if (page || !(gfp & __GFP_DIRECT_RECLAIM))
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2255  				return page;
76e654cc91bbe6 David Rientjes          2019-09-04  2256  			/*
76e654cc91bbe6 David Rientjes          2019-09-04  2257  			 * If hugepage allocations are configured to always
76e654cc91bbe6 David Rientjes          2019-09-04  2258  			 * synchronous compact or the vma has been madvised
76e654cc91bbe6 David Rientjes          2019-09-04  2259  			 * to prefer hugepage backing, retry allowing remote
cc638f329ef605 Vlastimil Babka         2020-01-13  2260  			 * memory with both reclaim and compact as well.
76e654cc91bbe6 David Rientjes          2019-09-04  2261  			 */
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2262  		}
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2263  	}
76e654cc91bbe6 David Rientjes          2019-09-04  2264  
e1759b2193c789 Suren Baghdasaryan      2024-03-21  2265  	page = __alloc_pages_noprof(gfp, order, nid, nodemask);
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2266  
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2267  	if (unlikely(pol->mode == MPOL_INTERLEAVE) && page) {
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2268  		/* skip NUMA_INTERLEAVE_HIT update if numa stats is disabled */
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2269  		if (static_branch_likely(&vm_numa_stat_key) &&
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2270  		    page_to_nid(page) == nid) {
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2271  			preempt_disable();
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2272  			__count_numa_event(page_zone(page), NUMA_INTERLEAVE_HIT);
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2273  			preempt_enable();
19deb7695e072d David Rientjes          2019-09-04  2274  		}
356ff8a9a78fb3 David Rientjes          2018-12-07  2275  	}
356ff8a9a78fb3 David Rientjes          2018-12-07  2276  
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2277  	return page;
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2278  }
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2279  
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2280  /**
e1759b2193c789 Suren Baghdasaryan      2024-03-21  2281   * vma_alloc_folio_noprof - Allocate a folio for a VMA.
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2282   * @gfp: GFP flags.
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2283   * @order: Order of the folio.
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2284   * @vma: Pointer to VMA.
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2285   * @addr: Virtual address of the allocation.  Must be inside @vma.
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2286   * @hugepage: Unused (was: For hugepages try only preferred node if possible).
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2287   *
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2288   * Allocate a folio for a specific address in @vma, using the appropriate
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2289   * NUMA policy.  The caller must hold the mmap_lock of the mm_struct of the
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2290   * VMA to prevent it from going away.  Should be used for all allocations
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2291   * for folios that will be mapped into user space, excepting hugetlbfs, and
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2292   * excepting where direct use of alloc_pages_mpol() is more appropriate.
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2293   *
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2294   * Return: The folio on success or NULL if allocation fails.
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2295   */
e1759b2193c789 Suren Baghdasaryan      2024-03-21  2296  struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct *vma,
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2297  		unsigned long addr, bool hugepage)
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19 @2298  {
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2299  	struct mempolicy *pol;
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2300  	pgoff_t ilx;
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2301  	struct page *page;
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2302  
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2303  	pol = get_vma_policy(vma, addr, order, &ilx);
e1759b2193c789 Suren Baghdasaryan      2024-03-21  2304  	page = alloc_pages_mpol_noprof(gfp | __GFP_COMP, order,
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2305  				       pol, ilx, numa_node_id());
d51e9894d27492 Vlastimil Babka         2017-01-24  2306  	mpol_cond_put(pol);
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2307  	return page_rmappable_folio(page);
f584b68005ac78 Matthew Wilcox (Oracle  2022-04-04  2308) }
e1759b2193c789 Suren Baghdasaryan      2024-03-21  2309  EXPORT_SYMBOL(vma_alloc_folio_noprof);
f584b68005ac78 Matthew Wilcox (Oracle  2022-04-04  2310) 
^1da177e4c3f41 Linus Torvalds          2005-04-16  2311  /**
e1759b2193c789 Suren Baghdasaryan      2024-03-21  2312   * alloc_pages_noprof - Allocate pages.
6421ec764a62c5 Matthew Wilcox (Oracle  2021-04-29  2313)  * @gfp: GFP flags.
6421ec764a62c5 Matthew Wilcox (Oracle  2021-04-29  2314)  * @order: Power of two of number of pages to allocate.
^1da177e4c3f41 Linus Torvalds          2005-04-16  2315   *
6421ec764a62c5 Matthew Wilcox (Oracle  2021-04-29  2316)  * Allocate 1 << @order contiguous pages.  The physical address of the
6421ec764a62c5 Matthew Wilcox (Oracle  2021-04-29  2317)  * first page is naturally aligned (eg an order-3 allocation will be aligned
6421ec764a62c5 Matthew Wilcox (Oracle  2021-04-29  2318)  * to a multiple of 8 * PAGE_SIZE bytes).  The NUMA policy of the current
6421ec764a62c5 Matthew Wilcox (Oracle  2021-04-29  2319)  * process is honoured when in process context.
^1da177e4c3f41 Linus Torvalds          2005-04-16  2320   *
6421ec764a62c5 Matthew Wilcox (Oracle  2021-04-29  2321)  * Context: Can be called from any context, providing the appropriate GFP
6421ec764a62c5 Matthew Wilcox (Oracle  2021-04-29  2322)  * flags are used.
6421ec764a62c5 Matthew Wilcox (Oracle  2021-04-29  2323)  * Return: The page on success or NULL if allocation fails.
^1da177e4c3f41 Linus Torvalds          2005-04-16  2324   */
e1759b2193c789 Suren Baghdasaryan      2024-03-21  2325  struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order)
^1da177e4c3f41 Linus Torvalds          2005-04-16 @2326  {
8d90274b3b118c Oleg Nesterov           2014-10-09  2327  	struct mempolicy *pol = &default_policy;
52cd3b074050dd Lee Schermerhorn        2008-04-28  2328  
52cd3b074050dd Lee Schermerhorn        2008-04-28  2329  	/*
52cd3b074050dd Lee Schermerhorn        2008-04-28  2330  	 * No reference counting needed for current->mempolicy
52cd3b074050dd Lee Schermerhorn        2008-04-28  2331  	 * nor system default_policy
52cd3b074050dd Lee Schermerhorn        2008-04-28  2332  	 */
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2333  	if (!in_interrupt() && !(gfp & __GFP_THISNODE))
ddc1a5cbc05dc6 Hugh Dickins            2023-10-19  2334  		pol = get_task_policy(current);
cc9a6c8776615f Mel Gorman              2012-03-21  2335  
e1759b2193c789 Suren Baghdasaryan      2024-03-21  2336  	return alloc_pages_mpol_noprof(gfp, order, pol, NO_INTERLEAVE_INDEX,
e1759b2193c789 Suren Baghdasaryan      2024-03-21  2337  				       numa_node_id());
^1da177e4c3f41 Linus Torvalds          2005-04-16  2338  }
e1759b2193c789 Suren Baghdasaryan      2024-03-21  2339  EXPORT_SYMBOL(alloc_pages_noprof);
^1da177e4c3f41 Linus Torvalds          2005-04-16  2340  

:::::: The code at line 2223 was first introduced by commit
:::::: 1da177e4c3f41524e886b7f1b8a0c1fc7321cac2 Linux-2.6.12-rc2

:::::: TO: Linus Torvalds <torvalds@xxxxxxxxxxxxxxx>
:::::: CC: Linus Torvalds <torvalds@xxxxxxxxxxxxxxx>

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki



