On 30/06/2021 08.58, Mel Gorman wrote:
On Tue, Jun 29, 2021 at 06:01:12PM +0200, Jesper Dangaard Brouer wrote:
On 29/06/2021 15.48, Chuck Lever wrote:
The call site in __page_pool_alloc_pages_slow() also seems to be
confused on this matter. It should be attended to by someone who
is familiar with that code.
I don't think we need a fix for __page_pool_alloc_pages_slow(), as the array
is guaranteed to be empty.
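The slow path is only reached from page_pool_alloc_pages() after the
cached array has been drained.  Roughly (a simplified sketch, not the
exact upstream code):

struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
{
        struct page *page;

        /* Fast-path: serve from alloc.cache, refilled from the ptr_ring */
        page = __page_pool_get_cached(pool);
        if (page)
                return page;

        /* Slow-path: only reached once the cache came up empty */
        page = __page_pool_alloc_pages_slow(pool, gfp);
        return page;
}

As far as I can see, __page_pool_get_cached() only fails after both
alloc.cache and the ptr_ring refill came up empty, so alloc.count is
zero by the time the slow path runs.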
But a fix would look like this:
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index c137ce308c27..1b04538a3da3 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -245,22 +245,23 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
         if (unlikely(pp_order))
                 return __page_pool_alloc_page_order(pool, gfp);
 
         /* Unnecessary as alloc cache is empty, but guarantees zero count */
-        if (unlikely(pool->alloc.count > 0))
+        if (unlikely(pool->alloc.count > 0)) // ^^^^^^^^^^^^^^^^^^^^^^
                 return pool->alloc.cache[--pool->alloc.count];
 
         /* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
         memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
 
+        /* bulk API ret value also count existing pages, but array is empty */
         nr_pages = alloc_pages_bulk_array(gfp, bulk, pool->alloc.cache);
         if (unlikely(!nr_pages))
                 return NULL;
 
         /* Pages have been filled into alloc.cache array, but count is zero and
          * page element have not been (possibly) DMA mapped.
          */
-        for (i = 0; i < nr_pages; i++) {
+        for (i = pool->alloc.count; i < nr_pages; i++) {
That last part would break, as the loop is updating pool->alloc.count.
The last part, "i = pool->alloc.count", is probably a bad idea.
Just setting pool->alloc.count = nr_pages would break if PP_FLAG_DMA_MAP
was set and page_pool_dma_map() failed. Right?
Yes, this loop calls page_pool_dma_map(), and if that fails we don't
advance pool->alloc.count.
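For reference, the loop looks roughly like this (simplified from
memory, tracing and stat updates left out); pool->alloc.count is used
as the write index and only advances for pages that were successfully
DMA mapped:

        for (i = 0; i < nr_pages; i++) {
                page = pool->alloc.cache[i];
                if ((pp_flags & PP_FLAG_DMA_MAP) &&
                    unlikely(!page_pool_dma_map(pool, page))) {
                        put_page(page); /* drop pages we could not map */
                        continue;       /* alloc.count is NOT advanced */
                }
                /* Compact mapped pages towards the front of alloc.cache */
                pool->alloc.cache[pool->alloc.count++] = page;
        }

So starting the loop at i = pool->alloc.count does not work, because the
loop body itself advances pool->alloc.count while re-storing the mapped
pages, and setting pool->alloc.count = nr_pages up front would wrongly
count pages whose DMA mapping failed.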
--Jesper