This is a note to let you know that I've just added the patch titled

    SUNRPC: Revert 5f7fc5d69f6e92ec0b38774c387f5cf7812c5806

to the 6.6-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
    sunrpc-revert-5f7fc5d69f6e92ec0b38774c387f5cf7812c58.patch
and it can be found in the queue-6.6 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


commit 1f7f52ecb95d59c2a9f314a77c4936177a5791bb
Author: Chuck Lever <chuck.lever@xxxxxxxxxx>
Date:   Mon Dec 18 17:05:40 2023 -0500

    SUNRPC: Revert 5f7fc5d69f6e92ec0b38774c387f5cf7812c5806

    [ Upstream commit bd018b98ba84ca0c80abac1ef23ce726a809e58c ]

    Guillaume says:

    > I believe commit 5f7fc5d69f6e ("SUNRPC: Resupply rq_pages from
    > node-local memory") in Linux 6.5+ is incorrect. It unconditionally
    > passes rq_pool->sp_id as the NUMA node.
    >
    > While the comment in the svc_pool declaration in sunrpc/svc.h says
    > that sp_id is also the NUMA node id, that might not be the case if
    > the svc is created using svc_create_pooled(). svc_create_pooled()
    > can use the per-cpu pool mode, in which case sp_id would be the
    > cpu id.

    Fix this by reverting now. At a later point this minor optimization,
    and the deceptive labeling of the sp_id field, can be revisited.

    Reported-by: Guillaume Morin <guillaume@xxxxxxxxxxx>
    Closes: https://lore.kernel.org/linux-nfs/ZYC9rsno8qYggVt9@xxxxxxxxxxxxxxxxxx/T/#u
    Fixes: 5f7fc5d69f6e ("SUNRPC: Resupply rq_pages from node-local memory")
    Signed-off-by: Chuck Lever <chuck.lever@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 4cfe9640df481..5cfe5c7408b74 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -666,9 +666,8 @@ static bool svc_alloc_arg(struct svc_rqst *rqstp)
 	}
 
 	for (filled = 0; filled < pages; filled = ret) {
-		ret = alloc_pages_bulk_array_node(GFP_KERNEL,
-						  rqstp->rq_pool->sp_id,
-						  pages, rqstp->rq_pages);
+		ret = alloc_pages_bulk_array(GFP_KERNEL, pages,
+					     rqstp->rq_pages);
 		if (ret > filled)
 			/* Made progress, don't sleep yet */
 			continue;
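
A note on why the reverted call was wrong: rq_pool->sp_id only names a
NUMA node when the pools are mapped per node; in per-cpu pool mode it
names a CPU, which would first have to be translated via cpu_to_node().
The sketch below is purely illustrative and is not part of this patch or
the kernel tree: the svc_pool_to_node() helper is hypothetical, and the
SVC_POOL_* constants mirror the anonymous enum in net/sunrpc/svc.c.

#include <linux/topology.h>	/* cpu_to_node() */
#include <linux/numa.h>		/* NUMA_NO_NODE */

/* Mirrors the pool mapping modes defined in net/sunrpc/svc.c */
enum {
	SVC_POOL_AUTO = -1,	/* choose one of the others */
	SVC_POOL_GLOBAL,	/* no mapping, just a single global pool */
	SVC_POOL_PERCPU,	/* one pool per cpu */
	SVC_POOL_PERNODE	/* one pool per numa node */
};

/* Hypothetical helper: derive a NUMA node id from a pool id */
static int svc_pool_to_node(int sp_id, int pool_mode)
{
	switch (pool_mode) {
	case SVC_POOL_PERNODE:
		return sp_id;		   /* pool id really is a node id */
	case SVC_POOL_PERCPU:
		return cpu_to_node(sp_id); /* pool id is a cpu id */
	default:
		return NUMA_NO_NODE;	   /* no node affinity implied */
	}
}

Until the sp_id semantics are cleaned up, falling back to the plain
alloc_pages_bulk_array() is the safe choice, which is what this patch does.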