This is a note to let you know that I've just added the patch titled

    net: add SKB_HEAD_ALIGN() helper

to the 6.1-stable tree which can be found at:
	http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     net-add-skb_head_align-helper.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From stable-owner@xxxxxxxxxxxxxxx Fri Sep 15 20:23:16 2023
From: Ajay Kaher <akaher@xxxxxxxxxx>
Date: Fri, 15 Sep 2023 23:51:02 +0530
Subject: net: add SKB_HEAD_ALIGN() helper
To: stable@xxxxxxxxxxxxxxx
Cc: davem@xxxxxxxxxxxxx, edumazet@xxxxxxxxxx, kuba@xxxxxxxxxx, pabeni@xxxxxxxxxx, alexanderduyck@xxxxxx, soheil@xxxxxxxxxx, netdev@xxxxxxxxxxxxxxx, namit@xxxxxxxxxx, amakhalov@xxxxxxxxxx, vsirnapalli@xxxxxxxxxx, er.ajay.kaher@xxxxxxxxx, akaher@xxxxxxxxxx
Message-ID: <1694802065-1821-2-git-send-email-akaher@xxxxxxxxxx>

From: Eric Dumazet <edumazet@xxxxxxxxxx>

commit 115f1a5c42bdad9a9ea356fc0b4a39ec7537947f upstream.

We have many places using this expression:

   SKB_DATA_ALIGN(sizeof(struct skb_shared_info))

Use of SKB_HEAD_ALIGN() will allow to clean them.
Signed-off-by: Eric Dumazet <edumazet@xxxxxxxxxx>
Acked-by: Soheil Hassas Yeganeh <soheil@xxxxxxxxxx>
Acked-by: Paolo Abeni <pabeni@xxxxxxxxxx>
Reviewed-by: Alexander Duyck <alexanderduyck@xxxxxx>
Signed-off-by: Jakub Kicinski <kuba@xxxxxxxxxx>
[Ajay: Regenerated the patch for v6.1.y]
Signed-off-by: Ajay Kaher <akaher@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 include/linux/skbuff.h |    8 ++++++++
 net/core/skbuff.c      |   18 ++++++------------
 2 files changed, 14 insertions(+), 12 deletions(-)

--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -261,6 +261,14 @@
 #define SKB_DATA_ALIGN(X)	ALIGN(X, SMP_CACHE_BYTES)
 #define SKB_WITH_OVERHEAD(X)	\
 	((X) - SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
+
+/* For X bytes available in skb->head, what is the minimal
+ * allocation needed, knowing struct skb_shared_info needs
+ * to be aligned.
+ */
+#define SKB_HEAD_ALIGN(X) (SKB_DATA_ALIGN(X) + \
+			   SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
+
 #define SKB_MAX_ORDER(X, ORDER) \
 	SKB_WITH_OVERHEAD((PAGE_SIZE << (ORDER)) - (X))
 #define SKB_MAX_HEAD(X)		(SKB_MAX_ORDER((X), 0))
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -504,8 +504,7 @@ struct sk_buff *__alloc_skb(unsigned int
 	 * aligned memory blocks, unless SLUB/SLAB debug is enabled.
 	 * Both skb->head and skb_shared_info are cache line aligned.
 	 */
-	size = SKB_DATA_ALIGN(size);
-	size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	size = SKB_HEAD_ALIGN(size);
 	osize = kmalloc_size_roundup(size);
 	data = kmalloc_reserve(osize, gfp_mask, node, &pfmemalloc);
 	if (unlikely(!data))
@@ -578,8 +577,7 @@ struct sk_buff *__netdev_alloc_skb(struc
 		goto skb_success;
 	}
 
-	len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-	len = SKB_DATA_ALIGN(len);
+	len = SKB_HEAD_ALIGN(len);
 
 	if (sk_memalloc_socks())
 		gfp_mask |= __GFP_MEMALLOC;
@@ -678,8 +676,7 @@ struct sk_buff *__napi_alloc_skb(struct
 		data = page_frag_alloc_1k(&nc->page_small, gfp_mask);
 		pfmemalloc = NAPI_SMALL_PAGE_PFMEMALLOC(nc->page_small);
 	} else {
-		len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-		len = SKB_DATA_ALIGN(len);
+		len = SKB_HEAD_ALIGN(len);
 
 		data = page_frag_alloc(&nc->page, len, gfp_mask);
 		pfmemalloc = nc->page.pfmemalloc;
@@ -1837,8 +1834,7 @@ int pskb_expand_head(struct sk_buff *skb
 	if (skb_pfmemalloc(skb))
 		gfp_mask |= __GFP_MEMALLOC;
 
-	size = SKB_DATA_ALIGN(size);
-	size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	size = SKB_HEAD_ALIGN(size);
 	size = kmalloc_size_roundup(size);
 	data = kmalloc_reserve(size, gfp_mask, NUMA_NO_NODE, NULL);
 	if (!data)
@@ -6204,8 +6200,7 @@ static int pskb_carve_inside_header(stru
 	if (skb_pfmemalloc(skb))
 		gfp_mask |= __GFP_MEMALLOC;
 
-	size = SKB_DATA_ALIGN(size);
-	size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	size = SKB_HEAD_ALIGN(size);
 	size = kmalloc_size_roundup(size);
 	data = kmalloc_reserve(size, gfp_mask, NUMA_NO_NODE, NULL);
 	if (!data)
@@ -6323,8 +6318,7 @@ static int pskb_carve_inside_nonlinear(s
 	if (skb_pfmemalloc(skb))
 		gfp_mask |= __GFP_MEMALLOC;
 
-	size = SKB_DATA_ALIGN(size);
-	size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	size = SKB_HEAD_ALIGN(size);
 	size = kmalloc_size_roundup(size);
 	data = kmalloc_reserve(size, gfp_mask, NUMA_NO_NODE, NULL);
 	if (!data)


Patches currently in stable-queue which might be from stable-owner@xxxxxxxxxxxxxxx are
queue-6.1/net-add-skb_head_align-helper.patch
queue-6.1/io_uring-don-t-set-affinity-on-a-dying-sqpoll-thread.patch
queue-6.1/net-remove-osize-variable-in-__alloc_skb.patch
queue-6.1/io_uring-always-lock-in-io_apoll_task_func.patch
queue-6.1/net-factorize-code-in-kmalloc_reserve.patch
queue-6.1/io_uring-revert-io_uring-fix-multishot-accept-ordering.patch
queue-6.1/io_uring-net-don-t-overflow-multishot-accept.patch
queue-6.1/io_uring-sqpoll-fix-io-wq-affinity-when-ioring_setup_sqpoll-is-used.patch
queue-6.1/io_uring-break-out-of-iowq-iopoll-on-teardown.patch
queue-6.1/net-deal-with-integer-overflows-in-kmalloc_reserve.patch