Patch "net: factorize code in kmalloc_reserve()" has been added to the 6.1-stable tree

This is a note to let you know that I've just added the patch titled

    net: factorize code in kmalloc_reserve()

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     net-factorize-code-in-kmalloc_reserve.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


From stable-owner@xxxxxxxxxxxxxxx Fri Sep 15 20:23:47 2023
From: Ajay Kaher <akaher@xxxxxxxxxx>
Date: Fri, 15 Sep 2023 23:51:04 +0530
Subject: net: factorize code in kmalloc_reserve()
To: stable@xxxxxxxxxxxxxxx
Cc: davem@xxxxxxxxxxxxx, edumazet@xxxxxxxxxx, kuba@xxxxxxxxxx, pabeni@xxxxxxxxxx, alexanderduyck@xxxxxx, soheil@xxxxxxxxxx, netdev@xxxxxxxxxxxxxxx, namit@xxxxxxxxxx, amakhalov@xxxxxxxxxx, vsirnapalli@xxxxxxxxxx, er.ajay.kaher@xxxxxxxxx, akaher@xxxxxxxxxx
Message-ID: <1694802065-1821-4-git-send-email-akaher@xxxxxxxxxx>

From: Eric Dumazet <edumazet@xxxxxxxxxx>

commit 5c0e820cbbbe2d1c4cea5cd2bfc1302c123436df upstream.

All kmalloc_reserve() callers have to make the same computation;
we can factorize it into kmalloc_reserve() itself, to prepare for the
following patch in the series.
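
The effect on callers is easiest to see outside the diff. What follows is a
minimal, standalone userspace sketch (not kernel code) of the pattern this
patch introduces: the SKB_HEAD_ALIGN()/kmalloc_size_roundup() computation
moves inside kmalloc_reserve(), which now takes a pointer to the size and
writes the rounded-up value back to its caller. The alignment and round-up
helpers below are simplified stand-ins for the kernel ones, and the
pfmemalloc reserve fallback is elided.

    /*
     * Userspace illustration only; helper names and constants are stand-ins.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdbool.h>

    #define CACHE_LINE	64
    #define SHINFO_SIZE	320	/* stand-in for the skb_shared_info overhead */

    /* stand-in for SKB_HEAD_ALIGN(): add shinfo overhead, cache-line align */
    static unsigned int skb_head_align(unsigned int size)
    {
    	return (size + SHINFO_SIZE + (CACHE_LINE - 1)) & ~(CACHE_LINE - 1);
    }

    /* stand-in for kmalloc_size_roundup(): round up to the next bucket size */
    static unsigned int size_roundup(unsigned int size)
    {
    	unsigned int bucket = 32;

    	while (bucket < size)
    		bucket <<= 1;
    	return bucket;
    }

    /* after the patch: the helper owns the size math and reports it back */
    static void *kmalloc_reserve(unsigned int *size, bool *pfmemalloc)
    {
    	unsigned int obj_size = size_roundup(skb_head_align(*size));

    	*size = obj_size;		/* caller sees the usable allocation size */
    	if (pfmemalloc)
    		*pfmemalloc = false;	/* reserves fallback elided in this sketch */
    	return malloc(obj_size);
    }

    int main(void)
    {
    	unsigned int size = 1500;	/* requested head size, e.g. an MTU */
    	bool pfmemalloc;

    	/*
    	 * Before the patch, each caller did this itself:
    	 *   size = SKB_HEAD_ALIGN(size);
    	 *   size = kmalloc_size_roundup(size);
    	 *   data = kmalloc_reserve(size, ...);
    	 */
    	void *data = kmalloc_reserve(&size, &pfmemalloc);

    	printf("allocated %u bytes at %p\n", size, data);
    	free(data);
    	return 0;
    }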

Signed-off-by: Eric Dumazet <edumazet@xxxxxxxxxx>
Acked-by: Soheil Hassas Yeganeh <soheil@xxxxxxxxxx>
Acked-by: Paolo Abeni <pabeni@xxxxxxxxxx>
Reviewed-by: Alexander Duyck <alexanderduyck@xxxxxx>
Signed-off-by: Jakub Kicinski <kuba@xxxxxxxxxx>
[Ajay: Regenerated the patch for v6.1.y]
Signed-off-by: Ajay Kaher <akaher@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 net/core/skbuff.c |   27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -424,17 +424,20 @@ EXPORT_SYMBOL(napi_build_skb);
  * may be used. Otherwise, the packet data may be discarded until enough
  * memory is free
  */
-static void *kmalloc_reserve(size_t size, gfp_t flags, int node,
+static void *kmalloc_reserve(unsigned int *size, gfp_t flags, int node,
 			     bool *pfmemalloc)
 {
-	void *obj;
 	bool ret_pfmemalloc = false;
+	unsigned int obj_size;
+	void *obj;
 
+	obj_size = SKB_HEAD_ALIGN(*size);
+	*size = obj_size = kmalloc_size_roundup(obj_size);
 	/*
 	 * Try a regular allocation, when that fails and we're not entitled
 	 * to the reserves, fail.
 	 */
-	obj = kmalloc_node_track_caller(size,
+	obj = kmalloc_node_track_caller(obj_size,
 					flags | __GFP_NOMEMALLOC | __GFP_NOWARN,
 					node);
 	if (obj || !(gfp_pfmemalloc_allowed(flags)))
@@ -442,7 +445,7 @@ static void *kmalloc_reserve(size_t size
 
 	/* Try again but now we are using pfmemalloc reserves */
 	ret_pfmemalloc = true;
-	obj = kmalloc_node_track_caller(size, flags, node);
+	obj = kmalloc_node_track_caller(obj_size, flags, node);
 
 out:
 	if (pfmemalloc)
@@ -503,9 +506,7 @@ struct sk_buff *__alloc_skb(unsigned int
 	 * aligned memory blocks, unless SLUB/SLAB debug is enabled.
 	 * Both skb->head and skb_shared_info are cache line aligned.
 	 */
-	size = SKB_HEAD_ALIGN(size);
-	size = kmalloc_size_roundup(size);
-	data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);
+	data = kmalloc_reserve(&size, gfp_mask, node, &pfmemalloc);
 	if (unlikely(!data))
 		goto nodata;
 	/* kmalloc_size_roundup() might give us more room than requested.
@@ -1832,9 +1833,7 @@ int pskb_expand_head(struct sk_buff *skb
 	if (skb_pfmemalloc(skb))
 		gfp_mask |= __GFP_MEMALLOC;
 
-	size = SKB_HEAD_ALIGN(size);
-	size = kmalloc_size_roundup(size);
-	data = kmalloc_reserve(size, gfp_mask, NUMA_NO_NODE, NULL);
+	data = kmalloc_reserve(&size, gfp_mask, NUMA_NO_NODE, NULL);
 	if (!data)
 		goto nodata;
 	size = SKB_WITH_OVERHEAD(size);
@@ -6198,9 +6197,7 @@ static int pskb_carve_inside_header(stru
 	if (skb_pfmemalloc(skb))
 		gfp_mask |= __GFP_MEMALLOC;
 
-	size = SKB_HEAD_ALIGN(size);
-	size = kmalloc_size_roundup(size);
-	data = kmalloc_reserve(size, gfp_mask, NUMA_NO_NODE, NULL);
+	data = kmalloc_reserve(&size, gfp_mask, NUMA_NO_NODE, NULL);
 	if (!data)
 		return -ENOMEM;
 	size = SKB_WITH_OVERHEAD(size);
@@ -6316,9 +6313,7 @@ static int pskb_carve_inside_nonlinear(s
 	if (skb_pfmemalloc(skb))
 		gfp_mask |= __GFP_MEMALLOC;
 
-	size = SKB_HEAD_ALIGN(size);
-	size = kmalloc_size_roundup(size);
-	data = kmalloc_reserve(size, gfp_mask, NUMA_NO_NODE, NULL);
+	data = kmalloc_reserve(&size, gfp_mask, NUMA_NO_NODE, NULL);
 	if (!data)
 		return -ENOMEM;
 	size = SKB_WITH_OVERHEAD(size);


Patches currently in stable-queue which might be from stable-owner@xxxxxxxxxxxxxxx are

queue-6.1/net-add-skb_head_align-helper.patch
queue-6.1/io_uring-don-t-set-affinity-on-a-dying-sqpoll-thread.patch
queue-6.1/net-remove-osize-variable-in-__alloc_skb.patch
queue-6.1/io_uring-always-lock-in-io_apoll_task_func.patch
queue-6.1/net-factorize-code-in-kmalloc_reserve.patch
queue-6.1/io_uring-revert-io_uring-fix-multishot-accept-ordering.patch
queue-6.1/io_uring-net-don-t-overflow-multishot-accept.patch
queue-6.1/io_uring-sqpoll-fix-io-wq-affinity-when-ioring_setup_sqpoll-is-used.patch
queue-6.1/io_uring-break-out-of-iowq-iopoll-on-teardown.patch
queue-6.1/net-deal-with-integer-overflows-in-kmalloc_reserve.patch


