Patch "xsk: fix batch alloc API on non-coherent systems" has been added to the 6.10-stable tree

This is a note to let you know that I've just added the patch titled

    xsk: fix batch alloc API on non-coherent systems

to the 6.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     xsk-fix-batch-alloc-api-on-non-coherent-systems.patch
and it can be found in the queue-6.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 26b1f5f954dc101213dd96765f44decb61d233cc
Author: Maciej Fijalkowski <maciej.fijalkowski@xxxxxxxxx>
Date:   Wed Sep 11 21:10:19 2024 +0200

    xsk: fix batch alloc API on non-coherent systems
    
    [ Upstream commit 4144a1059b47e821c82c3c82eb23a4c7312dce3a ]
    
    In cases where synchronizing DMA operations is necessary,
    xsk_buff_alloc_batch() returns a single buffer instead of the requested
    count. This puts pressure on drivers that use the batch API, as they
    have to check for this corner case on their side and take care of the
    remaining allocations by themselves, which is counterproductive. Let us
    improve the core by looping over xp_alloc() up to @max times when the
    slow path needs to be taken.
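
    For illustration, the driver-side workaround this removes could look
    like the sketch below (my_alloc_workaround() and its arguments are
    hypothetical names, not taken from any driver): before this change, a
    batch user on a non-coherent system had to loop around the
    single-buffer slow-path result itself:

    	/* Hypothetical pre-fix workaround: the slow path returned at
    	 * most one buffer per call, so drivers had to loop manually.
    	 */
    	static u32 my_alloc_workaround(struct xsk_buff_pool *pool,
    				       struct xdp_buff **bufs, u32 max)
    	{
    		u32 i, nb = 0;

    		for (i = 0; i < max; i++) {
    			if (!xsk_buff_alloc_batch(pool, &bufs[nb], 1))
    				break;
    			nb++;
    		}
    		return nb;
    	}

    After this change a single xsk_buff_alloc_batch(pool, bufs, max) call
    is enough.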
    
    Another issue with the current interface, as spotted and fixed by
    Dries, was that when a driver called xsk_buff_alloc_batch() with
    @max == 0, the slow path still allocated and returned a single buffer,
    which should not happen. By introducing the logic from the first
    paragraph we kill two birds with one stone and address this problem as
    well.
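
    With the loop in place the @max == 0 case falls out naturally; a
    minimal illustration (not part of the patch):

    	/* For max == 0 the loop body never executes, so the slow
    	 * path now correctly reports zero allocated buffers.
    	 */
    	u32 nb = xsk_buff_alloc_batch(pool, bufs, 0);	/* nb == 0 */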
    
    Fixes: 47e4075df300 ("xsk: Batched buffer allocation for the pool")
    Reported-and-tested-by: Dries De Winter <ddewinter@xxxxxxxxxxxxx>
    Co-developed-by: Dries De Winter <ddewinter@xxxxxxxxxxxxx>
    Signed-off-by: Dries De Winter <ddewinter@xxxxxxxxxxxxx>
    Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@xxxxxxxxx>
    Acked-by: Magnus Karlsson <magnus.karlsson@xxxxxxxxx>
    Acked-by: Alexei Starovoitov <ast@xxxxxxxxxx>
    Link: https://patch.msgid.link/20240911191019.296480-1-maciej.fijalkowski@xxxxxxxxx
    Signed-off-by: Jakub Kicinski <kuba@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index c0e0204b96304..b0f24ebd05f0b 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -623,20 +623,31 @@ static u32 xp_alloc_reused(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u3
 	return nb_entries;
 }
 
-u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
+static u32 xp_alloc_slow(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
+			 u32 max)
 {
-	u32 nb_entries1 = 0, nb_entries2;
+	int i;
 
-	if (unlikely(pool->dev && dma_dev_need_sync(pool->dev))) {
+	for (i = 0; i < max; i++) {
 		struct xdp_buff *buff;
 
-		/* Slow path */
 		buff = xp_alloc(pool);
-		if (buff)
-			*xdp = buff;
-		return !!buff;
+		if (unlikely(!buff))
+			return i;
+		*xdp = buff;
+		xdp++;
 	}
 
+	return max;
+}
+
+u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
+{
+	u32 nb_entries1 = 0, nb_entries2;
+
+	if (unlikely(pool->dev && dma_dev_need_sync(pool->dev)))
+		return xp_alloc_slow(pool, xdp, max);
+
 	if (unlikely(pool->free_list_cnt)) {
 		nb_entries1 = xp_alloc_reused(pool, xdp, max);
 		if (nb_entries1 == max)



