Patch "bpf: Propagate error from htab_lock_bucket() to userspace" has been added to the 6.0-stable tree

This is a note to let you know that I've just added the patch titled

    bpf: Propagate error from htab_lock_bucket() to userspace

to the 6.0-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     bpf-propagate-error-from-htab_lock_bucket-to-userspa.patch
and it can be found in the queue-6.0 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 09c8286c845367e43a26a1b54fec7d07bb4054a5
Author: Hou Tao <houtao1@xxxxxxxxxx>
Date:   Wed Aug 31 12:26:28 2022 +0800

    bpf: Propagate error from htab_lock_bucket() to userspace
    
    [ Upstream commit 66a7a92e4d0d091e79148a4c6ec15d1da65f4280 ]
    
    In __htab_map_lookup_and_delete_batch(), if htab_lock_bucket() returns
    -EBUSY, the code moves on to the next bucket. Doing so not only silently
    skips the elements in the current bucket, but may also incur an
    out-of-bounds memory access or expose kernel memory to userspace if the
    current bucket_cnt is greater than bucket_size or is zero.
    
    Fix it by stopping the batch operation and returning -EBUSY when
    htab_lock_bucket() fails; the application can then retry or skip the
    busy batch as needed.
    
    Fixes: 20b6cc34ea74 ("bpf: Avoid hashtab deadlock with map_locked")
    Reported-by: Hao Sun <sunhao.th@xxxxxxxxx>
    Signed-off-by: Hou Tao <houtao1@xxxxxxxxxx>
    Link: https://lore.kernel.org/r/20220831042629.130006-3-houtao@xxxxxxxxxxxxxxx
    Signed-off-by: Martin KaFai Lau <martin.lau@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index ad09da139589..75f77df910dc 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1704,8 +1704,11 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
 	/* do not grab the lock unless need it (bucket_cnt > 0). */
 	if (locked) {
 		ret = htab_lock_bucket(htab, b, batch, &flags);
-		if (ret)
-			goto next_batch;
+		if (ret) {
+			rcu_read_unlock();
+			bpf_enable_instrumentation();
+			goto after_loop;
+		}
 	}
 
 	bucket_cnt = 0;
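
For context only (not part of the patch): the commit message notes that, with
this change, userspace sees -EBUSY from the batch syscall and can retry or
skip the busy batch. Below is a minimal, hypothetical sketch of such a retry
loop using libbpf's bpf_map_lookup_and_delete_batch(). It assumes libbpf 1.0+
error semantics (negative errno return values), a map fd obtained elsewhere,
and an illustrative map layout of __u32 keys and __u64 values; adjust the
buffer types to match the real map.

/*
 * Sketch: drain a hash map with the batch lookup-and-delete API,
 * retrying when the kernel reports a busy bucket (-EBUSY).
 * The map fd, key/value types, and batch size are assumptions.
 */
#include <errno.h>
#include <bpf/bpf.h>

#define BATCH_SZ 64

static int drain_map(int map_fd)
{
	__u32 keys[BATCH_SZ];
	__u64 values[BATCH_SZ];
	__u32 in_batch, out_batch, count;
	void *in = NULL;	/* NULL on the first call: start from the beginning */
	int err;

	for (;;) {
		count = BATCH_SZ;
		err = bpf_map_lookup_and_delete_batch(map_fd, in, &out_batch,
						      keys, values, &count,
						      NULL);
		if (err == -EBUSY)
			continue;		/* busy bucket: retry this batch */
		if (err && err != -ENOENT)
			return err;		/* unexpected failure */

		/* consume 'count' drained key/value pairs here ... */

		if (err == -ENOENT)
			return 0;		/* map fully drained */

		/* resume from the position returned by the kernel */
		in = &in_batch;
		in_batch = out_batch;
	}
}

A real application would likely cap the number of -EBUSY retries rather than
loop indefinitely, since the contention that triggers it may persist.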


