Patch "bpf/verifier: Use kmalloc_size_roundup() to match ksize() usage" has been added to the 6.1-stable tree

This is a note to let you know that I've just added the patch titled

    bpf/verifier: Use kmalloc_size_roundup() to match ksize() usage

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     bpf-verifier-use-kmalloc_size_roundup-to-match-ksize.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit cc07b11c3ecaeba84c9419fb6d57b6b81f075ad0
Author: Kees Cook <keescook@xxxxxxxxxxxx>
Date:   Fri Nov 18 10:34:14 2022 -0800

    bpf/verifier: Use kmalloc_size_roundup() to match ksize() usage
    
    [ Upstream commit ceb35b666d42c2e91b1f94aeca95bb5eb0943268 ]
    
    Most allocation sites in the kernel want an explicitly sized allocation
    (and not "more"), and dynamic runtime analysis tools (e.g. KASAN,
    UBSAN_BOUNDS, FORTIFY_SOURCE, etc.) expect precise bounds checking
    (i.e. not something that is rounded up). A tiny handful of allocations
    were doing an implicit alloc/realloc loop that actually depended on
    ksize() and didn't always call realloc. This has created a long series
    of bugs and problems over many years related to runtime bounds
    checking, so these callers are finally being adjusted to _not_ depend
    on the ksize() side-effect, by doing one of several things:
    
    - tracking the allocation size precisely and just never calling ksize()
      at all [1].
    
    - always calling realloc and not using ksize() at all. (This solution
      actually ends up being a subset of the next solution.)
    
    - using kmalloc_size_roundup() to explicitly round up the desired
      allocation size immediately [2].
    
    The bpf/verifier case is another instance of this latter approach, and
    is the last outstanding case to be fixed in the kernel.
    
    Because some of the dynamic bounds checking depends on the size being
    an _argument_ to an allocator function (i.e. see the __alloc_size
    attribute), because the ksize() users are rare, and because it could
    waste local variables, it has been deemed better to explicitly
    separate the rounding up from the allocation itself [3].
    
    Round up allocations with kmalloc_size_roundup() so that the verifier's
    use of ksize() is always accurate.
    
    [1] e.g.:
        https://git.kernel.org/linus/712f210a457d
        https://git.kernel.org/linus/72c08d9f4c72
    
    [2] e.g.:
        https://git.kernel.org/netdev/net-next/c/12d6c1d3a2ad
        https://git.kernel.org/netdev/net-next/c/ab3f7828c979
        https://git.kernel.org/netdev/net-next/c/d6dd508080a3
    
    [3] https://lore.kernel.org/lkml/0ea1fc165a6c6117f982f4f135093e69cb884930.camel@xxxxxxxxxx/
    
    Signed-off-by: Kees Cook <keescook@xxxxxxxxxxxx>
    Signed-off-by: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
    Acked-by: Stanislav Fomichev <sdf@xxxxxxxxxx>
    Link: https://lore.kernel.org/bpf/20221118183409.give.387-kees@xxxxxxxxxx
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4e098999610e..242fe307032f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1008,9 +1008,9 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;
 
-	if (ksize(dst) < bytes) {
+	if (ksize(dst) < ksize(src)) {
 		kfree(dst);
-		dst = kmalloc_track_caller(bytes, flags);
+		dst = kmalloc_track_caller(kmalloc_size_roundup(bytes), flags);
 		if (!dst)
 			return NULL;
 	}
@@ -1027,12 +1027,14 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
  */
 static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
 {
+	size_t alloc_size;
 	void *new_arr;
 
 	if (!new_n || old_n == new_n)
 		goto out;
 
-	new_arr = krealloc_array(arr, new_n, size, GFP_KERNEL);
+	alloc_size = kmalloc_size_roundup(size_mul(new_n, size));
+	new_arr = krealloc(arr, alloc_size, GFP_KERNEL);
 	if (!new_arr) {
 		kfree(arr);
 		return NULL;
@@ -2504,9 +2506,11 @@ static int push_jmp_history(struct bpf_verifier_env *env,
 {
 	u32 cnt = cur->jmp_history_cnt;
 	struct bpf_idx_pair *p;
+	size_t alloc_size;
 
 	cnt++;
-	p = krealloc(cur->jmp_history, cnt * sizeof(*p), GFP_USER);
+	alloc_size = kmalloc_size_roundup(size_mul(cnt, sizeof(*p)));
+	p = krealloc(cur->jmp_history, alloc_size, GFP_USER);
 	if (!p)
 		return -ENOMEM;
 	p[cnt - 1].idx = env->insn_idx;


