Re: + kfence-limit-currently-covered-allocations-when-pool-nearly-full-fix-fix.patch added to -mm tree

On Tue, Sep 28, 2021 at 08:25PM -0700, akpm@xxxxxxxxxxxxxxxxxxxx wrote:
[...]
> --- a/mm/kfence/core.c~kfence-limit-currently-covered-allocations-when-pool-nearly-full-fix-fix
> +++ a/mm/kfence/core.c
> @@ -172,7 +172,7 @@ static inline bool should_skip_covered(v
>  	return atomic_long_read(&counters[KFENCE_COUNTER_ALLOCATED]) > thresh;
>  }
>  
> -static u32 get_alloc_stack_hash(unsigned long *stack_entries, size_t num_entries)
> +static u32 get_alloc_stack_hash(unsigned long *stack_entries, unsigned long num_entries)
>  {
>  	num_entries = min(num_entries, UNIQUE_ALLOC_STACK_DEPTH);
>  	num_entries = filter_irq_stacks(stack_entries, num_entries);
> @@ -839,7 +839,7 @@ void kfence_shutdown_cache(struct kmem_c
>  void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
>  {
>  	unsigned long stack_entries[KFENCE_STACK_DEPTH];
> -	size_t num_stack_entries;
> +	unsigned long num_stack_entries;
>  	u32 alloc_stack_hash;

Thanks, Andrew!

Apologies for missing this. UNIQUE_ALLOC_STACK_DEPTH was turned into a
UL constant after seeing the same warning on 64-bit builds... :-/

The patch below would be simpler and more consistent, since we use
size_t for num_stack_entries elsewhere, too. Use whichever you think is
more appropriate.

Thanks,
-- Marco

------ >8 ------

From: Marco Elver <elver@xxxxxxxxxx>
Date: Wed, 29 Sep 2021 11:20:44 +0200
Subject: [PATCH] fixup! kfence: limit currently covered allocations when pool
 nearly full
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Fix the 32-bit build: size_t is unsigned long on 64-bit only, so cast
the constant to size_t instead.

mm/kfence/core.c: In function ‘get_alloc_stack_hash’:
./include/linux/minmax.h:20:28: warning: comparison of distinct pointer types lacks a cast
   20 |  (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
      |                            ^~
./include/linux/minmax.h:26:4: note: in expansion of macro ‘__typecheck’
   26 |   (__typecheck(x, y) && __no_side_effects(x, y))
      |    ^~~~~~~~~~~
./include/linux/minmax.h:36:24: note: in expansion of macro ‘__safe_cmp’
   36 |  __builtin_choose_expr(__safe_cmp(x, y), \
      |                        ^~~~~~~~~~
./include/linux/minmax.h:45:19: note: in expansion of macro ‘__careful_cmp’
   45 | #define min(x, y) __careful_cmp(x, y, <)
      |                   ^~~~~~~~~~~~~
mm/kfence/core.c:177:16: note: in expansion of macro ‘min’
  177 |  num_entries = min(num_entries, UNIQUE_ALLOC_STACK_DEPTH);
      |                ^~~

Reported-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Marco Elver <elver@xxxxxxxxxx>
---
 mm/kfence/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 1f1fc5be1d4d..802905b1c89b 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -130,7 +130,7 @@ atomic_t kfence_allocation_gate = ATOMIC_INIT(1);
 static atomic_t alloc_covered[ALLOC_COVERED_SIZE];
 
 /* Stack depth used to determine uniqueness of an allocation. */
-#define UNIQUE_ALLOC_STACK_DEPTH 8UL
+#define UNIQUE_ALLOC_STACK_DEPTH ((size_t)8)
 
 /*
  * Randomness for stack hashes, making the same collisions across reboots and
-- 
2.33.0.685.g46640cef36-goog
