[folded-merged] kfence-limit-currently-covered-allocations-when-pool-nearly-full-fix-fix.patch removed from -mm tree

The patch titled
     Subject: fixup! kfence: limit currently covered allocations when pool nearly full
has been removed from the -mm tree.  Its filename was
     kfence-limit-currently-covered-allocations-when-pool-nearly-full-fix-fix.patch

This patch was dropped because it was folded into kfence-limit-currently-covered-allocations-when-pool-nearly-full.patch

------------------------------------------------------
From: Marco Elver <elver@xxxxxxxxxx>
Subject: fixup! kfence: limit currently covered allocations when pool nearly full

Fix the 32-bit build: size_t is unsigned long on 64-bit only, so just cast the constant to size_t instead of using a UL suffix.

mm/kfence/core.c: In function `get_alloc_stack_hash':
./include/linux/minmax.h:20:28: warning: comparison of distinct pointer types lacks a cast
   20 |  (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
      |                            ^~
./include/linux/minmax.h:26:4: note: in expansion of macro `__typecheck'
   26 |   (__typecheck(x, y) && __no_side_effects(x, y))
      |    ^~~~~~~~~~~
./include/linux/minmax.h:36:24: note: in expansion of macro `__safe_cmp'
   36 |  __builtin_choose_expr(__safe_cmp(x, y), \
      |                        ^~~~~~~~~~
./include/linux/minmax.h:45:19: note: in expansion of macro `__careful_cmp'
   45 | #define min(x, y) __careful_cmp(x, y, <)
      |                   ^~~~~~~~~~~~~
mm/kfence/core.c:177:16: note: in expansion of macro `min'
  177 |  num_entries = min(num_entries, UNIQUE_ALLOC_STACK_DEPTH);
      |                ^~~
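For illustration only (not kernel code): a minimal standalone sketch, built
with GCC/Clang (uses typeof and statement expressions), of the kind of type
check include/linux/minmax.h performs. The careful_min() helper below is a
simplified stand-in for the kernel's min(); it shows why a size_t operand
compared against an unsigned long literal warns on 32-bit, and why the
(size_t) cast keeps both sides the same type.

/*
 * Simplified stand-in for the kernel's min(): comparing the addresses of
 * two temporaries of typeof(x) and typeof(y) makes the compiler warn
 * ("comparison of distinct pointer types") whenever the operand types
 * differ -- which is what happens on 32-bit when a size_t num_entries
 * meets the unsigned long literal 8UL.
 */
#include <stddef.h>
#include <stdio.h>

#define careful_min(x, y) ({			\
	typeof(x) _x = (x);			\
	typeof(y) _y = (y);			\
	(void)(&_x == &_y);	/* type check */\
	_x < _y ? _x : _y; })

#define UNIQUE_ALLOC_STACK_DEPTH ((size_t)8)	/* was 8UL */

int main(void)
{
	size_t num_entries = 12;

	/* Both operands are size_t now, so no warning on 32-bit either. */
	num_entries = careful_min(num_entries, UNIQUE_ALLOC_STACK_DEPTH);
	printf("%zu\n", num_entries);	/* prints 8 */
	return 0;
}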

Link: https://lkml.kernel.org/r/YVQ0fE4Yil2EX8FI@xxxxxxxxxxxxxxxx
Signed-off-by: Marco Elver <elver@xxxxxxxxxx>
Reported-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/kfence/core.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/kfence/core.c~kfence-limit-currently-covered-allocations-when-pool-nearly-full-fix-fix
+++ a/mm/kfence/core.c
@@ -130,7 +130,7 @@ atomic_t kfence_allocation_gate = ATOMIC
 static atomic_t alloc_covered[ALLOC_COVERED_SIZE];
 
 /* Stack depth used to determine uniqueness of an allocation. */
-#define UNIQUE_ALLOC_STACK_DEPTH 8UL
+#define UNIQUE_ALLOC_STACK_DEPTH ((size_t)8)
 
 /*
  * Randomness for stack hashes, making the same collisions across reboots and
_

Patches currently in -mm which might be from elver@xxxxxxxxxx are

lib-stackdepot-include-gfph.patch
lib-stackdepot-remove-unused-function-argument.patch
lib-stackdepot-introduce-__stack_depot_save.patch
kasan-common-provide-can_alloc-in-kasan_save_stack.patch
kasan-generic-introduce-kasan_record_aux_stack_noalloc.patch
workqueue-kasan-avoid-alloc_pages-when-recording-stack.patch
mm-fix-data-race-in-pagepoisoned.patch
stacktrace-move-filter_irq_stacks-to-kernel-stacktracec.patch
kfence-count-unexpectedly-skipped-allocations.patch
kfence-move-saving-stack-trace-of-allocations-into-__kfence_alloc.patch
kfence-limit-currently-covered-allocations-when-pool-nearly-full.patch
kfence-add-note-to-documentation-about-skipping-covered-allocations.patch
kfence-test-use-kunit_skip-to-skip-tests.patch
kfence-shorten-critical-sections-of-alloc-free.patch
kfence-always-use-static-branches-to-guard-kfence_alloc.patch
kfence-default-to-dynamic-branch-instead-of-static-keys-mode.patch



