+ lib-stackdepot-always-do-filter_irq_stacks-in-stack_depot_save.patch added to -mm tree

The patch titled
     Subject: lib/stackdepot: always do filter_irq_stacks() in stack_depot_save()
has been added to the -mm tree.  Its filename is
     lib-stackdepot-always-do-filter_irq_stacks-in-stack_depot_save.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/lib-stackdepot-always-do-filter_irq_stacks-in-stack_depot_save.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/lib-stackdepot-always-do-filter_irq_stacks-in-stack_depot_save.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Marco Elver <elver@xxxxxxxxxx>
Subject: lib/stackdepot: always do filter_irq_stacks() in stack_depot_save()

The non-interrupt portion of interrupt stack traces before interrupt entry
is usually arbitrary.  Therefore, saving stack traces of interrupts (that
include entries before interrupt entry) to stack depot leads to unbounded
stackdepot growth.

As such, use of filter_irq_stacks() is a requirement to ensure stackdepot
can efficiently deduplicate interrupt stacks.
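
For reference, the effect of filter_irq_stacks() is to truncate the trace at
the first interrupt-entry frame, so only the deterministic in-interrupt
portion is kept.  A minimal sketch of that idea is below; the real helper
matches addresses against the kernel's interrupt-entry code sections, and
in_irq_text() here is only a hypothetical stand-in for that check:

	/*
	 * Sketch only: keep entries up to and including the first frame
	 * that lies in interrupt-entry code, discarding the arbitrary
	 * pre-interrupt portion of the trace.
	 */
	static unsigned int filter_irq_stacks_sketch(unsigned long *entries,
						     unsigned int nr_entries)
	{
		unsigned int i;

		for (i = 0; i < nr_entries; i++) {
			if (in_irq_text(entries[i]))	/* hypothetical helper */
				return i + 1;	/* include the irq entry frame */
		}
		return nr_entries;	/* not an interrupt stack: keep it all */
	}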

Looking through all current users of stack_depot_save(), none except KASAN
pass the stack trace through filter_irq_stacks() first.

Rather than adding filter_irq_stacks() to every current and future caller of
stack_depot_save(), it is simpler to have stack_depot_save() call
filter_irq_stacks() itself; the caller-side change is sketched below.
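
Concretely, the caller-side contract changes roughly as follows (a sketch
modeled on the KASAN hunk further down; the declarations of entries[],
nr_entries, flags and can_alloc are assumed from that context):

	/* Before this patch: each caller had to filter interrupt frames itself. */
	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
	nr_entries = filter_irq_stacks(entries, nr_entries);
	handle = __stack_depot_save(entries, nr_entries, flags, can_alloc);

	/* After this patch: __stack_depot_save() filters internally, so the
	 * caller just saves the raw trace.  handle is a depot_stack_handle_t. */
	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
	handle = __stack_depot_save(entries, nr_entries, flags, can_alloc);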

Link: https://lkml.kernel.org/r/20211130095727.2378739-1-elver@xxxxxxxxxx
Signed-off-by: Marco Elver <elver@xxxxxxxxxx>
Reviewed-by: Alexander Potapenko <glider@xxxxxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
Reviewed-by: Andrey Konovalov <andreyknvl@xxxxxxxxx>
Cc: Andrey Ryabinin <ryabinin.a.a@xxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Vijayanand Jitta <vjitta@xxxxxxxxxxxxxx>
Cc: "Gustavo A. R. Silva" <gustavoars@xxxxxxxxxx>
Cc: Imran Khan <imran.f.khan@xxxxxxxxxx>
Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Jani Nikula <jani.nikula@xxxxxxxxx>
Cc: Mika Kuoppala <mika.kuoppala@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 lib/stackdepot.c  |   13 +++++++++++++
 mm/kasan/common.c |    1 -
 2 files changed, 13 insertions(+), 1 deletion(-)

--- a/lib/stackdepot.c~lib-stackdepot-always-do-filter_irq_stacks-in-stack_depot_save
+++ a/lib/stackdepot.c
@@ -328,6 +328,9 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
  * (allocates using GFP flags of @alloc_flags). If @can_alloc is %false, avoids
  * any allocations and will fail if no space is left to store the stack trace.
  *
+ * If the stack trace in @entries is from an interrupt, only the portion up to
+ * interrupt entry is saved.
+ *
  * Context: Any context, but setting @can_alloc to %false is required if
  *          alloc_pages() cannot be used from the current context. Currently
  *          this is the case from contexts where neither %GFP_ATOMIC nor
@@ -346,6 +349,16 @@ depot_stack_handle_t __stack_depot_save(
 	unsigned long flags;
 	u32 hash;
 
+	/*
+	 * If this stack trace is from an interrupt, including anything before
+	 * interrupt entry usually leads to unbounded stackdepot growth.
+	 *
+	 * Because use of filter_irq_stacks() is a requirement to ensure
+	 * stackdepot can efficiently deduplicate interrupt stacks, always
+	 * filter_irq_stacks() to simplify all callers' use of stackdepot.
+	 */
+	nr_entries = filter_irq_stacks(entries, nr_entries);
+
 	if (unlikely(nr_entries == 0) || stack_depot_disable)
 		goto fast_exit;
 
--- a/mm/kasan/common.c~lib-stackdepot-always-do-filter_irq_stacks-in-stack_depot_save
+++ a/mm/kasan/common.c
@@ -36,7 +36,6 @@ depot_stack_handle_t kasan_save_stack(gf
 	unsigned int nr_entries;
 
 	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
-	nr_entries = filter_irq_stacks(entries, nr_entries);
 	return __stack_depot_save(entries, nr_entries, flags, can_alloc);
 }
 
_

Patches currently in -mm which might be from elver@xxxxxxxxxx are

mm-slab_common-use-warn-if-cache-still-has-objects-on-destroy.patch
kasan-test-add-globals-left-out-of-bounds-test.patch
kasan-add-ability-to-detect-double-kmem_cache_destroy.patch
kasan-test-add-test-case-for-double-kmem_cache_destroy.patch
panic-use-error_report_end-tracepoint-on-warnings.patch
lib-stackdepot-always-do-filter_irq_stacks-in-stack_depot_save.patch



