Without that, every call to __msan_poison_alloca() in NMI may end up
allocating memory, which is NMI-unsafe.

Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Marco Elver <elver@xxxxxxxxxx>
Cc: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Link: https://lore.kernel.org/lkml/20221025221755.3810809-1-glider@xxxxxxxxxx/
Signed-off-by: Alexander Potapenko <glider@xxxxxxxxxx>
---
 mm/kmsan/kmsan.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h
index 961eb658020aa..3cd2050a33e6a 100644
--- a/mm/kmsan/kmsan.h
+++ b/mm/kmsan/kmsan.h
@@ -125,6 +125,8 @@ static __always_inline bool kmsan_in_runtime(void)
 {
 	if ((hardirq_count() >> HARDIRQ_SHIFT) > 1)
 		return true;
+	if (in_nmi())
+		return true;
 	return kmsan_get_context()->kmsan_in_runtime;
 }
-- 
2.38.1.273.g43a17bfeac-goog
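
For illustration only (not part of the patch), below is a rough sketch of the
pattern the KMSAN instrumentation hooks follow, showing why the in_nmi() check
in kmsan_in_runtime() is enough to keep __msan_poison_alloca() from allocating
in NMI context. This is a simplified approximation of the shape of the hook in
mm/kmsan/instrumentation.c, not the verbatim upstream code; exact arguments and
internals may differ.

/* Sketch of a KMSAN instrumentation hook; simplified, not verbatim. */
void __msan_poison_alloca(void *address, uintptr_t size, char *descr)
{
	/*
	 * With this patch, kmsan_in_runtime() returns true in NMI context,
	 * so the hook bails out here and never reaches the code below.
	 */
	if (!kmsan_enabled || kmsan_in_runtime())
		return;

	kmsan_enter_runtime();
	/*
	 * Saving the origin stack trace may allocate memory
	 * (e.g. via the stack depot), which is NMI-unsafe.
	 */
	kmsan_leave_runtime();
}

In other words, every runtime entry point is already guarded by
kmsan_in_runtime(); extending that predicate to cover NMI context makes the
hooks no-ops there instead of teaching each hook about NMIs individually.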