[PATCH v2] x86: disable non-instrumented version of copy_page when KMSAN is enabled

I found that commit afb2d666d025 ("zsmalloc: use copy_page for full page
copy") caused a false-positive KMSAN warning.

  [   50.030627][ T2974] BUG: KMSAN: use-after-free in obj_malloc+0x6cc/0x7b0

  [   50.165956][ T2974] Uninit was stored to memory at:
  [   50.170819][ T2974]  obj_malloc+0x70a/0x7b0

  [   50.328931][ T2974] Uninit was created at:
  [   50.341845][ T2974]  free_unref_page_prepare+0x130/0xfc0

The non-instrumented assembly version of copy_page() confuses KMSAN, so we
need to use the instrumented version when KMSAN is enabled. Let
arch/x86/include/asm/page_64.h implement copy_page() using memcpy(), like
arch/x86/include/asm/page_32.h does.

Signed-off-by: Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
---
 arch/x86/include/asm/page_64.h | 9 +++++++++
 1 file changed, 9 insertions(+)

Changes in v2:

  Updated the explanation; in v1 I had misinterpreted the source/destination
  direction.
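
For reference, a minimal sketch of the memcpy()-based approach that the
32-bit header already takes (quoted from memory; the exact code in
arch/x86/include/asm/page_32.h may differ slightly):

  static inline void copy_page(void *to, void *from)
  {
  	/* Plain memcpy() is instrumented, so KMSAN can track the copy. */
  	memcpy(to, from, PAGE_SIZE);
  }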

diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index cc6b8e087192..f13bba3a9dab 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -58,7 +58,16 @@ static inline void clear_page(void *page)
 			   : "cc", "memory", "rax", "rcx");
 }
 
+#ifdef CONFIG_KMSAN
+/* Use of non-instrumented assembly version confuses KMSAN. */
+void *memcpy(void *to, const void *from, __kernel_size_t len);
+static inline void copy_page(void *to, void *from)
+{
+	memcpy(to, from, PAGE_SIZE);
+}
+#else
 void copy_page(void *to, void *from);
+#endif
 
 #ifdef CONFIG_X86_5LEVEL
 /*
-- 
2.34.1