Re: [PATCH v1 00/11] mm/kasan: support per-page shadow memory to reduce memory consumption

On Mon, May 15, 2017 at 09:34:17PM -0700, Dmitry Vyukov wrote:
> On Mon, May 15, 2017 at 6:16 PM,  <js1304@xxxxxxxxx> wrote:
> > From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> >
> > Hello, all.
> >
> > This is an attempt to reduce the memory consumption of KASAN. Please see
> > the following description for more information.
> >
> > 1. What is per-page shadow memory
> 
> Hi Joonsoo,

Hello, Dmitry.

> 
> First I need to say that this is great work. I wanted KASAN to consume

Thanks!

> 1/8-th of _kernel_ memory rather than total physical memory for a long
> time.
> 
> However, this implementation does not work with inline instrumentation. And
> the inline instrumentation is the main mode for KASAN. Outline
> instrumentation is merely a rudiment to support gcc 4.9, and it needs
> to be removed as soon as we stop caring about gcc 4.9 (do we at all?
> is it the current compiler in any distro? Ubuntu 12 has 4.8, Ubuntu 14
> already has 5.4. And if you build gcc yourself or get a fresher
> compiler from somewhere else, you hopefully get something better than
> 4.9).

Hmm... I don't think that outline instrumentation is something to be
removed. In the embedded world there are often fixed partition tables,
and enlarging the kernel binary causes problems there. Changing such a
table is possible, but it is a really inconvenient thing to do just for
debugging. So, I think that outline instrumentation has its own merit.
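
For reference, here is a rough C-level sketch (not real compiler output,
and ignoring unaligned and variable-size accesses) of what the two modes
emit for an 8-byte load. It shows why inline instrumentation grows the
kernel image while outline instrumentation only adds one call per access:

/* Outline mode: every access becomes a call into mm/kasan/. */
void outline_load(unsigned long *p)
{
	__asan_load8((unsigned long)p);		/* out-of-line shadow check */
	(void)*p;
}

/* Inline mode: the shadow check itself is emitted at every access site. */
void inline_load(unsigned long *p)
{
	s8 *shadow = (s8 *)(((unsigned long)p >> KASAN_SHADOW_SCALE_SHIFT)
			    + KASAN_SHADOW_OFFSET);

	if (unlikely(*shadow))			/* poisoned shadow byte */
		__asan_report_load8_noabort((unsigned long)p);
	(void)*p;
}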

Anyway, I have missed inline instrumentation completely.

I will attach the fix at the bottom. It doesn't look beautiful, since it
breaks the layering (part of the check is now done in the report
function). However, I think that it's a good trade-off.

> 
> Here is an example boot+scp log with inline instrumentation:
> https://gist.githubusercontent.com/dvyukov/dfdc8b6972ddd260b201a85d5d5cdb5d/raw/2a032cd5be371c7ad6cad8f14c0a0610e6fa772e/gistfile1.txt
> 
> Joonsoo, can you think of a way to take advantages of your approach,
> but make it work with inline instrumentation?
> 
> Will it work if we map a single zero page for the whole shadow initially,
> and then lazily map real shadow pages only for kernel memory, and then
> remap it again to zero pages when the whole KASAN_SHADOW_SCALE_SHIFT
> range of pages becomes unused (similarly to what you do in
> kasan_unmap_shadow())?

Mapping the zero page for non-kernel memory could cause a false-negative
problem, since we cannot flush the TLB on all cpus when the shadow is
remapped. A cpu with a stale TLB entry would read a zero shadow value
even though the actual shadow value is not zero, and the bug would be
silently missed. This is one of the reasons that the black page is
introduced in this patchset.
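
To make that concrete, here is a hypothetical sketch of what a stale TLB
entry means for the shadow check. pshadow_val() is from this patchset and
kasan_mem_to_shadow() is existing KASAN code; the wrapper itself is made
up purely for illustration:

/*
 * Suppose the shadow page for 'addr' was remapped on another cpu, but
 * this cpu still has a stale TLB entry pointing at the old page.
 */
static bool stale_shadow_outcome(unsigned long addr, size_t size)
{
	s8 shadow = *(s8 *)kasan_mem_to_shadow((void *)addr);	/* may be stale */

	if (!shadow)
		return false;	/* stale zero page: the bug is silently missed */

	/*
	 * Stale black page: we always fall into the report path, where
	 * the per-page shadow is authoritative, so a spurious hit is
	 * filtered out instead of reported (see kasan_report() below).
	 */
	return pshadow_val(addr, size) != 0;
}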

Thanks.

-------------------->8------------------
From b2d38de92f2b1c20de6c29682b7a5c29e0f3fe26 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Date: Tue, 16 May 2017 14:56:27 +0900
Subject: [PATCH] mm/kasan: fix-up CONFIG_KASAN_INLINE

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
---
 mm/kasan/kasan.c  | 13 +++++++++++--
 mm/kasan/kasan.h  |  2 ++
 mm/kasan/report.c |  2 +-
 3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 76c1c37..fd6b7d4 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -622,7 +622,7 @@ static noinline void check_memory_region_slow(unsigned long addr,
 
 report:
 	preempt_enable();
-	kasan_report(addr, size, write, ret_ip);
+	__kasan_report(addr, size, write, ret_ip);
 }
 
 static __always_inline void check_memory_region_inline(unsigned long addr,
@@ -634,7 +634,7 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
 
 	if (unlikely((void *)addr <
 		kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
-		kasan_report(addr, size, write, ret_ip);
+		__kasan_report(addr, size, write, ret_ip);
 		return;
 	}
 
@@ -692,6 +692,15 @@ void *memcpy(void *dest, const void *src, size_t len)
 	return __memcpy(dest, src, len);
 }
 
+void kasan_report(unsigned long addr, size_t size,
+		bool is_write, unsigned long ip)
+{
+	if (!pshadow_val(addr, size))
+		return;
+
+	check_memory_region_slow(addr, size, is_write, ip);
+}
+
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
 	if (likely(!PageHighMem(page))) {
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index db04087..7a20707 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -108,6 +108,8 @@ static inline bool arch_kasan_recheck_prepare(unsigned long addr,
 static inline bool kasan_pshadow_inited(void) {	return false; }
 #endif
 
+void __kasan_report(unsigned long addr, size_t size,
+		bool is_write, unsigned long ip);
 void kasan_report(unsigned long addr, size_t size,
 		bool is_write, unsigned long ip);
 void kasan_report_double_free(struct kmem_cache *cache, void *object,
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 9b47e10..7831d58 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -418,7 +418,7 @@ static inline bool kasan_report_enabled(void)
 	return !test_and_set_bit(KASAN_BIT_REPORTED, &kasan_flags);
 }
 
-void kasan_report(unsigned long addr, size_t size,
+void __kasan_report(unsigned long addr, size_t size,
 		bool is_write, unsigned long ip)
 {
 	struct kasan_access_info info;
-- 
2.7.4



