From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>

The majority of accesses in the kernel are accesses to slab objects. In
the current implementation, we check two types of shadow memory for such
accesses, and this causes a performance regression.

kernel build (2048 MB QEMU)
base vs per-page
219 sec vs 238 sec

Although the current per-page shadow implementation is conceptually easy
to understand, this performance regression is too severe, so this patch
changes the check order from per-page-then-per-byte shadow to
per-byte-then-per-page shadow. This change increases the chance of stale
TLB problems, since the mapping for the per-byte shadow isn't fully
synchronized and we will try to access the whole region of this shadow
memory. However, it doesn't hurt correctness, so there is no problem
with this new implementation.

Following is the result of this patch.

kernel build (2048 MB QEMU)
base vs per-page vs this patch
219 sec vs 238 sec vs 222 sec

Performance is restored.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
---
 mm/kasan/kasan.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index e5612be..76c1c37 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -587,14 +587,6 @@ static __always_inline u8 pshadow_val(unsigned long addr, size_t size)
 
 static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
 {
-	u8 shadow_val = pshadow_val(addr, size);
-
-	if (!shadow_val)
-		return false;
-
-	if (shadow_val != KASAN_PER_PAGE_BYPASS)
-		return true;
-
 	if (__builtin_constant_p(size)) {
 		switch (size) {
 		case 1:
@@ -649,6 +641,9 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
 	if (likely(!memory_is_poisoned(addr, size)))
 		return;
 
+	if (!pshadow_val(addr, size))
+		return;
+
 	check_memory_region_slow(addr, size, write, ret_ip);
 }
-- 
2.7.4