The patch titled
     Subject: page_poison: play nicely with KASAN
has been added to the -mm tree.  Its filename is
     page_poison-plays-nicely-with-kasan.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/page_poison-plays-nicely-with-kasan.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/page_poison-plays-nicely-with-kasan.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Qian Cai <cai@xxxxxx>
Subject: page_poison: play nicely with KASAN

KASAN does not play well with page poisoning (CONFIG_PAGE_POISONING).  It
triggers false positives in the allocation path:

  BUG: KASAN: use-after-free in memchr_inv+0x2ea/0x330
  Read of size 8 at addr ffff88881f800000 by task swapper/0
  CPU: 0 PID: 0 Comm: swapper Not tainted 5.0.0-rc1+ #54
  Call Trace:
   dump_stack+0xe0/0x19a
   print_address_description.cold.2+0x9/0x28b
   kasan_report.cold.3+0x7a/0xb5
   __asan_report_load8_noabort+0x19/0x20
   memchr_inv+0x2ea/0x330
   kernel_poison_pages+0x103/0x3d5
   get_page_from_freelist+0x15e7/0x4d90

because KASAN has not yet unpoisoned the shadow page for the allocation
when kernel_poison_pages() runs its memchr_inv() check, so the read hits
stale shadow state.
It also triggers false positives in the free path:

  BUG: KASAN: slab-out-of-bounds in kernel_poison_pages+0x29e/0x3d5
  Write of size 4096 at addr ffff8888112cc000 by task swapper/0/1
  CPU: 5 PID: 1 Comm: swapper/0 Not tainted 5.0.0-rc1+ #55
  Call Trace:
   dump_stack+0xe0/0x19a
   print_address_description.cold.2+0x9/0x28b
   kasan_report.cold.3+0x7a/0xb5
   check_memory_region+0x22d/0x250
   memset+0x28/0x40
   kernel_poison_pages+0x29e/0x3d5
   __free_pages_ok+0x75f/0x13e0

because KASAN adds poisoned redzones around slab objects, but page
poisoning needs to poison the whole page.  Fix this by unpoisoning the
shadow page before running the page poison's memset().

Link: http://lkml.kernel.org/r/20190107223636.80593-1-cai@xxxxxx
Signed-off-by: Qian Cai <cai@xxxxxx>
Cc: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

--- a/mm/page_alloc.c~page_poison-plays-nicely-with-kasan
+++ a/mm/page_alloc.c
@@ -1945,8 +1945,8 @@ inline void post_alloc_hook(struct page
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
-	kernel_poison_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);
+	kernel_poison_pages(page, 1 << order, 1);
 	set_page_owner(page, order, gfp_flags);
 }
--- a/mm/page_poison.c~page_poison-plays-nicely-with-kasan
+++ a/mm/page_poison.c
@@ -6,6 +6,7 @@
 #include <linux/page_ext.h>
 #include <linux/poison.h>
 #include <linux/ratelimit.h>
+#include <linux/kasan.h>

 static bool want_page_poisoning __read_mostly;

@@ -40,6 +41,7 @@ static void poison_page(struct page *pag
 {
 	void *addr = kmap_atomic(page);

+	kasan_unpoison_shadow(addr, PAGE_SIZE);
 	memset(addr, PAGE_POISON, PAGE_SIZE);
 	kunmap_atomic(addr);
 }
_

Patches currently in -mm which might be from cai@xxxxxx are

mm-page_owner-fix-for-deferred-struct-page-init.patch
usercopy-no-check-page-span-for-stack-objects.patch
page_poison-plays-nicely-with-kasan.patch