On Wed, Nov 11, 2020 at 8:27 PM Lorenzo Stoakes <lstoakes@xxxxxxxxx> wrote:
>
> On Wed, 11 Nov 2020 at 17:44, Andrey Konovalov <andreyknvl@xxxxxxxxxx> wrote:
> > I'll try to reproduce this and figure out the issue. Thanks for letting us know!
>
> I hope you don't mind me diving in here, I was taking a look just now
> and managed to reproduce this locally - I bisected the issue to
> 105397399 ("kasan: simplify kasan_poison_kfree").
>
> If I stick a simple check in as below it fixes the issue, so I'm
> guessing something is violating the assumptions in 105397399?
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 7a94cebc0324..16163159a017 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -387,6 +387,11 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
>  	struct page *page;
>
>  	page = virt_to_head_page(ptr);
> +
> +	if (!PageSlab(page)) {
> +		return;
> +	}
> +
>  	____kasan_slab_free(page->slab_cache, ptr, ip, false);
>  }

Ah, by the looks of it, ceph's init_caches() function asks for a
kmalloc-backed mempool, but at the same time provides a size that
doesn't fit into any kmalloc cache, so kmalloc falls back onto
page_alloc. Hard to say whether this is an issue in ceph, but I guess
we'll have to make KASAN foolproof either way and keep the PageSlab()
check in kasan_slab_free_mempool().

Thank you for debugging this, Lorenzo. I'll fix this in v10.