On Fri, Oct 2, 2020 at 4:19 PM Marco Elver <elver@xxxxxxxxxx> wrote:
>
> On Fri, 2 Oct 2020 at 08:48, Jann Horn <jannh@xxxxxxxxxx> wrote:
> >
> > On Tue, Sep 29, 2020 at 3:38 PM Marco Elver <elver@xxxxxxxxxx> wrote:
> > > Add architecture specific implementation details for KFENCE and enable
> > > KFENCE for the arm64 architecture. In particular, this implements the
> > > required interface in <asm/kfence.h>. Currently, the arm64 version does
> > > not yet use a statically allocated memory pool, at the cost of a pointer
> > > load for each is_kfence_address().
> > [...]
> > > diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
> > [...]
> > > +static inline bool arch_kfence_initialize_pool(void)
> > > +{
> > > +	const unsigned int num_pages = ilog2(roundup_pow_of_two(KFENCE_POOL_SIZE / PAGE_SIZE));
> > > +	struct page *pages = alloc_pages(GFP_KERNEL, num_pages);
> > > +
> > > +	if (!pages)
> > > +		return false;
> > > +
> > > +	__kfence_pool = page_address(pages);
> > > +	return true;
> > > +}
> >
> > If you're going to do "virt_to_page(meta->addr)->slab_cache = cache;"
> > on these pages in kfence_guarded_alloc(), and pass them into kfree(),
> > you'd better mark these pages as non-compound - something like
> > alloc_pages_exact() or split_page() may help. Otherwise, I think when
> > SLUB's kfree() does virt_to_head_page() right at the start, that will
> > return a pointer to the first page of the entire __kfence_pool, and
> > then when it loads page->slab_cache, it gets some random cache and
> > stuff blows up. Kinda surprising that you haven't run into that during
> > your testing, maybe I'm missing something...
>
> I added a WARN_ON() check in kfence_initialize_pool() to check if our
> pages are compound or not; they are not.
>
> In slub.c, __GFP_COMP is passed to alloc_pages(), which causes them to
> have a compound head I believe.

Aah, I mixed up high-order pages and compound pages. Sorry for the
noise.
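
For readers following along, here is an illustrative sketch of the kind
of WARN_ON() check Marco describes above (the helper name and exact form
are hypothetical, not taken from the actual patch): a plain
alloc_pages(GFP_KERNEL, order) call without __GFP_COMP returns a
high-order but non-compound allocation, so PageCompound() is false for
every page backing the pool and compound_head() maps each of those pages
back to itself.

#include <linux/bug.h>
#include <linux/mm.h>

/*
 * Illustrative only: sanity-check that the pages backing the KFENCE
 * pool are not compound. Since alloc_pages() was called without
 * __GFP_COMP, none of the pages carry head/tail state, so both
 * checks below are expected to stay silent.
 */
static void kfence_check_pool_pages(struct page *first, unsigned long nr_pages)
{
	unsigned long i;

	for (i = 0; i < nr_pages; i++) {
		WARN_ON(PageCompound(&first[i]));
		WARN_ON(compound_head(&first[i]) != &first[i]);
	}
}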