On Tue, Sep 29, 2020 at 4:28 PM Mark Rutland <mark.rutland@xxxxxxx> wrote:
>
> On Mon, Sep 21, 2020 at 03:26:04PM +0200, Marco Elver wrote:
> > Add architecture specific implementation details for KFENCE and enable
> > KFENCE for the arm64 architecture. In particular, this implements the
> > required interface in <asm/kfence.h>. Currently, the arm64 version does
> > not yet use a statically allocated memory pool, at the cost of a pointer
> > load for each is_kfence_address().
> >
> > Reviewed-by: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
> > Co-developed-by: Alexander Potapenko <glider@xxxxxxxxxx>
> > Signed-off-by: Alexander Potapenko <glider@xxxxxxxxxx>
> > Signed-off-by: Marco Elver <elver@xxxxxxxxxx>
> > ---
> > For ARM64, we would like to solicit feedback on what the best option is
> > to obtain a constant address for __kfence_pool. One option is to declare
> > a memory range in the memory layout to be dedicated to KFENCE (as is
> > done for KASAN); however, it is unclear if this is the best available
> > option. We would like to avoid touching the memory layout.
> > ---
> >  arch/arm64/Kconfig              |  1 +
> >  arch/arm64/include/asm/kfence.h | 39 +++++++++++++++++++++++++++++++++
> >  arch/arm64/mm/fault.c           |  4 ++++
> >  3 files changed, 44 insertions(+)
> >  create mode 100644 arch/arm64/include/asm/kfence.h
> >
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index 6d232837cbee..1acc6b2877c3 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -132,6 +132,7 @@ config ARM64
> >  	select HAVE_ARCH_JUMP_LABEL_RELATIVE
> >  	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
> >  	select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
> > +	select HAVE_ARCH_KFENCE if (!ARM64_16K_PAGES && !ARM64_64K_PAGES)
> >  	select HAVE_ARCH_KGDB
> >  	select HAVE_ARCH_MMAP_RND_BITS
> >  	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
> > diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
> > new file mode 100644
> > index 000000000000..608dde80e5ca
> > --- /dev/null
> > +++ b/arch/arm64/include/asm/kfence.h
> > @@ -0,0 +1,39 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +
> > +#ifndef __ASM_KFENCE_H
> > +#define __ASM_KFENCE_H
> > +
> > +#include <linux/kfence.h>
> > +#include <linux/log2.h>
> > +#include <linux/mm.h>
> > +
> > +#include <asm/cacheflush.h>
> > +
> > +#define KFENCE_SKIP_ARCH_FAULT_HANDLER "el1_sync"
> > +
> > +/*
> > + * FIXME: Support HAVE_ARCH_KFENCE_STATIC_POOL: Use the statically allocated
> > + * __kfence_pool, to avoid the extra pointer load for is_kfence_address(). By
> > + * default, however, we do not have struct pages for static allocations.
> > + */
> > +
> > +static inline bool arch_kfence_initialize_pool(void)
> > +{
> > +	const unsigned int num_pages = ilog2(roundup_pow_of_two(KFENCE_POOL_SIZE / PAGE_SIZE));
> > +	struct page *pages = alloc_pages(GFP_KERNEL, num_pages);
> > +
> > +	if (!pages)
> > +		return false;
> > +
> > +	__kfence_pool = page_address(pages);
> > +	return true;
> > +}
> > +
> > +static inline bool kfence_protect_page(unsigned long addr, bool protect)
> > +{
> > +	set_memory_valid(addr, 1, !protect);
> > +
> > +	return true;
> > +}
>
> This is only safe if the linear map is forced to page granularity. That's
> the default with rodata=full, but this is not always the case, so this
> will need some interaction with the MMU setup in arch/arm64/mm/mmu.c.

On x86 we ensure this by reallocating the necessary page tables.
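Concretely, something along these lines (a sketch based on the x86 side of this series, quoted from memory, so treat the details as approximate): we walk the pool before handing it to KFENCE and split any huge mappings covering it down to 4K, so that protection can later be toggled on individual PTEs:

static inline bool arch_kfence_initialize_pool(void)
{
	unsigned long addr;

	/* Split any large mapping covering the pool, so each page can
	 * be (un)protected individually. */
	for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr);
	     addr += PAGE_SIZE) {
		unsigned int level;

		if (!lookup_address(addr, &level))
			return false;

		if (level != PG_LEVEL_4K)
			set_memory_4k(addr, 1);
	}

	return true;
}

This relies on x86's lookup_address()/set_memory_4k(), which have no direct arm64 equivalents, hence the interaction with arch/arm64/mm/mmu.c you mention.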
But it looks like your suggestion is what we need for arm64, as long as we also want virt_to_page() to work for our pool. It is still unclear to me whether the carveout you suggest can be placed at a fixed (known at link time) address, as the main point of this dance is to remove memory accesses from is_kfence_address(); see the sketch at the end of this mail for what that buys us.

> Thanks,
> Mark.

--
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg
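P.S. For concreteness, a sketch of what a link-time-constant pool address makes possible (illustrative only; __kfence_pool and KFENCE_POOL_SIZE are as in this series, the rest is not verbatim from the patchset): with a statically allocated pool, the check compiles down to comparisons against link-time constants, with no load of a pool pointer:

/* Statically allocated pool, so &__kfence_pool is a link-time constant. */
extern char __kfence_pool[KFENCE_POOL_SIZE];

static __always_inline bool is_kfence_address(const void *addr)
{
	/* Pure arithmetic against constants; no memory access needed to
	 * fetch the pool base on this hot path. */
	return unlikely((char *)addr >= __kfence_pool &&
			(char *)addr < __kfence_pool + KFENCE_POOL_SIZE);
}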