On Tue, 29 Sep 2020 at 16:24, Mark Rutland <mark.rutland@xxxxxxx> wrote:
[...]
>
> From other sub-threads it sounds like these addresses are not part of
> the linear/direct map. Having kmalloc return addresses outside of the
> linear map is going to break anything that relies on virt<->phys
> conversions, and is liable to make DMA corrupt memory. There were
> problems of that sort with VMAP_STACK, and this is why kvmalloc() is
> separate from kmalloc().
>
> Have you tested with CONFIG_DEBUG_VIRTUAL? I'd expect that to scream.
>
> I strongly suspect this isn't going to be safe unless you always use an
> in-place carveout from the linear map (which could be the linear alias
> of a static carveout).

That's an excellent point, thank you! Indeed, on arm64, a naive
static-pool version screams with CONFIG_DEBUG_VIRTUAL. We'll try to put
together an arm64 version using a carveout as you suggest (a rough
sketch of the direction we have in mind is at the end of this mail).

> [...]
> > +static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
> > +{
> > +	return static_branch_unlikely(&kfence_allocation_key) ? __kfence_alloc(s, size, flags) :
> > +								 NULL;
> > +}
>
> Minor (unrelated) nit, but this would be easier to read as:
>
> static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
> {
> 	if (static_branch_unlikely(&kfence_allocation_key))
> 		return __kfence_alloc(s, size, flags);
> 	return NULL;
> }

Will fix for v5.

Thanks,
-- Marco
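
P.S.: To make sure we understood the carveout suggestion correctly,
below is a rough sketch of one possible shape for this. The pool symbol
names, the pool size, and the init hook are placeholders for
illustration only; the point is just that a carveout placed in the
kernel image can be accessed via its linear-map alias, so virt<->phys
conversions on pool addresses stay valid:

#include <linux/init.h>
#include <linux/mm.h>	/* lm_alias() */

/* Placeholder pool size. */
#define KFENCE_POOL_SIZE	(255 * 2 * PAGE_SIZE)

/*
 * Static carveout in the kernel image; the pool is only ever accessed
 * through the carveout's alias in the linear map, never through the
 * image address of this symbol.
 */
static char kfence_pool_carveout[KFENCE_POOL_SIZE] __aligned(PAGE_SIZE);

static char *kfence_pool;

static int __init kfence_init_pool(void)
{
	/*
	 * lm_alias() gives the linear-map alias of a kernel-image
	 * symbol, so virt_to_phys()/virt_to_page() on pool addresses
	 * are valid and CONFIG_DEBUG_VIRTUAL has nothing to complain
	 * about.
	 */
	kfence_pool = lm_alias(kfence_pool_carveout);
	return 0;
}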