On Thu, 15 Oct 2020 at 15:39, Mark Rutland <mark.rutland@xxxxxxx> wrote:
> On Wed, Oct 14, 2020 at 09:12:37PM +0200, Marco Elver wrote:
> > On Thu, 8 Oct 2020 at 12:45, Mark Rutland <mark.rutland@xxxxxxx> wrote:
> > > On Thu, Oct 08, 2020 at 11:40:52AM +0200, Marco Elver wrote:
> > > > On Thu, 1 Oct 2020 at 19:58, Mark Rutland <mark.rutland@xxxxxxx> wrote:
>
> > > > > > > If you need virt_to_page() to work, the address has to be part of the
> > > > > > > linear/direct map.
>
> > > > We're going with dynamically allocating the pool (for both x86 and
> > > > arm64),
> > [...]
>
> > We've got most of this sorted now for v5 -- thank you!
> >
> > The only thing we're wondering now, is if there are any corner cases
> > with using memblock_alloc'd memory for the KFENCE pool? (We'd like to
> > avoid page alloc's MAX_ORDER limit.) We have a version that passes
> > tests on x86 and arm64, but checking just in case. :-)
>
> AFAICT otherwise the only noticeable difference might be PageSlab(), if
> that's clear for KFENCE allocated pages? A few helpers appear to check
> that to determine how something was allocated (e.g. in the scatterlist
> and hwpoison code), and I suspect that needs to behave the same.

We had to take care of setting PageSlab before, too. We do this during
kfence_init().

> Otherwise, I *think* using memblock_alloc should be fine on arm64; I'm
> not entirely sure for x86 (but suspect it's similar). On arm64:
>
> * All memory is given a struct page via memblocks_present() adding all
>   memory memblocks. This includes memory allocated by memblock_alloc().
>
> * All memory is mapped into the linear map via arm64's map_mem() adding
>   all (non-nomap) memory memblocks. This includes memory allocated by
>   memblock_alloc().

Very good, thank you. We'll send v5 with these changes rebased on
5.10-rc1 (in ~2 weeks).

Thanks,
-- Marco
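
As a rough sketch of the approach discussed above -- reserving the pool
from memblock during early boot and marking its pages as slab pages in
kfence_init() -- something like the following could work. This is not the
actual KFENCE patch: the pool-size constant, the error handling, and the
blanket loop over every page are simplified placeholders (the real series
only marks object pages, not the guard pages in between).

#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/mm.h>
#include <linux/page-flags.h>

/*
 * Illustrative size only; the real pool size depends on the number of
 * KFENCE objects configured.
 */
#define KFENCE_POOL_SIZE	(2UL << 20)

static char *__kfence_pool __read_mostly;

/* Called from early arch setup, before the page allocator is usable. */
void __init kfence_alloc_pool(void)
{
	/*
	 * Memory from memblock_alloc() gets struct pages via
	 * memblocks_present() and is covered by the linear map (arm64's
	 * map_mem()), so virt_to_page() works on these addresses.
	 */
	__kfence_pool = memblock_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
	if (!__kfence_pool)
		pr_err("kfence: failed to allocate pool\n");
}

/* Called once core mm is up. */
void __init kfence_init(void)
{
	unsigned long addr = (unsigned long)__kfence_pool;

	if (!__kfence_pool)
		return;

	/*
	 * Mark the pool's pages as slab pages so that PageSlab()-based
	 * checks (e.g. in the scatterlist and hwpoison code) treat KFENCE
	 * objects like other slab allocations. Simplified here: every page
	 * in the pool is marked, guard pages included.
	 */
	for (; addr < (unsigned long)__kfence_pool + KFENCE_POOL_SIZE;
	     addr += PAGE_SIZE)
		__SetPageSlab(virt_to_page(addr));
}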