On 26.09.19 09:43, Michal Hocko wrote:
> On Thu 26-09-19 09:12:50, David Hildenbrand wrote:
>> On 26.09.19 03:34, Alastair D'Silva wrote:
>>> From: Alastair D'Silva <alastair@xxxxxxxxxxx>
>>>
>>> On PowerPC, the address ranges allocated to OpenCAPI LPC memory
>>> are allocated from firmware. These address ranges may be higher
>>> than what older kernels permit, as we increased the maximum
>>> permissible address in commit 4ffe713b7587
>>> ("powerpc/mm: Increase the max addressable memory to 2PB"). It is
>>> possible that the addressable range may change again in the
>>> future.
>>>
>>> In this scenario, we end up with a bogus section returned from
>>> __section_nr (see the discussion on the thread "mm: Trigger bug on
>>> if a section is not found in __section_nr").
>>>
>>> Adding a check here means that we fail early and have an
>>> opportunity to handle the error gracefully, rather than rumbling
>>> on and potentially accessing an incorrect section.
>>>
>>> Further discussion is also on the thread ("powerpc: Perform a bounds
>>> check in arch_add_memory")
>>> http://lkml.kernel.org/r/20190827052047.31547-1-alastair@xxxxxxxxxxx
>>>
>>> Signed-off-by: Alastair D'Silva <alastair@xxxxxxxxxxx>
>>> ---
>>>  mm/memory_hotplug.c | 20 ++++++++++++++++++++
>>>  1 file changed, 20 insertions(+)
>>>
>>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>> index c73f09913165..212804c0f7f5 100644
>>> --- a/mm/memory_hotplug.c
>>> +++ b/mm/memory_hotplug.c
>>> @@ -278,6 +278,22 @@ static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
>>>  	return 0;
>>>  }
>>>  
>>> +static int check_hotplug_memory_addressable(unsigned long pfn,
>>> +					    unsigned long nr_pages)
>>> +{
>>> +	unsigned long max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;
>>> +
>>> +	if (max_addr >> MAX_PHYSMEM_BITS) {
>>> +		WARN(1,
>>> +		     "Hotplugged memory exceeds maximum addressable address, range=%#lx-%#lx, maximum=%#lx\n",
>>> +		     pfn << PAGE_SHIFT, max_addr,
>>> +		     (1ul << (MAX_PHYSMEM_BITS + 1)) - 1);
>>> +		return -E2BIG;
>>> +	}
>>> +
>>> +	return 0;
>>> +}
>>> +
>>>  /*
>>>   * Reasonably generic function for adding memory. It is
>>>   * expected that archs that support memory hotplug will
>>> @@ -291,6 +307,10 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
>>>  	unsigned long nr, start_sec, end_sec;
>>>  	struct vmem_altmap *altmap = restrictions->altmap;
>>>  
>>> +	err = check_hotplug_memory_addressable(pfn, nr_pages);
>>> +	if (err)
>>> +		return err;
>>> +
>>>  	if (altmap) {
>>>  		/*
>>>  		 * Validate altmap is within bounds of the total request
>>>
>>
>>
>> I know Michal suggested this, but I still prefer checking early instead
>> of when we're knee-deep into adding of memory.
> 
> What is your concern here? Unwinding the state should be pretty
> straightforward from this failure path.

Just the general "check what you can check early without locks" approach.
But yeah, this series is probably not worth a v5, so I can live with this
change just fine :)

-- 
Thanks,

David / dhildenb
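
The arithmetic in check_hotplug_memory_addressable() above is simple enough to
model outside the kernel. Below is a minimal userspace sketch of the same
bounds test, assuming illustrative values of PAGE_SHIFT = 16 (64K pages) and
MAX_PHYSMEM_BITS = 51 (the 2PB limit referred to in the commit message);
hotplug_range_addressable() is just a stand-in name for the check, not the
kernel implementation itself.

/*
 * Standalone model of the addressability check, using illustrative
 * constants (PAGE_SHIFT = 16, MAX_PHYSMEM_BITS = 51); not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	 16	/* 64K pages */
#define MAX_PHYSMEM_BITS 51	/* 2PB of addressable physical memory */

/* True if [pfn, pfn + nr_pages) stays within the addressable range. */
static bool hotplug_range_addressable(unsigned long pfn, unsigned long nr_pages)
{
	unsigned long max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;

	/* Any bit at or above MAX_PHYSMEM_BITS means the range is too high. */
	return (max_addr >> MAX_PHYSMEM_BITS) == 0;
}

int main(void)
{
	/* The last 256 pages below the 2PB limit are accepted... */
	unsigned long ok_pfn = (1ul << (MAX_PHYSMEM_BITS - PAGE_SHIFT)) - 0x100;
	/* ...while a range starting at the limit (e.g. a firmware-placed
	 * OpenCAPI LPC window beyond 2PB) is rejected. */
	unsigned long bad_pfn = 1ul << (MAX_PHYSMEM_BITS - PAGE_SHIFT);

	printf("pfn %#lx: %s\n", ok_pfn,
	       hotplug_range_addressable(ok_pfn, 0x100) ? "addressable" : "rejected");
	printf("pfn %#lx: %s\n", bad_pfn,
	       hotplug_range_addressable(bad_pfn, 0x100) ? "addressable" : "rejected");
	return 0;
}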