On Wed, Dec 09, 2020 at 08:07:04AM +0530, Anshuman Khandual wrote:
> >> +	if (seg->end + 1 > VMEM_MAX_PHYS || seg->end + 1 < seg->start_addr) {
> >> +		rc = -ERANGE;
> >> +		goto out_resource;
> >> +	}
> >> +
...
> >> +struct range arch_get_mappable_range(void)
> >> +{
> >> +	struct range memhp_range;
> >> +
> >> +	memhp_range.start = 0;
> >> +	memhp_range.end = VMEM_MAX_PHYS;
> >> +	return memhp_range;
> >> +}
> >> +
> >>  int arch_add_memory(int nid, u64 start, u64 size,
> >>  		    struct mhp_params *params)
> >>  {
> >> @@ -291,6 +300,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
> >>  	if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
> >>  		return -EINVAL;
> >>  
> >> +	VM_BUG_ON(!memhp_range_allowed(start, size, 1));
> >>  	rc = vmem_add_mapping(start, size);
> >>  	if (rc)
> >
> > Is there a reason why you added the memhp_range_allowed() check call
> > to arch_add_memory() instead of vmem_add_mapping()? If you would do
>
> As I had mentioned previously, memhp_range_allowed() is available with
> CONFIG_MEMORY_HOTPLUG but vmem_add_mapping() is always available. Hence
> there will be a build failure in vmem_add_mapping() for the range check
> memhp_range_allowed() without memory hotplug enabled.
>
> > that, then the extra code in __segment_load() wouldn't be
> > required.
> > Even though the error message from memhp_range_allowed() might be
> > highly confusing.
>
> Alternatively, leaving __segment_load() and vmem_add_memory() unchanged
> will create three range checks, i.e. two memhp_range_allowed() and the
> existing VMEM_MAX_PHYS check in vmem_add_mapping(), on all the hotplug
> paths, which is not optimal.

Ah, sorry. I didn't follow this discussion too closely. I just thought
my point of view would be clear: let's not have two different ways to
check for the same thing, which must be kept in sync.

Therefore I was wondering why this next version is still doing that.
Please find a way to solve this.