On 20 Feb 2025, at 14:26, David Hildenbrand wrote:

> On 20.02.25 19:43, Gregory Price wrote:
>> On Thu, Feb 20, 2025 at 09:50:07AM -0800, Yang Shi wrote:
>>>>> I will double check that this isn't working as expected, and I'll double
>>>>> check for a build option as well.
>>>>>
>>>>> Stupid question - it sorta seems like you'd want this as the default
>>>>> setting for driver-managed hotplug memory blocks, but I suppose for
>>>>> very small blocks there's problems (as described in the docs).
>>>>
>>>> The issue is that it is per-memblock. So you'll never have 1 GiB ranges
>>>> of consecutive usable memory (e.g., a 1 GiB hugetlb page).
>>>
>>> Regardless of ZONE_MOVABLE or ZONE_NORMAL, right?
>>>
>>> Thanks,
>>> Yang
>>
>> From my testing, yes.
>
> Yes, the only way to get some 1 GiB pages is by using larger memory blocks
> (e.g., 2 GiB on x86-64), which comes with a different set of issues
> (esp. hotplug granularity).

An alternative I can think of is to mark a hot-plugged memory block as dedicated to memmap and use it to provide the memmap for new memory blocks. In this way, one 256MB memory block can serve 256MB * (256MB / 4MB) = 16GB of hot-plugged memory. Yes, it will waste memory until the full 256MB + 16GB is online, but that might be easier to handle than variable-sized memory blocks, I suppose?

> Of course, only 1x usable 1 GiB page for each 2 GiB block.
>
> There were ideas on how to optimize that (e.g., requiring a new sysfs
> interface to expose variable-sized blocks); if anybody is interested,
> please reach out.

Best Regards,
Yan, Zi
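P.S. A quick sketch of where the 256MB * (256MB / 4MB) = 16GB figure comes from, assuming the usual x86-64 values (4 KiB base pages, 64-byte struct page, 256 MiB memory blocks); other architectures or configs will differ:

```python
# Hedged arithmetic sketch (assumed x86-64 constants, not a kernel API):
PAGE_SIZE = 4 * 1024                  # 4 KiB base page
STRUCT_PAGE_SIZE = 64                 # sizeof(struct page) on x86-64
BLOCK_SIZE = 256 * 1024 * 1024        # 256 MiB memory block

# memmap needed to describe one memory block:
pages_per_block = BLOCK_SIZE // PAGE_SIZE                 # 65536 pages
memmap_per_block = pages_per_block * STRUCT_PAGE_SIZE     # 4 MiB

# One block dedicated entirely to memmap can describe this many blocks:
blocks_covered = BLOCK_SIZE // memmap_per_block           # 64 blocks
covered_bytes = blocks_covered * BLOCK_SIZE               # 16 GiB

print(memmap_per_block // (1024 * 1024), "MiB memmap per block")
print(covered_bytes // (1024 ** 3), "GiB covered per dedicated block")
```

So the 4 MiB of memmap per 256 MiB block is the ratio that yields the 64x (16 GiB) coverage figure.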