On Wed, Apr 01, 2020 at 07:25:53PM -0700, Andrew Morton wrote:
> On Wed, 11 Mar 2020 15:09:20 -0700 Roman Gushchin <guro@xxxxxx> wrote:
>
> > At large scale, rebooting servers in order to allocate gigantic hugepages
> > is quite expensive and complex. At the same time, keeping some constant
> > percentage of memory in reserved hugepages even if the workload isn't
> > using it is a big waste: not all workloads can benefit from using 1 GB
> > pages.
> >
> > The following solution can solve the problem:
> > 1) At boot time a dedicated CMA area* is reserved. The size is passed
> >    as a kernel argument.
> > 2) Run-time allocations of gigantic hugepages are performed using the
> >    CMA allocator and the dedicated CMA area.
> >
> > In this case gigantic hugepages can be allocated successfully with a
> > high probability; however, the memory isn't completely wasted if nobody
> > is using 1 GB hugepages: it can be used for pagecache, anon memory,
> > THPs, etc.
> >
> > * On a multi-node machine a per-node CMA area is allocated on each node.
> >   Subsequent gigantic hugetlb allocations use the first available NUMA
> >   node if the mask isn't specified by the user.
> >
> > Usage:
> > 1) Configure the kernel to allocate a CMA area for hugetlb allocations:
> >    pass hugetlb_cma=10G as a kernel argument.
> >
> > 2) Allocate hugetlb pages as usual, e.g.
> >    echo 10 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
> >
> > If the option isn't enabled or the allocation of the CMA area failed,
> > the current behavior of the system is preserved.
> >
> > x86 and arm64 are covered by this patch; other architectures can be
> > trivially added later.
>
> Lots of review input on v2, but then everyone went quiet ;)
>
> Has everything been addressed?

I hope so.

There is a nice cleanup from Aslan, which can be merged in or treated
as a separate patch.

If someone else has any concerns, I'm happy to address them too.

Thanks!
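
For anyone who wants to try the usage steps above end to end, here is a
minimal userspace sketch (not part of the patch, just the standard mmap()
hugetlb interface) that maps one of the 1 GB pages once the pool has been
populated via hugetlb_cma= and nr_hugepages. The file name is made up, and
MAP_HUGE_1GB is assumed to come from the uapi headers, with a fallback
definition for older libc headers:

/* map1g.c - hypothetical example, not part of the patch. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Fallbacks in case the libc headers predate these flags. */
#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT 26
#endif
#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB (30 << MAP_HUGE_SHIFT)	/* log2(1 GB) == 30 */
#endif

#define SZ_1G (1UL << 30)

int main(void)
{
	/* Ask for one anonymous gigantic (1 GB) hugetlb page. */
	void *p = mmap(NULL, SZ_1G, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
		       -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB | MAP_HUGE_1GB)");
		return 1;
	}

	/* Touch the mapping so the gigantic page is actually faulted in. */
	memset(p, 0, SZ_1G);
	printf("mapped a 1 GB hugepage at %p\n", p);

	munmap(p, SZ_1G);
	return 0;
}

If the 1 GB pool is empty (or the CMA-backed allocation couldn't be
satisfied), the mmap() call simply fails and the program reports it, which
matches the "current behavior is preserved" fallback described above.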