On 8/16/21 4:27 PM, Andrew Morton wrote:
> Also, pushback...

That is welcome.  I only have the one specific use case mentioned here.

>
> On Mon, 16 Aug 2021 15:49:45 -0700 Mike Kravetz <mike.kravetz@xxxxxxxxxx> wrote:
>
>>
>> Real world use cases
>> --------------------
>> There are groups today using hugetlb pages to back VMs on x86.  Their
>> use case is as described above.  They have experienced the issues with
>> performance and not necessarily getting the excepted number smaller huge
>
> ("number of")

thanks, another typo to fix.

>
>> pages after free/allocate cycle.
>>
>
> It really is a ton of new code.  I think we're owed much more detail
> about the problem than the above.  To be confident that all this
> material is truly justified?

The desired functionality for this specific use case is simply to convert
a 1G hugetlb page to 512 2MB hugetlb pages.  As mentioned,

"Converting larger to smaller hugetlb pages can be accomplished today by
first freeing the larger page to the buddy allocator and then allocating
the smaller pages.  However, there are two issues with this approach:
1) This process can take quite some time, especially if allocation of the
   smaller pages is not immediate and requires migration/compaction.
2) There is no guarantee that the total size of smaller pages allocated
   will match the size of the larger page which was freed.  This is
   because the area freed by the larger page could quickly be fragmented."

These two issues have been experienced in practice.

A big chunk of the code changes (approx 50%) is for the vmemmap
optimizations.  This is also the most complex part of the changes.  I
added the code because interaction with vmemmap reduction was discussed
during the RFC.  It is only a performance enhancement and honestly may
not be worth the cost/risk.  I will get some numbers to measure the
actual benefit.

>
> Also, some selftests and benchmarking code in tools/testing/selftests
> would be helpful?
>

Will do.
--
Mike Kravetz
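
P.S. For reference, a minimal userspace sketch of the existing two-step
conversion path quoted above, driving the hugetlb sysfs pool files.  The
pool adjustments (one 1G page down, 512 2MB pages up), the error handling,
and the assumption that the pools start in a steady state are illustrative
only; this is not part of the series.

/*
 * Illustrative only: today's two-step conversion of one 1G hugetlb page
 * into 2MB hugetlb pages via the existing sysfs pool interface.
 * Must be run as root.
 */
#include <stdio.h>
#include <stdlib.h>

#define GIGA_POOL "/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages"
#define MEGA_POOL "/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"

static long read_pool(const char *path)
{
	long val;
	FILE *f = fopen(path, "r");

	if (!f || fscanf(f, "%ld", &val) != 1) {
		perror(path);
		exit(1);
	}
	fclose(f);
	return val;
}

static void write_pool(const char *path, long val)
{
	FILE *f = fopen(path, "w");

	if (!f || fprintf(f, "%ld", val) < 0) {
		perror(path);
		exit(1);
	}
	fclose(f);
}

int main(void)
{
	long giga = read_pool(GIGA_POOL);
	long mega = read_pool(MEGA_POOL);

	/* Step 1: free one 1G page back to the buddy allocator. */
	write_pool(GIGA_POOL, giga - 1);

	/*
	 * Step 2: ask for 512 more 2MB pages.  This may stall on
	 * migration/compaction (issue 1) and, if the freed 1G range has
	 * been fragmented in the meantime, may not be fully satisfied
	 * (issue 2).
	 */
	write_pool(MEGA_POOL, mega + 512);

	printf("2MB pages actually allocated: %ld of 512\n",
	       read_pool(MEGA_POOL) - mega);
	return 0;
}

Reading nr_hugepages back after the second write shows how many of the 512
pages the kernel actually managed to add to the pool, which is where the
shortfall from issue 2 shows up in practice.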