On Wed, Jun 15, 2022, Sean Christopherson wrote:
> On Thu, Apr 28, 2022, Manali Shukla wrote:
> > +void setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len, bool nested_mmu)
> >  {
> >  	u64 max = (u64)len + (u64)start;
> >  	u64 phys = start;
> >
> > -	while (phys + LARGE_PAGE_SIZE <= max) {
> > -		install_large_page(cr3, phys, (void *)(ulong)phys);
> > -		phys += LARGE_PAGE_SIZE;
> > -	}
> > -	install_pages(cr3, phys, max - phys, (void *)(ulong)phys);
> > +	if (nested_mmu == false) {
> > +		while (phys + LARGE_PAGE_SIZE <= max) {
> > +			install_large_page(cr3, phys, (void *)(ulong)phys);
> > +			phys += LARGE_PAGE_SIZE;
> > +		}
> > +		install_pages(cr3, phys, max - phys, (void *)(ulong)phys);
> > +	} else {
> > +		set_pte_opt_mask();
> > +		install_pages(cr3, phys, len, (void *)(ulong)phys);
> > +		reset_pte_opt_mask();
> > +	}
>
> Why can't a nested_mmu use large pages?

Oh, duh, you're just preserving the existing functionality.  I dislike bool
params, but I also don't see a better option at this time.  To make it
slightly less evil, add a wrapper so that the usage and the bool are closer
together.  And then the callers don't need to be updated.

  void __setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len,
			 bool use_hugepages);

  static inline void setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len)
  {
	__setup_mmu_range(cr3, start, len, true);
  }

And if you name it use_hugepages, then you can do:

  void __setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len,
			 bool use_hugepages)
  {
	u64 orig_opt_mask = pte_opt_mask;
	u64 max = (u64)len + (u64)start;
	u64 phys = start;

	/* comment goes here. */
	pte_opt_mask |= PT_USER_MASK;

	if (use_hugepages) {
		while (phys + LARGE_PAGE_SIZE <= max) {
			install_large_page(cr3, phys, (void *)(ulong)phys);
			phys += LARGE_PAGE_SIZE;
		}
	}
	install_pages(cr3, phys, max - phys, (void *)(ulong)phys);

	pte_opt_mask = orig_opt_mask;
  }
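
Call sites could then look something like this (the nested_cr3 name below is
purely hypothetical, just to show that only the new nested path needs to pass
the bool):

  /* Existing identity-map callers are untouched and still get hugepages. */
  setup_mmu_range(cr3, start, len);

  /* Only the new nested path opts out of hugepages explicitly. */
  __setup_mmu_range(nested_cr3, start, len, false);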