On 6/16/2022 5:34 AM, Sean Christopherson wrote:
> On Wed, Jun 15, 2022, Sean Christopherson wrote:
>> On Thu, Apr 28, 2022, Manali Shukla wrote:
>>> +void setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len, bool nested_mmu)
>>>  {
>>>         u64 max = (u64)len + (u64)start;
>>>         u64 phys = start;
>>>
>>> -       while (phys + LARGE_PAGE_SIZE <= max) {
>>> -               install_large_page(cr3, phys, (void *)(ulong)phys);
>>> -               phys += LARGE_PAGE_SIZE;
>>> -       }
>>> -       install_pages(cr3, phys, max - phys, (void *)(ulong)phys);
>>> +       if (nested_mmu == false) {
>>> +               while (phys + LARGE_PAGE_SIZE <= max) {
>>> +                       install_large_page(cr3, phys, (void *)(ulong)phys);
>>> +                       phys += LARGE_PAGE_SIZE;
>>> +               }
>>> +               install_pages(cr3, phys, max - phys, (void *)(ulong)phys);
>>> +       } else {
>>> +               set_pte_opt_mask();
>>> +               install_pages(cr3, phys, len, (void *)(ulong)phys);
>>> +               reset_pte_opt_mask();
>>> +       }
>>
>> Why can't a nested_mmu use large pages?
>
> Oh, duh, you're just preserving the existing functionality.
>
> I dislike bool params, but I also don't see a better option at this time. To make
> it slightly less evil, add a wrapper so that the use and bool are closer together.
> And then the callers don't need to be updated.
>
> void __setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len, bool use_hugepages);
>
> static inline void setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len)
> {
>         __setup_mmu_range(cr3, start, len, true);
> }
>
>
> And if you name it use_hugepages, then you can do:
>
> void __setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len, bool nested_mmu)
> {
>         u64 orig_opt_mask = pte_opt_mask;
>         u64 max = (u64)len + (u64)start;
>         u64 phys = start;
>
>         /* comment goes here. */
>         pte_opt_mask |= PT_USER_MASK;
>
>         if (use_hugepages) {
>                 while (phys + LARGE_PAGE_SIZE <= max) {
>                         install_large_page(cr3, phys, (void *)(ulong)phys);
>                         phys += LARGE_PAGE_SIZE;
>                 }
>         }
>         install_pages(cr3, phys, max - phys, (void *)(ulong)phys);
>
>         pte_opt_mask = orig_opt_mask;
> }

Hi Sean,

Thank you so much for reviewing my changes.

The RSVD bit test case will start failing with the above implementation,
because we would be setting the PT_USER_MASK bit on all host PTEs (in order
to toggle CR4.SMEP), which defeats one of the purposes of this patch.

Right now, the pte_opt_mask value set from setup_vm() is unconditionally
overwritten in setup_mmu_range().

How about setting PT_USER_MASK only for the nested MMU in setup_mmu_range()?
That retains the pte_opt_mask value set from setup_vm() in all other cases.

#define IS_NESTED_MMU   1ULL
#define USE_HUGEPAGES   2ULL

void __setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len,
                       unsigned long mmu_flags)
{
        u64 orig_opt_mask = pte_opt_mask;
        u64 max = (u64)len + (u64)start;
        u64 phys = start;

        /*
         * Only 4k pages are allocated for the nested page table, and
         * PT_USER_MASK needs to be enabled for the nested pages.
         */
        if (mmu_flags & IS_NESTED_MMU)
                pte_opt_mask |= PT_USER_MASK;

        if (mmu_flags & USE_HUGEPAGES) {
                while (phys + LARGE_PAGE_SIZE <= max) {
                        install_large_page(cr3, phys, (void *)(ulong)phys);
                        phys += LARGE_PAGE_SIZE;
                }
        }
        install_pages(cr3, phys, max - phys, (void *)(ulong)phys);

        pte_opt_mask = orig_opt_mask;
}

static inline void setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len)
{
        __setup_mmu_range(cr3, start, len, USE_HUGEPAGES);
}

Thank you,
Manali
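As a rough illustration of how the proposed flags would be used (this is only
a sketch, not part of the patch: the function name setup_npt_range, the npt
pointer, and the 4 GiB range are made up for the example), a nested-NPT test
could build its page table with 4k, user-accessible PTEs while leaving the
host mappings and pte_opt_mask untouched:

        /*
         * Hypothetical caller sketch: identity-map the first 4 GiB into a
         * test-owned nested page table using 4k pages with PT_USER_MASK set.
         * __setup_mmu_range() restores the original pte_opt_mask on return,
         * so host PTEs installed later are unaffected.
         */
        static void setup_npt_range(pgd_t *npt)
        {
                __setup_mmu_range(npt, 0, 1ull << 32, IS_NESTED_MMU);
        }

Existing callers would keep the current behavior through the setup_mmu_range()
wrapper, which passes only USE_HUGEPAGES and never sets IS_NESTED_MMU.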