Re: [kvm-unit-tests PATCH v4 4/8] x86: Improve set_mmu_range() to implement npt

On Thu, Apr 28, 2022, Manali Shukla wrote:
> If U/S bit is "0" for all page table entries, all these pages are
> considered as supervisor pages. By default, pte_opt_mask is set to "0"
> for all npt test cases, which sets U/S bit in all PTEs to "0".
> 
> Any nested page table accesses performed by the MMU are treated as user
> accesses. So while implementing a nested page table dynamically, PT_USER_MASK
> needs to be enabled for all npt entries.

Bits in PTEs aren't really "enabled".

> set_mmu_range() function is improved based on above analysis.

Same comments.  Don't provide analysis and then say "do that", just state what
the patch does.  Background details are great, but first and foremost the reader
needs to know what the patch actually does.

> Suggested-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> Signed-off-by: Manali Shukla <manali.shukla@xxxxxxx>
> ---
>  lib/x86/vm.c | 37 +++++++++++++++++++++++++++----------
>  lib/x86/vm.h |  3 +++
>  2 files changed, 30 insertions(+), 10 deletions(-)
> 
> diff --git a/lib/x86/vm.c b/lib/x86/vm.c
> index 25a4f5f..b555d5b 100644
> --- a/lib/x86/vm.c
> +++ b/lib/x86/vm.c
> @@ -4,7 +4,7 @@
>  #include "alloc_page.h"
>  #include "smp.h"
>  
> -static pteval_t pte_opt_mask;
> +static pteval_t pte_opt_mask, prev_pte_opt_mask;
>  
>  pteval_t *install_pte(pgd_t *cr3,
>  		      int pte_level,
> @@ -140,16 +140,33 @@ bool any_present_pages(pgd_t *cr3, void *virt, size_t len)
>  	return false;
>  }
>  
> -static void setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len)
> +void set_pte_opt_mask()
> +{
> +        prev_pte_opt_mask = pte_opt_mask;
> +        pte_opt_mask = PT_USER_MASK;
> +}
> +
> +void reset_pte_opt_mask()

These should have "void" parameters.  I'm surprised gcc doesn't complain.  But
that's a moot point, because these don't need to exist; just do the save/mod/restore
on the stack in setup_mmu_range().
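
The stack-based save/mod/restore amounts to three lines around the install
calls.  A minimal standalone sketch (the function name `map_as_user` is
hypothetical, and the typedef/mask value are stand-ins for the real lib/x86
definitions):

```c
#include <stdint.h>

typedef uint64_t pteval_t;	/* stand-in for the lib/x86 typedef */
#define PT_USER_MASK (1ull << 2)

static pteval_t pte_opt_mask;	/* the existing file-scope mask in vm.c */

/* Save/modify/restore on the stack; no second global, no helper pair. */
static void map_as_user(void)
{
	pteval_t opt_mask = pte_opt_mask;	/* save */

	pte_opt_mask |= PT_USER_MASK;		/* modify */
	/* ... install_pages()/install_large_page() would run here ... */
	pte_opt_mask = opt_mask;		/* restore */
}
```

The caller's mask is untouched afterward, so no reset helper is needed.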

> +{
> +        pte_opt_mask = prev_pte_opt_mask;
> +}
> +
> +void setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len, bool nested_mmu)
>  {
>  	u64 max = (u64)len + (u64)start;
>  	u64 phys = start;
>  
> -	while (phys + LARGE_PAGE_SIZE <= max) {
> -		install_large_page(cr3, phys, (void *)(ulong)phys);
> -		phys += LARGE_PAGE_SIZE;
> -	}
> -	install_pages(cr3, phys, max - phys, (void *)(ulong)phys);
> +        if (nested_mmu == false) {
> +                while (phys + LARGE_PAGE_SIZE <= max) {
> +                        install_large_page(cr3, phys, (void *)(ulong)phys);
> +		        phys += LARGE_PAGE_SIZE;
> +	        }
> +	        install_pages(cr3, phys, max - phys, (void *)(ulong)phys);
> +        } else {
> +                set_pte_opt_mask();
> +                install_pages(cr3, phys, len, (void *)(ulong)phys);
> +                reset_pte_opt_mask();
> +        }

Why can't a nested_mmu use large pages?


