Re: [PATCH] kvm: selftests: do not use bitfields larger than 32-bits for PTEs

On Wed, Apr 20, 2022, Paolo Bonzini wrote:
> ---
>  .../selftests/kvm/lib/x86_64/processor.c      | 202 ++++++++----------
>  1 file changed, 89 insertions(+), 113 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 9f000dfb5594..90c3d34ce80b 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -20,36 +20,18 @@
>  vm_vaddr_t exception_handlers;
>  
>  /* Virtual translation table structure declarations */

Stale comment.

> -struct pageUpperEntry {
> -	uint64_t present:1;
> -	uint64_t writable:1;
> -	uint64_t user:1;
> -	uint64_t write_through:1;
> -	uint64_t cache_disable:1;
> -	uint64_t accessed:1;
> -	uint64_t ignored_06:1;
> -	uint64_t page_size:1;
> -	uint64_t ignored_11_08:4;
> -	uint64_t pfn:40;
> -	uint64_t ignored_62_52:11;
> -	uint64_t execute_disable:1;
> -};
> -
> -struct pageTableEntry {
> -	uint64_t present:1;
> -	uint64_t writable:1;
> -	uint64_t user:1;
> -	uint64_t write_through:1;
> -	uint64_t cache_disable:1;
> -	uint64_t accessed:1;
> -	uint64_t dirty:1;
> -	uint64_t reserved_07:1;
> -	uint64_t global:1;
> -	uint64_t ignored_11_09:3;
> -	uint64_t pfn:40;
> -	uint64_t ignored_62_52:11;
> -	uint64_t execute_disable:1;
> -};
> +#define PTE_PRESENT_MASK  (1ULL << 0)
> +#define PTE_WRITABLE_MASK (1ULL << 1)
> +#define PTE_USER_MASK     (1ULL << 2)
> +#define PTE_ACCESSED_MASK (1ULL << 5)
> +#define PTE_DIRTY_MASK    (1ULL << 6)
> +#define PTE_LARGE_MASK    (1ULL << 7)
> +#define PTE_GLOBAL_MASK   (1ULL << 8)
> +#define PTE_NX_MASK       (1ULL << 63)

Any objection to using BIT_ULL()?  And tab(s) after the MASK so that there's some
breathing room in the unlikely scenario we need a new, longer flag?

> +#define PTE_PFN_MASK      0xFFFFFFFFFF000ULL

GENMASK_ULL(51, 12), or maybe use PAGE_SHIFT in the generation, though I find
that more difficult to read for whatever reason.

> +#define PTE_PFN_SHIFT     12

Can we use the kernel's nomenclature?  That way if selftests ever find a way to
pull in arch/x86/include/asm/page_types.h, we don't need to do a bunch of renames.
And IMO it will make it easier to context switch between KVM and selftests.


#define PTE_PRESENT_MASK	BIT_ULL(0)
#define PTE_WRITABLE_MASK	BIT_ULL(1)
#define PTE_USER_MASK		BIT_ULL(2)
#define PTE_ACCESSED_MASK	BIT_ULL(5)
#define PTE_DIRTY_MASK		BIT_ULL(6)
#define PTE_LARGE_MASK		BIT_ULL(7)
#define PTE_GLOBAL_MASK		BIT_ULL(8)
#define PTE_NX_MASK		BIT_ULL(63)

#define PAGE_SHIFT		12
#define PAGE_SIZE		(1ULL << PAGE_SHIFT)
#define PHYSICAL_PAGE_MASK	GENMASK_ULL(51, 12)

#define PTE_GET_PFN(pte) (((pte) & PHYSICAL_PAGE_MASK) >> PAGE_SHIFT)


I'd also vote to move these (in a different patch) to processor.h so that selftests
can use PAGE_SIZE in particular.

  tools/testing/selftests/kvm/x86_64 $ git grep -E "define\s+PAGE_SIZE" | wc -l
  6

And _vm_get_page_table_entry() has several gross open-coded masks/shifts that can
be opportunistically converted now:

@@ -269,8 +270,7 @@ static uint64_t *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid,
        struct kvm_cpuid_entry2 *entry;
        struct kvm_sregs sregs;
        int max_phy_addr;
-       /* Set the bottom 52 bits. */
-       uint64_t rsvd_mask = 0x000fffffffffffff;
+       uint64_t rsvd_mask = PHYSICAL_PAGE_MASK;
 
        entry = kvm_get_supported_cpuid_index(0x80000008, 0);
        max_phy_addr = entry->eax & 0x000000ff;
@@ -284,9 +284,8 @@ static uint64_t *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid,
         * the XD flag (bit 63) is reserved.
         */
        vcpu_sregs_get(vm, vcpuid, &sregs);
-       if ((sregs.efer & EFER_NX) == 0) {
-               rsvd_mask |= (1ull << 63);
-       }
+       if (!(sregs.efer & EFER_NX))
+               rsvd_mask |= PTE_NX_MASK;
 


And there's even more that can/should be cleaned up in the future, e.g. this pile,
though that can be left for a different day.

	/*
	 * Based on the mode check above there are 48 bits in the vaddr, so
	 * shift 16 to sign extend the last bit (bit-47),
	 */
	TEST_ASSERT(vaddr == (((int64_t)vaddr << 16) >> 16),
		"Canonical check failed.  The virtual address is invalid.");

	index[0] = (vaddr >> 12) & 0x1ffu;
	index[1] = (vaddr >> 21) & 0x1ffu;
	index[2] = (vaddr >> 30) & 0x1ffu;
	index[3] = (vaddr >> 39) & 0x1ffu;
  
> -	return (pte[index[0]].pfn * vm->page_size) + (gva & 0xfffu);
> +	return (PTE_GET_PFN(pte[index[0]]) * vm->page_size) + (gva & 0xfffu);

Yeesh, and yet more cleanup.  Probably worth adding

#define PAGE_MASK		(~(PAGE_SIZE-1))

and using ~PAGE_MASK here?  Or defining PAGE_OFFSET?  Though that would conflict
with the kernel's use of PAGE_OFFSET.


