On Wed, Nov 10, 2021 at 4:03 PM David Matlack <dmatlack@xxxxxxxxxx> wrote:
>
> From: Sean Christopherson <seanjc@xxxxxxxxxx>
>
> Assert that the GPA for a memslot backed by a hugepage is aligned to
> the hugepage size and fix perf_test_util accordingly. Lack of GPA
> alignment prevents KVM from backing the guest with hugepages, e.g. x86's
> write-protection of hugepages when dirty logging is activated is
> otherwise not exercised.
>
> Add a comment explaining that guest_page_size is for non-huge pages to
> try and avoid confusion about what it actually tracks.
>
> Cc: Ben Gardon <bgardon@xxxxxxxxxx>
> Cc: Yanan Wang <wangyanan55@xxxxxxxxxx>
> Cc: Andrew Jones <drjones@xxxxxxxxxx>
> Cc: Peter Xu <peterx@xxxxxxxxxx>
> Cc: Aaron Lewis <aaronlewis@xxxxxxxxxx>
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> [Used get_backing_src_pagesz() to determine alignment dynamically.]
> Signed-off-by: David Matlack <dmatlack@xxxxxxxxxx>
> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c       | 2 ++
>  tools/testing/selftests/kvm/lib/perf_test_util.c | 7 ++++++-
>  2 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 07f37456bba0..1f6a01c33dce 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -875,6 +875,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>         if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
>                 alignment = max(backing_src_pagesz, alignment);
>
> +       ASSERT_EQ(guest_paddr, align_up(guest_paddr, backing_src_pagesz));
> +
>         /* Add enough memory to align up if necessary */
>         if (alignment > 1)
>                 region->mmap_size += alignment;
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index 6b8d5020dc54..a015f267d945 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -55,11 +55,16 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>  {
>         struct kvm_vm *vm;
>         uint64_t guest_num_pages;
> +       uint64_t backing_src_pagesz = get_backing_src_pagesz(backing_src);
>         int i;
>
>         pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>
>         perf_test_args.host_page_size = getpagesize();
> +       /*
> +        * Snapshot the non-huge page size. This is used by the guest code to
> +        * access/dirty pages at the logging granularity.
> +        */
>         perf_test_args.guest_page_size = vm_guest_mode_params[mode].page_size;

Is this comment correct? I wouldn't expect the guest page size to
determine the host dirty logging granularity.

>
>         guest_num_pages = vm_adjust_num_guest_pages(mode,
> @@ -92,7 +97,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>
>         guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
>                               perf_test_args.guest_page_size;
> -       guest_test_phys_mem = align_down(guest_test_phys_mem, perf_test_args.host_page_size);
> +       guest_test_phys_mem = align_down(guest_test_phys_mem, backing_src_pagesz);
>  #ifdef __s390x__
>         /* Align to 1M (segment size) */
>         guest_test_phys_mem = align_down(guest_test_phys_mem, 1 << 20);
> --
> 2.34.0.rc1.387.gb447b232ab-goog
>
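
For anyone skimming the alignment math, here is a quick standalone sketch of
what the new ASSERT_EQ() and the align_down() change effectively check. The
align_up()/align_down() helpers and the example values below are my own
illustrative reimplementations (assuming power-of-two alignments), not the
selftest library's definitions:

    /* Illustrative only; not part of the patch. */
    #include <assert.h>
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t align_down(uint64_t v, uint64_t align)
    {
            /* Clear the low bits; assumes align is a power of two. */
            return v & ~(align - 1);
    }

    static uint64_t align_up(uint64_t v, uint64_t align)
    {
            return align_down(v + align - 1, align);
    }

    int main(void)
    {
            uint64_t backing_src_pagesz = 2ULL << 20;       /* e.g. 2M THP */
            uint64_t gpa = 0x3ffff7000ULL;                  /* only 4K-aligned */

            /* A 4K-aligned GPA fails the hugepage-alignment assertion. */
            printf("aligned to 2M? %d\n", gpa == align_up(gpa, backing_src_pagesz));

            /* Aligning down to the backing source page size satisfies it. */
            gpa = align_down(gpa, backing_src_pagesz);
            assert(gpa == align_up(gpa, backing_src_pagesz));
            printf("gpa = 0x%" PRIx64 "\n", gpa);
            return 0;
    }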