On Thu, Sep 5, 2024 at 9:42 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Thu, Sep 05, 2024, James Houghton wrote:
> > On Fri, Aug 9, 2024 at 12:43 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > >
> > > Create mmu_stress_tests's VM with the correct number of extra pages needed
> > > to map all of memory in the guest.  The bug hasn't been noticed before as
> > > the test currently runs only on x86, which maps guest memory with 1GiB
> > > pages, i.e. doesn't need much memory in the guest for page tables.
> > >
> > > Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> > > ---
> > >  tools/testing/selftests/kvm/mmu_stress_test.c | 8 +++++++-
> > >  1 file changed, 7 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
> > > index 847da23ec1b1..5467b12f5903 100644
> > > --- a/tools/testing/selftests/kvm/mmu_stress_test.c
> > > +++ b/tools/testing/selftests/kvm/mmu_stress_test.c
> > > @@ -209,7 +209,13 @@ int main(int argc, char *argv[])
> > >         vcpus = malloc(nr_vcpus * sizeof(*vcpus));
> > >         TEST_ASSERT(vcpus, "Failed to allocate vCPU array");
> > >
> > > -       vm = vm_create_with_vcpus(nr_vcpus, guest_code, vcpus);
> > > +       vm = __vm_create_with_vcpus(VM_SHAPE_DEFAULT, nr_vcpus,
> > > +#ifdef __x86_64__
> > > +                                   max_mem / SZ_1G,
> > > +#else
> > > +                                   max_mem / vm_guest_mode_params[VM_MODE_DEFAULT].page_size,
> > > +#endif
> > > +                                   guest_code, vcpus);
> >
> > Hmm... I'm trying to square this change with the logic in
> > vm_nr_pages_required().
>
> vm_nr_pages_required() mostly operates on the number of pages that are needed to
> setup the VM, e.g. for vCPU stacks.  The one calculation that guesstimates the
> number of bytes needed, ucall_nr_pages_required(), does the same thing this code
> does: divide the number of total bytes by bytes-per-page.

Oh, yes, you're right. It's only accounting for the page tables for
the 512 pages for memslot 0. Sorry for the noise. Feel free to add:

Reviewed-by: James Houghton <jthoughton@xxxxxxxxxx>
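
As a rough, self-contained sketch of the sizing logic discussed above
(the helper name and the SZ_1G define are illustrative only, not the
selftests API): the extra-page count is a guesstimate obtained by
dividing the total bytes to be mapped by the mapping granularity, i.e.
1GiB pages on x86 and the default guest page size elsewhere.

/*
 * Illustrative sketch only, not the selftests API: estimate the number
 * of extra pages a VM needs so the guest can map max_mem bytes.  x86
 * maps guest memory with 1GiB pages, so it needs very little memory
 * for page tables; other architectures map with the default guest
 * page size.
 */
#include <stdint.h>

#define SZ_1G	(1ULL << 30)	/* stand-in for the kernel's SZ_1G */

static uint64_t extra_pages_needed(uint64_t max_mem, uint64_t guest_page_size)
{
#ifdef __x86_64__
	return max_mem / SZ_1G;
#else
	return max_mem / guest_page_size;
#endif
}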