Hi Colton,

On Tue, Nov 15, 2022 at 05:32:56PM +0000, Colton Lewis wrote:
> Allocate additional space for latency samples. This has been separated
> out to call attention to the additional VM memory allocation. The test
> runs out of physical pages without the additional allocation. The 100
> multiple for pages was determined by trial and error. A more
> well-reasoned calculation would be preferable.
>
> Signed-off-by: Colton Lewis <coltonlewis@xxxxxxxxxx>
> ---
>  tools/testing/selftests/kvm/lib/perf_test_util.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index 137be359b09e..a48904b64e19 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -38,6 +38,12 @@ static bool all_vcpu_threads_running;
>
>  static struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
>
> +#define SAMPLES_PER_VCPU 1000
> +#define SAMPLE_CAPACITY (SAMPLES_PER_VCPU * KVM_MAX_VCPUS)
> +
> +/* Store all samples in a flat array so they can be easily sorted later. */
> +uint64_t latency_samples[SAMPLE_CAPACITY];
> +
>  /*
>   * Continuously write to the first 8 bytes of each page in the
>   * specified region.
> @@ -122,7 +128,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
>  {
>  	struct perf_test_args *pta = &perf_test_args;
>  	struct kvm_vm *vm;
> -	uint64_t guest_num_pages, slot0_pages = 0;
> +	uint64_t guest_num_pages, sample_pages, slot0_pages = 0;
>  	uint64_t backing_src_pagesz = get_backing_src_pagesz(backing_src);
>  	uint64_t region_end_gfn;
>  	int i;
> @@ -161,7 +167,9 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
>  	 * The memory is also added to memslot 0, but that's a benign side
>  	 * effect as KVM allows aliasing HVAs in meslots.
>  	 */
> -	vm = __vm_create_with_vcpus(mode, nr_vcpus, slot0_pages + guest_num_pages,
> +	sample_pages = 100 * sizeof(latency_samples) / pta->guest_page_size;

I don't think there's any need to guess. The number of accesses is
vcpu_args->pages (one access per guest page), so all memory could be
allocated dynamically to hold "vcpu_args->pages * sample_sz".

> +	vm = __vm_create_with_vcpus(mode, nr_vcpus,
> +				    slot0_pages + guest_num_pages + sample_pages,
> +				    perf_test_guest_code, vcpus);
>
>  	pta->vm = vm;
> --
> 2.38.1.431.g37b22c650d-goog
>

Thanks,
Ricardo