On Wed, Aug 10, 2022 at 04:49:23PM -0700, David Matlack wrote:
> On Wed, Aug 10, 2022 at 05:58:30PM +0000, Colton Lewis wrote:
> > diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> > index 3c7b93349fef..9838d1ad9166 100644
> > --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> > +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> > @@ -52,6 +52,9 @@ void perf_test_guest_code(uint32_t vcpu_idx)
> >  	struct perf_test_vcpu_args *vcpu_args = &pta->vcpu_args[vcpu_idx];
> >  	uint64_t gva;
> >  	uint64_t pages;
> > +	uint64_t addr;
> > +	bool random_access = pta->random_access;
> > +	bool populated = false;
> >  	int i;
> >  
> >  	gva = vcpu_args->gva;
> > @@ -62,7 +65,11 @@ void perf_test_guest_code(uint32_t vcpu_idx)
> >  
> >  	while (true) {
> >  		for (i = 0; i < pages; i++) {
> > -			uint64_t addr = gva + (i * pta->guest_page_size);
> > +			if (populated && random_access)
> 
> Skipping the populate phase makes sense to ensure everything is
> populated I guess. What was your rationale?

That's it. I wanted to ensure everything was populated. Random population
won't hit every page, but those unpopulated pages might be hit on
subsequent iterations. I originally let population be random too, and I
suspect this was driving an odd behavior I noticed early in testing,
where later iterations were much faster than earlier ones.

> Either way I think this policy should be driven by the test, rather than
> hard-coded in perf_test_guest_code(). i.e. Move the call
> perf_test_set_random_access() in dirty_log_perf_test.c to just after the
> population phase.

That makes sense. Will do.
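
For concreteness, the placement I have in mind in dirty_log_perf_test.c's
run_test() would look roughly like this. This is only a sketch: I'm
assuming perf_test_set_random_access() takes (vm, bool) and that the
command-line request is carried in a p->random_access test param; the
final patch may differ, and the existing code in between is elided.

	/*
	 * Keep the populate pass sequential so every page of guest
	 * memory is faulted in before measurement starts.
	 */
	perf_test_set_random_access(vm, false);

	/* ... existing populate pass: start the vCPUs and wait for them
	 * to finish touching all of guest memory ... */

	/*
	 * Only after everything is populated, switch to the access
	 * pattern requested on the command line for the dirty logging
	 * iterations.
	 */
	perf_test_set_random_access(vm, p->random_access);

	/* ... existing dirty logging iterations ... */

That keeps the "always populate everything" policy in the test itself
rather than baked into perf_test_guest_code().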