Sean Christopherson <seanjc@xxxxxxxxxx> writes:
On Tue, Jan 17, 2023, Ricardo Koller wrote:
On Tue, Nov 15, 2022 at 05:32:57PM +0000, Colton Lewis wrote:
> @@ -44,6 +47,18 @@ static struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
> /* Store all samples in a flat array so they can be easily sorted later. */
> uint64_t latency_samples[SAMPLE_CAPACITY];
>
> +static uint64_t perf_test_timer_read(void)
> +{
> +#if defined(__aarch64__)
> + return timer_get_cntct(VIRTUAL);
> +#elif defined(__x86_64__)
> + return rdtsc();
> +#else
> +#warning __func__ " is not implemented for this architecture, will return 0"
> + return 0;
> +#endif
> +}
I would prefer to put the guest-side timer helpers into common code, e.g. as
guest_read_system_counter(), replacing system_counter_offset_test.c's one-off
version.
Will do.
> /*
> * Continuously write to the first 8 bytes of each page in the
> * specified region.
> @@ -59,6 +74,10 @@ void perf_test_guest_code(uint32_t vcpu_idx)
> int i;
> struct guest_random_state rand_state =
> new_guest_random_state(pta->random_seed + vcpu_idx);
> + uint64_t *latency_samples_offset = latency_samples + SAMPLES_PER_VCPU * vcpu_idx;
"offset" is confusing because the system counter (TSC in x86) has an
offset for
the guest-perceived value. Maybe just "latencies"?
Will do.
> + uint64_t count_before;
> + uint64_t count_after;
Maybe s/count/time? Yeah, it's technically wrong to call it "time", but "count"
is too generic.
I could say "cycles".
> + uint32_t maybe_sample;
>
> gva = vcpu_args->gva;
> pages = vcpu_args->pages;
> @@ -75,10 +94,21 @@ void perf_test_guest_code(uint32_t vcpu_idx)
>
> addr = gva + (page * pta->guest_page_size);
>
> - if (guest_random_u32(&rand_state) % 100 < pta->write_percent)
> + if (guest_random_u32(&rand_state) % 100 < pta->write_percent) {
> + count_before = perf_test_timer_read();
> *(uint64_t *)addr = 0x0123456789ABCDEF;
> - else
> + count_after = perf_test_timer_read();
> + } else {
> + count_before = perf_test_timer_read();
> READ_ONCE(*(uint64_t *)addr);
> + count_after = perf_test_timer_read();
"count_before ... ACCESS count_after" could be moved to some macro,
e.g.,:
t = MEASURE(READ_ONCE(*(uint64_t *)addr));
Even better, capture the read vs. write in a local variable to self-document the
use of the RNG, then the motivation for reading the system counter inside the
if/else-statements goes away. That way we don't need to come up with a name
that documents what MEASURE() measures.
	write = guest_random_u32(&rand_state) % 100 < args->write_percent;

	time_before = guest_system_counter_read();
	if (write)
		*(uint64_t *)addr = 0x0123456789ABCDEF;
	else
		READ_ONCE(*(uint64_t *)addr);
	time_after = guest_system_counter_read();
Couldn't timing before and after the if statement produce bad measurements? We
might be including a branch mispredict in the memory access latency, and that
could happen a lot because the choice is random, so there's no way for the CPU
to predict it.
> + }
> +
> + maybe_sample = guest_random_u32(&rand_state) % (i + 1);
No need to generate a random number for iterations that always sample. And I
think it will make the code easier to follow if there is a single write to the
array. The derivation of the index is what's interesting and different, we
should use code to highlight that.
	/*
	 * Always sample early iterations to ensure at least the
	 * number of requested samples is collected. Once the
	 * array has been filled, <here is a comment from Colton
	 * briefly explaining the math>.
	 */
	if (i < SAMPLES_PER_VCPU)
		idx = i;
	else
		idx = guest_random_u32(&rand_state) % (i + 1);

	if (idx < SAMPLES_PER_VCPU)
		latencies[idx] = time_after - time_before;
Will do.
> + if (i < SAMPLES_PER_VCPU)
Would it make sense to let the user configure the number of samples? Seems easy
enough and would let the user ultimately decide how much memory to burn on
samples.
Theoretically users may wish to tweak the accuracy vs. memory use tradeoff. It
seemed like a shaky value proposition to me because of diminishing returns to
increased accuracy, but I will include an option if you insist.
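A hypothetical sketch of the argument parsing such an option would need; the function name and the defaulting/rejection behavior are illustrative, not from the patch:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Parse a user-supplied sample count (e.g. from a getopt 's' flag),
 * falling back to the built-in default on a missing or bogus value. */
static uint64_t parse_samples_arg(const char *arg, uint64_t dflt)
{
	unsigned long long val;
	char *end;

	if (!arg)
		return dflt;

	val = strtoull(arg, &end, 0);
	if (*end != '\0' || val == 0) {
		fprintf(stderr, "Invalid sample count '%s', using %llu\n",
			arg, (unsigned long long)dflt);
		return dflt;
	}
	return val;
}
```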
> + latency_samples_offset[i] = count_after - count_before;
> + else if (maybe_sample < SAMPLES_PER_VCPU)
> + latency_samples_offset[maybe_sample] = count_after - count_before;
I would prefer these reservoir sampling details to be in a helper, e.g.:

	reservoir_sample_record(t, i);
Heh, I vote to open code the behavior. I dislike fancy names that hide
relatively simple logic. IMO, readers won't care how the modulo math provides
an even distribution, just that it does, and that the early iterations always
sample to ensure every bucket is filled.
In this case, I can pretty much guarantee that I'd end up spending more time
digging into what "reservoir" means than I would to understand the basic flow.
I agree. The logic is simple enough that more names would only confuse what's
really happening.
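For reference, the open-coded scheme agreed on above is standard reservoir sampling (Algorithm R). A standalone sketch with hypothetical names, with the random draw passed in so the index logic is easy to follow: iteration i replaces a random slot with probability NSAMPLES / (i + 1), which leaves every iteration equally likely to survive in the array.

```c
#include <stdint.h>

#define NSAMPLES 4

/* Early iterations always land in the array so every bucket is filled;
 * later iterations overwrite a random bucket with decaying probability. */
static void record_latency(uint64_t *latencies, uint64_t i, uint64_t value,
			   uint32_t rnd)
{
	uint64_t idx;

	if (i < NSAMPLES)
		idx = i;
	else
		idx = rnd % (i + 1);

	if (idx < NSAMPLES)
		latencies[idx] = value;
}
```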