David Matlack <dmatlack@xxxxxxxxxx> writes:
On Wed, Aug 17, 2022 at 09:41:45PM +0000, Colton Lewis wrote:
Randomize which pages are written vs read using the random number
arrays. Change the variable wr_fract and associated function calls to
write_percent, which operates as a percentage from 0 to 100 where X
means each page has an X% chance of being written. Change the -f
argument to -w to reflect the new variable semantics. Keep the same
default of 100 percent writes.
Doesn't the new option cause like a 1000x slowdown in "Dirty memory
time"? I don't think we should merge this until that is understood and
addressed (and it should be at least called out here so that reviewers
can be made aware).
I'm guessing you got that from my internally posted tests. This option
itself does not cause the slowdown. If this option is set to 0% or 100%
(the default), there is no slowdown at all. The slowdown I measured was
at 50%, probably because an equal chance of a read or a write on each
access makes the branch impossible to predict. This is a good thing:
it's much more realistic than predictably alternating reads and writes.
I can see this would be worth mentioning.
@@ -433,10 +434,11 @@ int main(int argc, char *argv[])
case 'b':
guest_percpu_mem_size = parse_size(optarg);
break;
- case 'f':
- p.wr_fract = atoi(optarg);
- TEST_ASSERT(p.wr_fract >= 1,
- "Write fraction cannot be less than one");
+ case 'w':
+ perf_test_args.write_percent = atoi(optarg);
+ TEST_ASSERT(perf_test_args.write_percent >= 0
+ && perf_test_args.write_percent <= 100,
+ "Write percentage must be between 0 and 100");
perf_test_create_vm() overwrites this with 100. Did you mean
p.write_percent?
I did.
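For readers following along, the agreed fix would presumably amount to using p rather than perf_test_args in the new case (sketch of the corrected hunk, not the posted patch):

```c
case 'w':
	p.write_percent = atoi(optarg);
	TEST_ASSERT(p.write_percent >= 0
		    && p.write_percent <= 100,
		    "Write percentage must be between 0 and 100");
	break;
```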