On 2024/2/22 5:34 PM, WANG Xuerui wrote:
On 2/17/24 11:15, maobibo wrote:
On 2024/2/15 6:25 PM, WANG Xuerui wrote:
On 2/15/24 18:11, WANG Xuerui wrote:
Sorry for the late reply (and Happy Chinese New Year), and thanks
for providing micro-benchmark numbers! But it seems the more
comprehensive CoreMark results were omitted (they're also absent in
v3)? While the
Of course the benchmark suite should be UnixBench instead of
CoreMark. Lesson: don't multi-task code reviews, especially not after
consuming beer -- a cup of coffee won't fully cancel the influence. ;-)
Where is the rule about benchmark choices like UnixBench/CoreMark for
IPI improvements?
Sorry for the late reply. The rules are mostly unwritten, but in general
you can think of the preference among benchmark suites as a matter of
"effectiveness" -- the closer a suite is to some real workload in the
wild, the better. Micro-benchmarks are okay for illustrating the point,
but without demonstrating the impact on realistic workloads, a change
could be "useless" in practice, or even degrade various performance
metrics (be that throughput or latency or anything else that matters in
a given case), yet get accepted without notice.
Yes, a micro-benchmark cannot represent the real world; however, that
does not mean UnixBench/CoreMark should be run. You need to point out
the negative effect of the code, or a possible real-world scenario that
may benefit, and suggest a benchmark that is actually sensitive to IPIs,
rather than blindly naming UnixBench/CoreMark.
Regards
Bibo Mao