Hi Marc,
I will take more scenarios into account in the later tests. Thanks for
the advice.
Thanks,
Jingyi
On 11/24/2020 7:02 PM, Marc Zyngier wrote:
On 2020-11-13 07:54, Jingyi Wang wrote:
Hi all,
Sorry for the delay. I have been testing the TWED feature performance
lately. We selected unixbench as the benchmark because some of its
items (fstime/fsbuffer/fsdisk) are lock-intensive. We ran unixbench on
a 4-vCPU VM and bound every two vCPUs to one pCPU. A fixed TWED value
was used, and here is the result.
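For reference, the "bind every two vCPUs to one pCPU" setup can be
reproduced along the lines of the sketch below. This is only an
illustration, not the actual test scripts: it pins plain pthread
workers with pthread_setaffinity_np(), whereas the real test pins
QEMU vCPU threads (e.g. via taskset or virsh vcpupin), and
vcpu_worker() is just a hypothetical placeholder for the guest load.

/* Pin 4 "vCPU" threads pairwise onto 2 physical CPUs. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static void *vcpu_worker(void *arg)
{
	/* Placeholder for the per-vCPU workload. */
	return NULL;
}

static void bind_thread_to_pcpu(pthread_t thread, int pcpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(pcpu, &set);
	if (pthread_setaffinity_np(thread, sizeof(set), &set)) {
		fprintf(stderr, "failed to bind thread to pCPU%d\n", pcpu);
		exit(EXIT_FAILURE);
	}
}

int main(void)
{
	pthread_t vcpu[4];
	int i;

	for (i = 0; i < 4; i++) {
		if (pthread_create(&vcpu[i], NULL, vcpu_worker, NULL)) {
			fprintf(stderr, "pthread_create failed\n");
			exit(EXIT_FAILURE);
		}
		/* vCPU0/1 -> pCPU0, vCPU2/3 -> pCPU1 */
		bind_thread_to_pcpu(vcpu[i], i / 2);
	}
	for (i = 0; i < 4; i++)
		pthread_join(vcpu[i], NULL);
	return 0;
}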
How representative is this?
TBH, I only know of two real-world configurations: one where
the vCPUs are pinned to different physical CPUs (in which case
your patch has absolutely no effect as long as there are no
concurrent tasks), and one where the system is oversubscribed
and the scheduler moves things around as it sees fit,
depending on the load.
Having two vCPUs pinned per CPU feels like a test that has been
picked to give the result you wanted. I'd like to see the full
picture, including the case that matters for current use cases.
I'm especially interested in the cases where the system is
oversubscribed, because TWED is definitely going to screw with
the scheduler latency.
Thanks,
M.