On 2017-11-16 05:31, Konrad Rzeszutek Wilk wrote:
> On Mon, Nov 13, 2017 at 06:05:59PM +0800, Quan Xu wrote:
>> From: Yang Zhang <yang.zhang.wz@xxxxxxxxx>
>>
>> Some latency-intensive workloads have seen an obvious performance
>> drop when running inside a VM. The main reason is that the overhead
>> is amplified when running inside a VM. The largest cost I have seen
>> is in the idle path.
>
> Meaning a VMEXIT b/c it is a 'halt' operation? And then going
> back into the guest (VMRESUME) takes time. And hence your latency
> gets all whacked b/c of this?
Konrad, I can't follow 'b/c' here, sorry.
> So if I understand - you want to use your _full_ timeslice (of the
> guest) without ever (or as much as possible) going into the
> hypervisor?
As much as possible.
> Which means, in effect, you don't care about power-saving or CPUfreq
> savings, you just want to eat the full CPU for a snack?
Actually, we care about power-saving. The poll duration is
self-tuning; otherwise it would be almost the same as 'halt=poll'.
Also, we have always sent out the CPU usage of the netperf/ctxsw
benchmarks along with the results. We got much better performance
with only a limited increase in CPU usage.
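To make the self-tuning concrete, here is a minimal user-space sketch of one plausible grow/shrink policy (all names and constants below are hypothetical illustrations, not taken from the patch): widen the window when a poll was cut short by useful work, and shrink it when the poll expired unused, so a mostly idle vCPU decays back toward a plain halt and keeps the power cost bounded.

```c
/*
 * Hypothetical sketch of a self-tuning poll window; names and
 * constants are illustrative, not from the actual patch.
 */
#define POLL_NS_MAX   500000UL  /* assumed upper bound on busy-wait */
#define POLL_NS_START 10000UL   /* assumed window after first hit */

static unsigned long poll_ns;   /* current poll window, 0 = plain halt */

static void poll_tune(int work_arrived_during_poll)
{
    if (work_arrived_during_poll) {
        /* Poll paid off: double the window (starting from a floor). */
        poll_ns = poll_ns ? poll_ns << 1 : POLL_NS_START;
        if (poll_ns > POLL_NS_MAX)
            poll_ns = POLL_NS_MAX;
    } else {
        /* Poll wasted cycles: halve it back toward plain halt. */
        poll_ns >>= 1;
    }
}
```

A repeatedly-busy vCPU quickly saturates at the cap, while a quiet one falls back to zero within a handful of idle entries, which is where the "limited promotion of CPU usage" comes from.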
>> This patch introduces a new mechanism to poll for a while before
>> entering the idle state. If a reschedule is needed during the poll,
>> then we don't need to go through the heavy overhead path.
> Schedule of what? The guest or the host?
A reschedule by the guest scheduler; it is the guest.
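For reference, the mechanism being discussed can be sketched roughly like this (a user-space mock-up under stated assumptions, not the patch itself; `idle_enter`, `arch_safe_halt`, the flag, and the iteration bound are all stand-ins): before the guest idle path issues the halt that would trap to the hypervisor, it spins for a bounded window checking whether the guest scheduler already has runnable work, and if so returns to reschedule without ever leaving the guest.

```c
#include <stdbool.h>

/* Stand-in for the guest's need_resched() check. */
static volatile bool need_resched_flag;

/* Records whether we reached the heavy halt path in this mock-up. */
static bool guest_halt_called;

static void arch_safe_halt(void)
{
    /* In a real guest this would be HLT -> VMEXIT; here we only record it. */
    guest_halt_called = true;
}

static void idle_enter(unsigned long poll_iterations)
{
    guest_halt_called = false;

    /* Busy-wait for a (self-tuned) window instead of halting at once. */
    for (unsigned long i = 0; i < poll_iterations; i++) {
        if (need_resched_flag)
            return;  /* guest reschedule: skip the heavy halt path */
    }

    /* Nothing became runnable during the poll: fall back to halt. */
    arch_safe_halt();
}
```

The point of the sketch is the control flow: the reschedule check and the early return happen entirely inside the guest, so the VMEXIT/VMRESUME round trip is only paid when the poll window expires empty.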
Quan
Alibaba Cloud
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization