On 03/29/2012 03:28 PM, Avi Kivity wrote:
> On 03/28/2012 08:21 PM, Raghavendra K T wrote:
>>> Looks like a good baseline on which to build the KVM implementation.
>>> We might need some handshake to prevent interference on the host
>>> side with the PLE code.
>> I think I still missed some point in Avi's comment. I agree that PLE
>> may be interfering with the above patches (resulting in a smaller
>> performance advantage), but we have not seen performance degradation
>> with the patches in earlier benchmarks. [ theoretically the patch has
>> a slight advantage over PLE in that it at least knows who should run
>> next ]
> The advantage grows with the vcpu count and overcommit ratio. If you
> have N vcpus and M:1 overcommit, PLE has to guess from N/M queued
> vcpus while your patch knows exactly who to wake up.
Yes. I agree.
>> So the TODOs on my list for this are:
>> 1. More analysis of performance on a PLE machine.
>> 2. Seeing how to implement a handshake to increase performance (in
>> case the PLE + patch combination has a slight negative effect).
> I can think of two options:
I really like the ideas below. Thanks for that!
> - from the PLE handler, don't wake up a vcpu that is sleeping because
>   it is waiting for a kick
How about adding another pass at the beginning of kvm_vcpu_on_spin() to
check whether some vcpu has already been kicked? That would essentially
amount to yield_to(kicked_vcpu). IMO this is also worth trying.

Will try the above ideas soon.
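
To make that concrete, here is a rough sketch of kvm_vcpu_on_spin()
combining both ideas. The per-vcpu pv_halted and pv_kicked flags are
placeholders that the halt/kick paths of the patches would have to
maintain, kvm_vcpu_yield_to() stands for a helper doing the pid/task
lookup and yield_to() of the existing loop, and the last_boosted_vcpu
round-robin of the real code is omitted for brevity:

void kvm_vcpu_on_spin(struct kvm_vcpu *me)
{
        struct kvm *kvm = me->kvm;
        struct kvm_vcpu *vcpu;
        int i;

        /*
         * New first pass: a vcpu that was already kicked by a lock
         * holder is the best candidate to run next, so try a directed
         * yield to it before guessing among the rest.
         */
        kvm_for_each_vcpu(i, vcpu, kvm) {
                if (vcpu == me)
                        continue;
                if (vcpu->pv_kicked && kvm_vcpu_yield_to(vcpu))
                        return;
        }

        /* Otherwise fall back to the existing guessing loop ... */
        kvm_for_each_vcpu(i, vcpu, kvm) {
                if (vcpu == me)
                        continue;
                /*
                 * ... skipping a vcpu that halted itself to wait for
                 * a kick; boosting it only burns cycles until it
                 * halts again.
                 */
                if (vcpu->pv_halted)
                        continue;
                if (waitqueue_active(&vcpu->wq))
                        continue;
                if (kvm_vcpu_yield_to(vcpu))
                        break;
        }
}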
> - look at other sources of pause loops (I guess smp_call_function()
>   is the significant source) and adjust them to use the same
>   mechanism, and ask the host to disable PLE exiting.
>
> This can be done incrementally later.
Yes, this can wait a bit.
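
Just to note down the shape of the host-side knob for later: on VMX it
is little more than clearing the PLE bit in the secondary exec controls
once the guest has declared (via some hypercall or CPUID bit, still to
be defined) that it routes all of its pause loops through the kick
mechanism. vmx_disable_ple() is an invented name; the control bits and
ple_gap are the existing ones in arch/x86/kvm/vmx.c:

/*
 * Must be called with this vcpu's VMCS loaded, i.e. on the vcpu
 * thread, e.g. from the handler of the (yet to be defined) hypercall.
 */
static void vmx_disable_ple(struct kvm_vcpu *vcpu)
{
        u32 ctrl;

        if (!ple_gap)   /* PLE already disabled host-wide */
                return;

        ctrl = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
        ctrl &= ~SECONDARY_EXEC_PAUSE_LOOP_EXITING;
        vmcs_write32(SECONDARY_VM_EXEC_CONTROL, ctrl);
}

SVM would need the analogous change to its pause filter count.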