On 12/01/2010 02:07 PM, Peter Zijlstra wrote:
> On Wed, 2010-12-01 at 12:26 -0500, Rik van Riel wrote:
>> On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
>>
>> The pause loop exiting & directed yield patches I am working on
>> preserve inter-vcpu fairness by round robining among the vcpus
>> inside one KVM guest.
>
> I don't necessarily think that's enough.
>
> Suppose you've got 4 vcpus, one is holding a lock and 3 are spinning.
> They'll end up all three donating some time to the 4th.
>
> The only way to make that fair again is if due to future contention the
> 4th cpu donates an equal amount of time back to the resp. cpus it got
> time from. Guest lock patterns and host scheduling don't provide this
> guarantee.
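For concreteness, the round-robin target selection described above could
look roughly like the sketch below. This is a hypothetical illustration,
not actual KVM code: pick_yield_target, last_boosted, and the runnable[]
array are invented names for the example. The idea is just that each
directed yield starts scanning from the vcpu boosted last time, so over
repeated yields the donated time rotates among the guest's vcpus.

```c
#include <stdbool.h>

/*
 * Hypothetical sketch of round-robin directed-yield target selection
 * inside one guest. runnable[i] marks vcpu i as preempted-but-runnable
 * (a potential lock holder); last_boosted is the vcpu index the guest
 * last yielded to, so successive yields rotate through the candidates.
 */
static int pick_yield_target(int yielding, int last_boosted,
                             const bool *runnable, int nvcpus)
{
        int i;

        /* Scan circularly, starting just after the last boosted vcpu. */
        for (i = 1; i <= nvcpus; i++) {
                int cand = (last_boosted + i) % nvcpus;

                if (cand == yielding)   /* don't yield to ourselves */
                        continue;
                if (runnable[cand])     /* first runnable candidate wins */
                        return cand;
        }
        return -1;                      /* nobody to donate time to */
}
```

With 4 vcpus where vcpu 1 is spinning and yields, repeated calls boost
vcpus 2, 3, 0, 2, ... in turn rather than always hitting the same one.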
You have no guarantees when running virtualized; guest
CPU time can be taken away by another guest just as
easily as by another VCPU.
Even if we equalized the amount of CPU time each VCPU
ends up getting across some time interval, that is no
guarantee they get useful work done, or that the time
is fairly divided among the _user processes_ running
inside the guest.
The VCPU could be running something lock-happy when
it temporarily gives up the CPU, and get the extra CPU
time back while running something userspace-intensive.
In between, it may well have scheduled to another task
(allowing it to get more CPU time).
I'm not convinced the kind of fairness you suggest is
possible or useful.
--
All rights reversed