On 01/05/2011 06:44 PM, Anthony Liguori wrote:
On 01/04/2011 03:39 PM, Marcelo Tosatti wrote:
On Tue, Jan 04, 2011 at 08:17:26AM -0600, Anthony Liguori wrote:
On 01/03/2011 04:01 AM, Avi Kivity wrote:
On 01/03/2011 11:46 AM, Jan Kiszka wrote:
Hi,
at least in kvm mode, the qemu_fair_mutex seems to have lost its
function of balancing qemu_global_mutex access between the
io-thread and
vcpus. It's now only taken by the latter, isn't it?
This and the fact that qemu-kvm does not use this kind of lock
made me
wonder what its role is and if it is still relevant in practice. I'd
like to unify the execution models of qemu-kvm and qemu, and this
lock
is the most obvious difference (there are surely more subtle ones as
well...).
IIRC it was used for tcg, which has a problem that kvm doesn't
have: a tcg vcpu needs to hold qemu_mutex when it runs, which
means there will always be contention on qemu_mutex. In the
absence of fairness, the tcg thread could dominate qemu_mutex and
starve the iothread.
No, it's actually the opposite IIRC.
TCG relies on the following behavior. A guest VCPU runs until 1)
it encounters an HLT instruction, or 2) an event occurs that forces
TCG execution to break.
(2) really means that the TCG thread receives a signal. Usually,
this is the periodic timer signal.
When the TCG thread breaks out of its execution loop, it needs to let
the IO thread run for at least
one iteration. Coordinating the execution of the IO thread such
that it's guaranteed to run at least once and then having it drop
the qemu mutex long enough for the TCG thread to acquire it is the
purpose of the qemu_fair_mutex.
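
In code terms, the coordination described above might look roughly like
the sketch below. This is plain pthreads, not the actual QEMU code:
global_lock, fair_lock, exit_request and the iothread_lock()/
iothread_unlock() helpers are illustrative stand-ins for
qemu_global_mutex, qemu_fair_mutex and the real kick path.

#include <pthread.h>
#include <signal.h>

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for qemu_global_mutex */
static pthread_mutex_t fair_lock   = PTHREAD_MUTEX_INITIALIZER; /* stands in for qemu_fair_mutex */
static volatile sig_atomic_t exit_request;                      /* the kick flag */

/* TCG vcpu thread: runs guest code with global_lock held. */
static void *tcg_cpu_thread_fn(void *arg)
{
    (void)arg;
    for (;;) {
        /* Funnel the acquisition through fair_lock: if the iothread is
         * already queued there, it wins global_lock before we can
         * re-take it. */
        pthread_mutex_lock(&fair_lock);
        pthread_mutex_lock(&global_lock);
        pthread_mutex_unlock(&fair_lock);

        while (!exit_request) {
            /* translate and execute guest code until HLT or a signal */
        }
        exit_request = 0;

        /* Drop the big lock so the iothread gets one full iteration. */
        pthread_mutex_unlock(&global_lock);
    }
    return NULL;
}

/* IO thread: announce interest via fair_lock, kick the vcpu out of its
 * loop, and take the big lock once the vcpu drops it. */
static void iothread_lock(void)
{
    pthread_mutex_lock(&fair_lock);
    exit_request = 1;            /* in QEMU this is the cpu_interrupt/SIG_IPI path */
    pthread_mutex_lock(&global_lock);
    pthread_mutex_unlock(&fair_lock);
}

static void iothread_unlock(void)
{
    pthread_mutex_unlock(&global_lock);
}

Because the vcpu has to go back through fair_lock before it can re-take
global_lock, the iothread is guaranteed at least one full iteration
before TCG resumes.
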
It's the vcpu threads that starve the IO thread.
I'm not sure if this is a difference in semantics or if we're not
understanding each other.
I think, the latter.
With TCG, the VCPU thread will dominate the qemu_mutex and cause the
IO thread to contend heavily on it.
But the IO thread can always force TCG to exit its loop (and does so
when leaving select()). So the TCG thread may keep the IO thread
hungry, but it never "starves" it.
With a pure qemu_mutex_acquire(), tcg does starve out the iothread.
SIG_IPI/cpu_interrupt and qemu_fair_mutex were introduced to solve this
starvation; kvm doesn't require them.
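
For reference, the kick referred to here is essentially a signal whose
only job is to make the vcpu thread notice the exit request and drop the
big lock. A minimal sketch, with SIG_IPI mapped to SIGUSR1 purely for
illustration and all names (exit_request, kick_tcg_thread, and so on)
hypothetical rather than the real implementation:

#include <pthread.h>
#include <signal.h>
#include <string.h>

#define SIG_IPI SIGUSR1          /* stand-in; the real signal number is an implementation detail */

static pthread_t tcg_thread;     /* set when the vcpu thread is created */
static volatile sig_atomic_t exit_request;

static void sig_ipi_handler(int signum)
{
    (void)signum;
    exit_request = 1;            /* polled by the TCG loop between translated blocks */
}

static void install_sig_ipi(void)
{
    struct sigaction sa;

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = sig_ipi_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIG_IPI, &sa, NULL);
}

/* Called by the iothread when it wants the global lock: the signal makes
 * the vcpu thread re-check exit_request promptly instead of running
 * translated code indefinitely. */
static void kick_tcg_thread(void)
{
    exit_request = 1;
    pthread_kill(tcg_thread, SIG_IPI);
}
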
OTOH, the TCG thread struggles to hand over execution to the IO thread
while making sure that it gets back the qemu_mutex in a timely
fashion. That's the tricky part. Avi's point is that by giving up
the lock at select time, we prevent starvation, but my concern is that
because the time between select intervals is unbounded (and
potentially very, very long), it's effectively starvation.
It isn't starvation, since the iothread will eventually drain its work.
Suppose we do hand over to tcg while the iothread still has pending
work. What now? tcg will not drop the lock voluntarily. When will the
iothread complete its work?
Do we immediately interrupt tcg again? If so, why did we give it the lock?
Do we sleep for a while and then reacquire the lock? For how long?
As far as we can tell, tcg may be spinning, waiting for a completion.
There's simply no scope for an iothread->tcg handoff. The situation is
not symmetric; it's more of a client/server relationship.
Zooming out for a bit, let's see what our options are:
- the current qemu_fair_mutex/SIG_IPI thing
- a priority lock, which simply encapsulates the current
qemu_fair_mutex: tcg is made to drop the lock whenever anyone else
attempts to acquire it (see the sketch after this list). No change in
behaviour, just coding.
- make tcg take the qemu lock only in helper code; make sure we only do
tcg things in the tcg thread (like playing with the tlb). No need for
special locking, but it will reduce tcg throughput somewhat; my estimate
is measurably but not significantly.
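
To make the second option concrete, a priority lock along those lines
could be sketched as follows. This is pthread-based and every name
(PrioLock, prio_lock, prio_lock_low, prio_lock_contended) is
hypothetical; it is just the idea, not a proposed patch:

#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    bool            held;
    int             waiters;     /* high-priority (non-tcg) threads queued */
} PrioLock;

#define PRIO_LOCK_INIT { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false, 0 }

/* High-priority side (iothread and friends). */
static void prio_lock(PrioLock *pl)
{
    pthread_mutex_lock(&pl->lock);
    pl->waiters++;
    while (pl->held) {
        pthread_cond_wait(&pl->cond, &pl->lock);
    }
    pl->waiters--;
    pl->held = true;
    pthread_mutex_unlock(&pl->lock);
}

/* Low-priority side (tcg): never takes the lock while anyone is queued. */
static void prio_lock_low(PrioLock *pl)
{
    pthread_mutex_lock(&pl->lock);
    while (pl->held || pl->waiters > 0) {
        pthread_cond_wait(&pl->cond, &pl->lock);
    }
    pl->held = true;
    pthread_mutex_unlock(&pl->lock);
}

static void prio_unlock(PrioLock *pl)
{
    pthread_mutex_lock(&pl->lock);
    pl->held = false;
    pthread_cond_broadcast(&pl->cond);
    pthread_mutex_unlock(&pl->lock);
}

/* tcg polls this between translation blocks; if it returns true it does
 * a prio_unlock()/prio_lock_low() pair, i.e. it drops the lock whenever
 * anyone else attempts to acquire it. */
static bool prio_lock_contended(PrioLock *pl)
{
    bool contended;

    pthread_mutex_lock(&pl->lock);
    contended = pl->waiters > 0;
    pthread_mutex_unlock(&pl->lock);
    return contended;
}

With something like this, the observable behaviour stays the same, but
the fairness policy lives in one place instead of being spread across
the vcpu loop and the iothread.
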
--
error compiling committee.c: too many arguments to function