Avi Kivity wrote:
Christian Ehrhardt wrote:
On x86, we use slots_lock to protect memory slots. When we change
the global memory configuration, we set a bit in vcpu->requests, and
send an IPI to all cpus that are currently in guest mode for our
guest. This forces the cpu back to host mode. On the next entry,
vcpu_run notices vcpu->requests has the bit set and reloads the mmu
configuration. Of course, all this may be overkill for s390.
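For readers who do not know the x86 path, here is a minimal sketch of the entry-side half of that pattern. The names (struct vcpu, handle_requests(), reload_mmu()) are simplified stand-ins for illustration, not the real arch/x86/kvm code:

/* Minimal sketch of the mechanism described above; simplified names,
 * not the real KVM code.  Writers set a bit in vcpu->requests, and the
 * vcpu thread checks it before every re-entry into guest mode. */
#include <linux/bitops.h>

#define REQ_MMU_RELOAD	0		/* bit index in vcpu->requests */

struct vcpu {
	unsigned long requests;		/* bits set by other threads */
	int cpu;			/* physical cpu the vcpu last ran on */
	int guest_mode;			/* non-zero while executing guest code */
};

static void reload_mmu(struct vcpu *vcpu)
{
	/* hypothetical: rebuild the memslot/mmu-derived state */
}

/* Called from vcpu_run() before re-entering the guest. */
static void handle_requests(struct vcpu *vcpu)
{
	if (test_and_clear_bit(REQ_MMU_RELOAD, &vcpu->requests))
		reload_mmu(vcpu);
}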
I thought about implementing it with slots_lock, vcpu->requests, etc.,
but it really looks like overkill for s390.
We could make (some of) it common code, so it won't look so bad.
There's value in having all kvm ports do things similarly; though of
course we shouldn't force the solution when it isn't really needed.
vcpu->requests is useful whenever we modify global VM state that needs
to be seen by all vcpus in host mode; see kvm_reload_remote_mmus().
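The requester side (roughly what a helper like kvm_reload_remote_mmus() boils down to) looks like the sketch below; again the names are made up for illustration and this is not the code in virt/kvm/kvm_main.c. It reuses struct vcpu from the previous sketch:

/* Broadcast side: set the request bit on every vcpu and IPI the ones
 * currently in guest mode, so they drop back to host mode and pick up
 * the request before the next guest entry. */
#include <linux/smp.h>

#define MAX_VCPUS	64		/* arbitrary for this sketch */

struct vm {
	struct vcpu *vcpus[MAX_VCPUS];
	int nr_vcpus;
};

static void reload_remote_mmus(struct vm *vm)
{
	int i;

	for (i = 0; i < vm->nr_vcpus; i++) {
		struct vcpu *v = vm->vcpus[i];

		set_bit(REQ_MMU_RELOAD, &v->requests);
		if (v->guest_mode)
			smp_send_reschedule(v->cpu);	/* IPI forces an exit to host mode */
	}
}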
Yeah, I read that code after your first hint in that thread, and I agree
that merging some of this into common code might be good.
But in my opinion not now, for this bugfix patch (the intention is just
to prevent a user from being able to crash the host via vcpu create,
set mem, and vcpu run, in that order).
It might be a good point to further streamline this once we use the same
userspace code, but I don't think it makes sense yet.
At least today we can assume that we only have one memslot. Therefore
a set_memslot with already created vcpus will still not interfere
with running vcpus (they can't run without a memslot, and since we have
only one, they won't be running).
Anyway, the code is prepared to "meet" running vcpus, because that
might be different in the future. To prevent the livelock issue I changed
the code to use mutex_trylock, and in case I can't get the lock I
explicitly let the vcpu exit from guest mode.
Why not do it unconditionally?
Hmm, I might have written that in a misleading way; it is actually a
loop until it gets the lock:
while (!trylock)
    kick vcpu out of guest
    schedule
As far as I can see, there is no reason to kick out guests where I got
the lock cleanly.
Especially as I expect the vcpus not to be running in the common case,
as I explained above (they can't run without a memslot + we only have
one => no vcpu will run).
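A rough C sketch of that loop, only to make the intent concrete; kick_vcpu_out_of_guest() and lock_vcpu_for_update() are stand-ins for whatever the patch actually uses, not function names from the patch:

/* Sketch of the update path: take each vcpu mutex with trylock, and
 * while that fails kick the vcpu out of guest mode and yield, so a
 * running vcpu cannot block the memslot update forever. */
#include <linux/kvm_host.h>

static void kick_vcpu_out_of_guest(struct kvm_vcpu *vcpu)
{
	/* arch-specific stand-in for however the vcpu is forced out */
}

static void lock_vcpu_for_update(struct kvm_vcpu *vcpu)
{
	while (!mutex_trylock(&vcpu->mutex)) {
		kick_vcpu_out_of_guest(vcpu);	/* force an exit from guest mode */
		schedule();			/* let the vcpu thread actually leave the guest */
	}
	/* vcpu->mutex is held here; the memslot update can proceed */
}

In the common case described above (no memslot yet, so no vcpu can be running), mutex_trylock() succeeds on the first try and the loop body is never executed.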
--
Grüsse / regards,
Christian Ehrhardt
IBM Linux Technology Center, Open Virtualization