Avi Kivity wrote:
> Christian Ehrhardt wrote:
>> Avi Kivity wrote:
>>> Christian Ehrhardt wrote:
>>>> The bad thing about vcpu->requests in this case is that I don't want
>>>> their async behaviour here: I want the memory slot updated in all
>>>> vcpus by the time the ioctl returns.
>>> You mean, the hardware can access the vcpu control block even when
>>> the vcpu is not running?
>> No, hardware only uses it while the vcpu is running, but I realized my
>> own mistake while changing the code to the vcpu->requests style.
>> For s390 I need to update kvm->arch and *all*
>> vcpu->arch->sie_block... data synchronously.
> Out of interest, can you explain why?
Sure, I'll try to give an example.
a) The whole guest has "one" memory slot representing all its memory.
Therefore some important values like guest_origin and guest_memsize (one
slot, so it's just addr+size) are kept at VM level in kvm->arch.
b) We fortunately have cool hardware support for "nearly everything"(tm)
:-) In this case, for example, we set the origin and the size (translated
into a "limit") in vcpu->arch.sie_block to get memory management
virtualization support.
c) We have other code, e.g. all our copy_from/to_guest stuff, that uses
the kvm->arch values (rough sketch below).
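
To make that concrete, here is a rough sketch of the data layout described
in a)-c). The field names (guest_origin, guest_memsize, gmsor, gmslm) and
the little translation helper are written from memory / made up purely for
illustration, so don't take them as the exact arch/s390 definitions:

struct kvm_arch {                       /* per-VM data, see a)           */
        unsigned long guest_origin;     /* userspace addr of the slot    */
        unsigned long guest_memsize;    /* size of that single slot      */
        /* ... */
};

struct kvm_s390_sie_block {             /* per-vcpu hw control block, b) */
        __u64 gmsor;                    /* guest memory origin           */
        __u64 gmslm;                    /* guest memory limit            */
        /* ... */
};

/* c) guest access helpers translate through the VM-level values, so
 * they have to stay consistent with every vcpu's sie_block copy.
 * Purely illustrative helper, not the real gaccess code: */
static inline void __user *guest_to_user(struct kvm_vcpu *vcpu,
                                         unsigned long gaddr)
{
        return (void __user *)(vcpu->kvm->arch.guest_origin + gaddr);
}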
If we allowed e.g. updates of a memslot (or, as the patch intends, hardened
the set_memory_region code against inconsiderate code changes in other
sections), it could happen that we set the kvm->arch information but not
the vcpu->arch->sie_block data until the next reentry. Now the concurrently
running vcpu could cause some kind of fault that involves a
copy_from/to_guest. That way we could end up with potentially invalid
handling of that fault (fault handling and the running guest would use
different userspace addresses until they are synced on the next vcpu
reentry). It's theoretical, I know, but it might cause issues that would
be hard to find.
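
Purely as a toy illustration of that window (ordinary userspace C with
hypothetical values, nothing to do with the real code path):

#include <stdio.h>

/* Two copies of the origin, standing in for kvm->arch (VM level) and
 * vcpu->arch.sie_block (per vcpu, only refreshed on reentry). */
struct vm_state   { unsigned long origin; };
struct vcpu_state { unsigned long origin; };

int main(void)
{
        struct vm_state   vm   = { .origin = 0x10000000UL };
        struct vcpu_state vcpu = { .origin = 0x10000000UL };
        unsigned long guest_addr = 0x1000;

        /* a set_memory_region-style update hits the VM-level copy first */
        vm.origin = 0x20000000UL;

        /* a fault handled in this window translates through vm.origin,
         * while the running guest still operates on vcpu.origin */
        printf("fault handler would use %#lx\n", vm.origin + guest_addr);
        printf("running guest still uses %#lx\n", vcpu.origin + guest_addr);
        return 0;
}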
On the other hand, for the long term I wanted to note that all our
copy_from/to_guest functions are per vcpu, so when we some day implement
updatable memslots, multiple memslots, or even just fill some "free
time"(tm) and streamline our code, we could redesign that origin/size
storage. This could be done in multiple ways: either store it per vcpu, or
protect the kvm->arch level variables with a lock. Both ways (and maybe
more) could then use the vcpu->requests based approach, but unfortunately
that is neither part of this patch nor of the current effort.
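
Just to sketch what that vcpu->requests variant could look like some day:
the request bit and both helpers below are made up for illustration, the
limit arithmetic is simplified, and only the set_bit/test_and_clear_bit
pattern follows the generic vcpu->requests mechanism.

#define KVM_REQ_MEMSLOT_SYNC    8       /* hypothetical request bit */

/* set_memory_region side: update kvm->arch, then ask every vcpu to
 * resync its sie_block before the next guest entry. */
static void kvm_s390_request_memslot_sync(struct kvm *kvm)
{
        int i;

        for (i = 0; i < KVM_MAX_VCPUS; i++)
                if (kvm->vcpus[i])
                        set_bit(KVM_REQ_MEMSLOT_SYNC,
                                &kvm->vcpus[i]->requests);
}

/* vcpu entry path: refresh the per-vcpu copy from kvm->arch before
 * reentering SIE. */
static void kvm_s390_sync_memslot(struct kvm_vcpu *vcpu)
{
        if (!test_and_clear_bit(KVM_REQ_MEMSLOT_SYNC, &vcpu->requests))
                return;

        vcpu->arch.sie_block->gmsor = vcpu->kvm->arch.guest_origin;
        vcpu->arch.sie_block->gmslm = vcpu->kvm->arch.guest_origin +
                                      vcpu->kvm->arch.guest_memsize - 1;
}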
The really good thing is that, because of our discussion about this, I now
have a fairly detailed idea of how I can improve that code beyond this
bugfix patch (let's hope not too far in the future).
>> That makes the "per vcpu resync on next entry" approach not feasible.
>> On the other hand I realized at the same moment that the livelock
>> should be no issue for us, because as I mentioned:
>> a) there is only one memslot
>> b) a vcpu can't run without a memslot
>> So I don't even need to kick out vcpus, they just should not be
>> running.
>> Until we ever support multiple slots, or updates of the existing
>> single slot, this should be ok, and so should the bugfix patch.
>> To avoid a theoretical deadlock in case other code changes (badly),
>> it should be fair to acquire the locks with mutex_trylock and
>> return -EINVAL if we did not get all of them.
> OK.
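
For reference, a minimal sketch of that trylock idea (simplified, not the
actual patch; the vcpu array walk and the error handling are only
illustrative):

/* try to take every vcpu mutex; if any vcpu is busy, back out */
static int kvm_s390_lock_all_vcpus(struct kvm *kvm)
{
        int i;

        for (i = 0; i < KVM_MAX_VCPUS; i++) {
                if (!kvm->vcpus[i])
                        continue;
                if (!mutex_trylock(&kvm->vcpus[i]->mutex))
                        goto out_unlock;
        }
        return 0;

out_unlock:
        while (--i >= 0)
                if (kvm->vcpus[i])
                        mutex_unlock(&kvm->vcpus[i]->mutex);
        return -EINVAL;
}

set_memory_region would then update kvm->arch and every sie_block while
all mutexes are held, and drop them again afterwards.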
--
Grüsse / regards,
Christian Ehrhardt
IBM Linux Technology Center, Open Virtualization