Avi Kivity wrote:
ehrhardt@xxxxxxxxxxxxxxxxxx wrote:
From: Carsten Otte <cotte@xxxxxxxxxx>
This patch fixes a correctness issue in the kvm backend for s390.
If virtual cpus are created before the corresponding memory slot
is registered, we need to update the sie control blocks for those
virtual cpus. In order to do that, we use vcpu->mutex to lock out
kvm_run and friends. This way we can ensure a consistent update of
the memory for the entire smp configuration.
@@ -657,6 +657,8 @@ int kvm_arch_set_memory_region(struct kv
 				struct kvm_memory_slot old,
 				int user_alloc)
 {
+	int i;
+
 	/* A few sanity checks. We can have exactly one memory slot which has
 	   to start at guest virtual zero and which has to be located at a
 	   page boundary in userland and which has to end at a page boundary.
@@ -676,13 +678,27 @@ int kvm_arch_set_memory_region(struct kv
 	if (mem->memory_size & (PAGE_SIZE - 1))
 		return -EINVAL;
 
+	/* lock all vcpus */
+	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
+		if (kvm->vcpus[i])
+			mutex_lock(&kvm->vcpus[i]->mutex);
+	}
+
Can't that livelock? Nothing requires a vcpu to ever exit, and if the
cpu it's running on has no other load and no interrupts, it could
remain in guest mode indefinitely, and then the ioctl will hang,
waiting for something to happen.
Yes, it could wait indefinitely - good spot.
On x86, we use slots_lock to protect memory slots. When we change the
global memory configuration, we set a bit in vcpu->requests, and send
an IPI to all cpus that are currently in guest mode for our guest.
This forces the cpu back to host mode. On the next entry, vcpu_run
notices vcpu->requests has the bit set and reloads the mmu
configuration. Of course, all this may be overkill for s390.
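Roughly, the pattern is the following (a simplified sketch, not the
exact kernel code; update_memslots() here is an illustrative helper):

	/* Writer side: update the slots, then kick every vcpu. */
	down_write(&kvm->slots_lock);
	update_memslots(kvm, mem);              /* illustrative helper */
	up_write(&kvm->slots_lock);

	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
		struct kvm_vcpu *vcpu = kvm->vcpus[i];
		if (!vcpu)
			continue;
		set_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests);
		if (vcpu->cpu != -1)
			smp_send_reschedule(vcpu->cpu); /* IPI forces a guest exit */
	}

	/* vcpu side, in the run loop before re-entering guest mode: */
	if (test_and_clear_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests))
		kvm_mmu_reload(vcpu);           /* pick up the new memory map */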
I thought about implementing it with slots_lock, vcpu->requests, etc.,
but it really looks like overkill for s390.
At least today we can assume that we only have one memslot. Therefore a
set_memslot call with already-created vcpus will still not interfere
with running vcpus (they can't run without a memslot, and since we have
only one, they won't be running).
Anyway, the code is prepared to "meet" running vcpus, because that might
change in the future. To prevent the livelock issue I changed the code
to use mutex_trylock, and in case I can't get the lock I explicitly make
the vcpu exit from guest mode.
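Roughly, the reworked loop looks like this (a sketch only;
kvm_s390_force_exit() is a hypothetical stand-in for the s390-specific
code that forces a vcpu out of SIE guest mode):

	/* Lock all vcpus without livelocking: if a vcpu holds its mutex
	 * because it is sitting in kvm_run, kick it out of guest mode so
	 * that kvm_run returns and releases the mutex. */
	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
		struct kvm_vcpu *vcpu = kvm->vcpus[i];
		if (!vcpu)
			continue;
		while (!mutex_trylock(&vcpu->mutex)) {
			kvm_s390_force_exit(vcpu); /* hypothetical helper */
			cpu_relax();               /* don't spin hard on the lock */
		}
	}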
--
Grüsse / regards,
Christian Ehrhardt
IBM Linux Technology Center, Open Virtualization