Avi Kivity wrote:
ehrhardt@xxxxxxxxxxxxxxxxxx wrote:
From: Christian Ehrhardt <ehrhardt@xxxxxxxxxxxxxxxxxx>
This patch relocates the variables kvm-s390 uses to track guest memory address/size.
As discussed, dropping the variables at struct kvm_arch level allows us to use the
common vcpu->request based mechanism to reload guest memory if it changes, e.g.
via set_memory_region.
The kick mechanism introduced in this series is used to ensure that running vcpus
leave guest state to catch the update.
 rerun_vcpu:
+	if (vcpu->requests)
+		if (test_and_clear_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests))
+			kvm_s390_vcpu_set_mem(vcpu);
+
 	/* verify, that memory has been registered */
-	if (!vcpu->kvm->arch.guest_memsize) {
+	if (!vcpu->arch.sie_block->gmslm) {
 		vcpu_put(vcpu);
+		VCPU_EVENT(vcpu, 3, "%s", "no memory registered to run vcpu");
 		return -EINVAL;
 	}
x86 uses a double check: first we check vcpu->requests outside atomic
context, then we enter the critical section and check again for
signals and vcpu->requests.
This allows us to (a) do naughty things in vcpu->requests handlers, and
(b) keep the critical section short.
Does this apply here?
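(For reference, the x86 ordering is roughly the following; this is a simplified
sketch of the pattern, not the literal vcpu_enter_guest() code:

	/* outside the critical section: request handlers may do heavy work */
	if (vcpu->requests) {
		if (test_and_clear_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests))
			kvm_mmu_unload(vcpu);	/* example of a "naughty" handler */
	}

	preempt_disable();
	local_irq_disable();

	/* inside the critical section: recheck and back out if anything changed */
	if (vcpu->requests || signal_pending(current)) {
		local_irq_enable();
		preempt_enable();
		goto retry;	/* redo the outer handling before guest entry */
	}

	/* ... enter guest mode ... */
)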
The patch already keeps the critical inner loop clear of extra code.
The check for vcpu->requests that I added is only reached via either a
heavyweight (userspace) exit/reentry or the explicit kickout of a vcpu
to this label. Therefore it fulfills both (a) and (b) as you mentioned them above.
Additionally, the s390 reload is very rare as well as fast, so it
would not be an issue either way.
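To illustrate, the run loop roughly looks like this (a sketch of the structure
only, not the exact s390 code):

rerun_vcpu:
	/* heavyweight path: only reached on userspace re-entry or an
	 * explicit kick to this label, so requests are handled here */
	if (vcpu->requests)
		if (test_and_clear_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests))
			kvm_s390_vcpu_set_mem(vcpu);

	/* lightweight inner loop: enter SIE and handle intercepts,
	 * with no additional request checks on the hot path */
	do {
		__vcpu_run(vcpu);
		rc = kvm_handle_sie_intercept(vcpu);
	} while (!signal_pending(current) && !rc);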
-	/* update sie control blocks, and unlock all vcpus */
+	/* request update of sie control block for all available vcpus */
 	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
 		if (kvm->vcpus[i]) {
-			kvm->vcpus[i]->arch.sie_block->gmsor =
-				kvm->arch.guest_origin;
-			kvm->vcpus[i]->arch.sie_block->gmslm =
-				kvm->arch.guest_memsize +
-				kvm->arch.guest_origin +
-				VIRTIODESCSPACE - 1ul;
-			mutex_unlock(&kvm->vcpus[i]->mutex);
+			set_bit(KVM_REQ_MMU_RELOAD, &kvm->vcpus[i]->requests);
+			kvm_s390_inject_sigp_stop(kvm->vcpus[i],
+						  ACTION_RELOADVCPU_ON_STOP);
 		}
 	}
There already exists a loop which does this, see
make_all_cpus_request(). It uses an IPI (Marcelo, can't it use the
reschedule interrupt?). It has a couple of optimizations -- if the
request is already set, it skips the IPI, and it avoids the IPI for
vcpus out of guest mode. Maybe it could fit s390 too.
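(For reference, the generic loop does roughly the following; a simplified
sketch, the real virt/kvm/kvm_main.c version also deals with the cpumask
allocation and locking:

	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
		vcpu = kvm->vcpus[i];
		if (!vcpu)
			continue;
		if (test_and_set_bit(req, &vcpu->requests))
			continue;		/* already pending, skip the IPI */
		cpu = vcpu->cpu;
		if (cpu != -1 && cpu != me)
			cpumask_set_cpu(cpu, cpus);	/* only kick vcpus in guest mode */
	}
	smp_call_function_many(cpus, ack_flush, NULL, 1);	/* the IPI */
)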
I assume that the IPI on x86 is an implicit consequence of the
smp_call_function_many function, but I think this doesn't work that way
for us. The kick implied by that call would be received, but would not reach
the code that checks vcpu->requests. I could add that behaviour, but that
could make our normal interrupt handling much slower. Therefore I don't
want to call that function, but on the other hand I like the "skip if
the request is already set" functionality and am thinking about adding that to
my loop.
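Something along these lines, i.e. replacing the plain set_bit with
test_and_set_bit so that an already pending request skips the extra kick
(untested sketch):

	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
		if (!kvm->vcpus[i])
			continue;
		/* skip the kick if a reload request is already pending */
		if (test_and_set_bit(KVM_REQ_MMU_RELOAD,
				     &kvm->vcpus[i]->requests))
			continue;
		kvm_s390_inject_sigp_stop(kvm->vcpus[i],
					  ACTION_RELOADVCPU_ON_STOP);
	}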
--
Grüsse / regards, Christian Ehrhardt
IBM Linux Technology Center, Open Virtualization