On 6/19/2019 1:33 PM, Liran Alon wrote:
On 19 Jun 2019, at 23:30, Maran Wilson <maran.wilson@xxxxxxxxxx> wrote:
On 6/19/2019 9:21 AM, Liran Alon wrote:
Commit d98f26073beb ("target/i386: kvm: add VMX migration blocker")
added a migration blocker for vCPUs exposing Intel VMX, because QEMU
doesn't yet contain code to support migration of nested virtualization
workloads.

However, that commit missed deleting the migration blocker in case vCPU
init fails, similar to what is already done for invtsc_mig_blocker. This
commit fixes that issue.
Fixes: d98f26073beb ("target/i386: kvm: add VMX migration blocker")
Signed-off-by: Liran Alon <liran.alon@xxxxxxxxxx>
---
target/i386/kvm.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index 3b29ce5c0d08..7aa7914a498c 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -940,7 +940,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
r = kvm_arch_set_tsc_khz(cs);
if (r < 0) {
- goto fail;
+ return r;
}
/* vcpu's TSC frequency is either specified by user, or following
@@ -1295,7 +1295,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
if (local_err) {
error_report_err(local_err);
error_free(invtsc_mig_blocker);
- return r;
+ goto fail2;
}
}
}
@@ -1346,6 +1346,9 @@ int kvm_arch_init_vcpu(CPUState *cs)
fail:
migrate_del_blocker(invtsc_mig_blocker);
+ fail2:
+ migrate_del_blocker(vmx_mig_blocker);
+
return r;
}
At the risk of being a bit pedantic...

Your changes don't introduce this problem, but they do make it worse: since [vmx|invtsc]_mig_blocker are both global in scope, isn't it possible you end up deleting one or both valid blockers that were created by a previous invocation of kvm_arch_init_vcpu()?

Seems to me that you would need something like an additional pair of local boolean variables named created_[vmx|invtsc]_mig_blocker and condition the calls to migrate_del_blocker() accordingly. On the positive side, that would simplify some of the logic around when and if it's ok to jump to "fail" (and you wouldn't need the "fail2").
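Something along these lines is what I had in mind (a rough sketch only, assuming it sits inside kvm_arch_init_vcpu(); the flag names, the exact condition and the error message are made up):

    bool created_vmx_mig_blocker = false;
    bool created_invtsc_mig_blocker = false;

    if (!vmx_mig_blocker && (env->features[FEAT_1_ECX] & CPUID_EXT_VMX)) {
        error_setg(&vmx_mig_blocker, "Nested VMX is not migratable");
        r = migrate_add_blocker(vmx_mig_blocker, &local_err);
        if (local_err) {
            error_report_err(local_err);
            error_free(vmx_mig_blocker);
            vmx_mig_blocker = NULL;
            return r;
        }
        created_vmx_mig_blocker = true;   /* this vCPU installed it */
    }

    /* ... likewise set created_invtsc_mig_blocker where that blocker is added ... */

 fail:
    /* Only undo the blockers that *this* invocation installed. */
    if (created_invtsc_mig_blocker) {
        migrate_del_blocker(invtsc_mig_blocker);
        error_free(invtsc_mig_blocker);
        invtsc_mig_blocker = NULL;
    }
    if (created_vmx_mig_blocker) {
        migrate_del_blocker(vmx_mig_blocker);
        error_free(vmx_mig_blocker);
        vmx_mig_blocker = NULL;
    }
    return r;

With that, a single "fail" label would be enough.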
Thanks,
-Maran
I actually thought about this as well when I encountered this issue.
In general, one can argue that every vCPU should install its own migration blocker on init (if required) and remove its own migration blocker on deletion (on vCPU destroy).
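Just to illustrate the idea (a very rough sketch, not a proposal; the per-vCPU field, the helper and the use of a destroy hook are made up for the example):

int kvm_arch_init_vcpu(CPUState *cs)
{
    X86CPU *cpu = X86_CPU(cs);
    Error *local_err = NULL;
    int r;

    /* ... */

    if (cpu_requires_vmx_blocker(cpu)) {            /* made-up helper */
        error_setg(&cpu->vmx_mig_blocker,           /* made-up per-vCPU field */
                   "Nested VMX is not migratable");
        r = migrate_add_blocker(cpu->vmx_mig_blocker, &local_err);
        if (local_err) {
            error_report_err(local_err);
            error_free(cpu->vmx_mig_blocker);
            cpu->vmx_mig_blocker = NULL;
            return r;
        }
    }

    /* ... */
}

int kvm_arch_destroy_vcpu(CPUState *cs)
{
    X86CPU *cpu = X86_CPU(cs);

    if (cpu->vmx_mig_blocker) {
        migrate_del_blocker(cpu->vmx_mig_blocker);
        error_free(cpu->vmx_mig_blocker);
        cpu->vmx_mig_blocker = NULL;
    }
    return 0;
}

This way each vCPU only ever removes the blocker it added itself.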
99% of the time, none of this should matter, as all vCPUs of a given VM have exactly the same properties.
The example I was thinking about is a VM that is created with a bunch of
vCPUs -- all of which require installation of the blocker(s). Then at
some point in the future, a failed CPU hotplug attempt wipes out all the
pre-existing blockers and leaves the VM exposed.
But I agree that the problem wasn't introduced by this patch series and
so it is reasonable to argue that you shouldn't have to fix it here. As
such:
Reviewed-by: Maran Wilson <maran.wilson@xxxxxxxxxx>
Thanks,
-Maran
Anyway, I decided that this is entirely unrelated to this patch series, so if it should be addressed further, let it be in a separate patch series.
QEMU has too many issues to fix them all at once :P. I need to filter.
-Liran