On 6/19/2019 9:21 AM, Liran Alon wrote:
Commit d98f26073beb ("target/i386: kvm: add VMX migration blocker")
added migration blocker for vCPU exposed with Intel VMX because QEMU
doesn't yet contain code to support migration of nested virtualization
workloads.
However, that commit missed adding deletion of the migration blocker in
case vCPU init failed, similar to what is done for invtsc_mig_blocker.
This commit fixes that issue.
Fixes: d98f26073beb ("target/i386: kvm: add VMX migration blocker")
Signed-off-by: Liran Alon <liran.alon@xxxxxxxxxx>
---
target/i386/kvm.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index 3b29ce5c0d08..7aa7914a498c 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -940,7 +940,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
     r = kvm_arch_set_tsc_khz(cs);
     if (r < 0) {
-        goto fail;
+        return r;
     }
     /* vcpu's TSC frequency is either specified by user, or following
@@ -1295,7 +1295,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
             if (local_err) {
                 error_report_err(local_err);
                 error_free(invtsc_mig_blocker);
-                return r;
+                goto fail2;
             }
         }
     }
@@ -1346,6 +1346,9 @@ int kvm_arch_init_vcpu(CPUState *cs)
  fail:
     migrate_del_blocker(invtsc_mig_blocker);
+ fail2:
+    migrate_del_blocker(vmx_mig_blocker);
+
At the risk of being a bit pedantic...
Your changes don't introduce this problem, but they do make it worse --
Since [vmx|invtsc]_mig_blocker are both global in scope, isn't it
possible you end up deleting one or both valid blockers that were
created by a previous invocation of kvm_arch_init_vcpu()? Seems to me
that you would need something like an additional pair of local boolean
variables named created_[vmx|invtsc]_mig_blocker and condition the calls
to migrate_del_blocker() accordingly. On the positive side, that would
simplify some of the logic around when and if it's ok to jump to "fail"
(and you wouldn't need the "fail2").
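To make the suggestion concrete, here is a minimal sketch of that shape. The names mirror target/i386/kvm.c, but migrate_add_blocker()/migrate_del_blocker() are stubs that just count live blockers, and the failure is simulated by a parameter; only the created_vmx_mig_blocker pattern itself is the point.

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for QEMU's migration-blocker machinery (stubs for
 * illustration only): they merely count how many blockers are live. */
typedef struct Error Error;
static Error *vmx_mig_blocker;
static int live_blockers;

static void migrate_add_blocker(Error *reason)
{
    (void)reason;
    live_blockers++;
}

static void migrate_del_blocker(Error *reason)
{
    (void)reason;
    live_blockers--;
}

/* Sketch of the suggested shape of kvm_arch_init_vcpu(): a local flag
 * records whether *this* invocation created the global blocker, and the
 * fail path deletes it only in that case, so a blocker installed by an
 * earlier vCPU's init is left untouched. */
int kvm_arch_init_vcpu_sketch(bool simulate_late_failure)
{
    bool created_vmx_mig_blocker = false;
    int r;

    if (!vmx_mig_blocker) {
        /* Only the first vCPU to get here installs the blocker. */
        vmx_mig_blocker = (Error *)"VMX in use";
        migrate_add_blocker(vmx_mig_blocker);
        created_vmx_mig_blocker = true;
    }

    if (simulate_late_failure) {
        r = -1;
        goto fail;
    }

    return 0;

fail:
    if (created_vmx_mig_blocker) {
        migrate_del_blocker(vmx_mig_blocker);
        vmx_mig_blocker = NULL;
    }
    return r;
}
```

With this shape, a second vCPU whose init fails after the first succeeded does not remove the first vCPU's blocker, and the same conditional-delete pattern would apply to invtsc_mig_blocker, removing the need for a separate fail2 label.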
Thanks,
-Maran
     return r;
 }