[PATCH] Don't expose hypervisor bit when running nested SVM

Hyper-V refuses to run in hypervisor mode when it finds the hypervisor bit
set, because it then assumes it is running as a guest.

While the proper way of not setting the hypervisor bit would be an option to
the -cpu parameter, this is reasonably sane for now. Let's deal with the -cpu
way when we get to -cpu host.
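
As an illustration (not part of the patch): the "hypervisor present" bit is
CPUID leaf 1, ECX bit 31, which is exactly the bit the hunk below stops
advertising. A guest such as Hyper-V can detect it roughly as in this
minimal standalone sketch using GCC-style inline assembly; it is not code
taken from QEMU:

    #include <stdint.h>
    #include <stdio.h>

    /* Execute CPUID for the given leaf (sub-leaf 0). */
    static void cpuid(uint32_t leaf, uint32_t *eax, uint32_t *ebx,
                      uint32_t *ecx, uint32_t *edx)
    {
        __asm__ volatile ("cpuid"
                          : "=a" (*eax), "=b" (*ebx),
                            "=c" (*ecx), "=d" (*edx)
                          : "a" (leaf), "c" (0));
    }

    int main(void)
    {
        uint32_t eax, ebx, ecx, edx;

        cpuid(1, &eax, &ebx, &ecx, &edx);
        /* ECX bit 31 is the "hypervisor present" bit; Hyper-V refuses
         * to start its own hypervisor when it sees this bit set. */
        printf("hypervisor bit: %s\n",
               (ecx & (1u << 31)) ? "set" : "clear");
        return 0;
    }
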

Signed-off-by: Alexander Graf <agraf@xxxxxxx>
---
 target-i386/helper.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/target-i386/helper.c b/target-i386/helper.c
index 2c5af3c..7da0e24 100644
--- a/target-i386/helper.c
+++ b/target-i386/helper.c
@@ -1513,7 +1513,7 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
         *edx = env->cpuid_features;
 
         /* "Hypervisor present" bit required for Microsoft SVVP */
-        if (kvm_enabled())
+        if (kvm_enabled() && !kvm_nested)
             *ecx |= (1 << 31);
         break;
     case 2:
-- 
1.6.0.2

