I have already discussed this a bit with Nadav, but I am hoping someone else has other ideas/clues/suggestions/comments.

With recent versions of the kernel (the last I tried is 3.0-rc5 with the nVMX patches already merged), my L1 guest always hangs when I start L2.

My setup: the host, L1 and L2 are all FC15, with the host running 3.0-rc5. When L1 is up and running, I start L2 from L1. Within a minute or two, both L1 and L2 hang. However, if I run tracing on the host, I see:

...
 qemu-kvm-19756 [013] 153774.856178: kvm_exit: reason APIC_ACCESS rip 0xffffffff81025098 info 1380 0
 qemu-kvm-19756 [013] 153774.856189: kvm_exit: reason VMREAD rip 0xffffffffa00d5127 info 0 0
 qemu-kvm-19756 [013] 153774.856191: kvm_exit: reason VMREAD rip 0xffffffffa00d5127 info 0 0
...

My point being that I only see kvm_exit messages but no kvm_entry. Does this mean that the VCPUs are somehow stuck in L2?

Anyway, since this setup was running fine for me on older kernels, and I couldn't identify any significant changes in nVMX, I sifted through the other KVM changes and found this:

--
commit 1aa8ceef0312a6aae7dd863a120a55f1637b361d
Author: Nikola Ciprich <extmaillist@xxxxxxxxxxx>
Date:   Wed Mar 9 23:36:51 2011 +0100

    KVM: fix kvmclock regression due to missing clock update

    commit 387b9f97750444728962b236987fbe8ee8cc4f8c moved
    kvm_request_guest_time_update(vcpu), breaking 32bit SMP guests using
    kvm-clock. Fix this by moving (new) clock update function to proper place.
    Signed-off-by: Nikola Ciprich <nikola.ciprich@xxxxxxxxxxx>
    Acked-by: Zachary Amsden <zamsden@xxxxxxxxxx>
    Signed-off-by: Avi Kivity <avi@xxxxxxxxxx>

index 01f08a6..f1e4025 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2127,8 +2127,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		if (check_tsc_unstable()) {
 			kvm_x86_ops->adjust_tsc_offset(vcpu, -tsc_delta);
 			vcpu->arch.tsc_catchup = 1;
-			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
 		}
+		kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
 		if (vcpu->cpu != cpu)
 			kvm_migrate_timers(vcpu);
 		vcpu->cpu = cpu;
--

If I revert this change, my L1/L2 guests run fine. Of course, this just hides the bug, because on my machine check_tsc_unstable() returns false.

I found out from Nadav that when KVM decides to run L2, it writes vmcs01->tsc_offset + vmcs12->tsc_offset to the active TSC_OFFSET, which seems right. But I verified that if, instead, I write just vmcs01->tsc_offset to TSC_OFFSET in prepare_vmcs02(), I don't see the bug anymore.

Not sure where to go from here. I would appreciate it if anyone has any ideas.

Bandan