> -----Original Message-----
> From: kvm-owner@xxxxxxxxxxxxxxx [mailto:kvm-owner@xxxxxxxxxxxxxxx]
> On Behalf Of Gleb Natapov
> Sent: Sunday, April 28, 2013 10:34 PM
> To: Ren, Yongjie
> Cc: Jan Kiszka; Marcelo Tosatti; kvm; Nakajima, Jun
> Subject: Re: [PATCH] KVM: nVMX: Skip PF interception check when queuing
> during nested run
>
> On Sun, Apr 28, 2013 at 02:30:38PM +0000, Ren, Yongjie wrote:
> > > -----Original Message-----
> > > From: kvm-owner@xxxxxxxxxxxxxxx [mailto:kvm-owner@xxxxxxxxxxxxxxx]
> > > On Behalf Of Jan Kiszka
> > > Sent: Sunday, April 28, 2013 3:25 PM
> > > To: Gleb Natapov; Marcelo Tosatti
> > > Cc: kvm; Nakajima, Jun; Ren, Yongjie
> > > Subject: [PATCH] KVM: nVMX: Skip PF interception check when queuing
> > > during nested run
> > >
> > > From: Jan Kiszka <jan.kiszka@xxxxxxxxxxx>
> > >
> > > While a nested run is pending, vmx_queue_exception is only called to
> > > requeue exceptions that were previously picked up via
> > > vmx_cancel_injection. Therefore, we must not check for PF interception
> > > by L1, possibly causing a bogus nested vmexit.
> > >
> > > Signed-off-by: Jan Kiszka <jan.kiszka@xxxxxxxxxxx>
> > > ---
> > >
> > > This and the KVM_REQ_IMMEDIATE_EXIT fix allows me to boot an L2 Linux
> > > without problems. Yongjie, please check if it resolves your issue(s) as
> > > well.
> > >
> > The two patches can fix my issue. When both of them are applied, I can
> > have more tests against next branch.
> They are both applied now.
>
There's a bug in Jan's patch "Rework request for immediate exit". When I
said the two patches fixed my issue, I meant his original two patches.
"Check KVM_REQ_IMMEDIATE_EXIT after enable_irq_window" works for me, but
the "Rework request for immediate exit" patch is buggy. In L1 I get the
following error (and also some NMIs in L2).
(BTW, I'll be on holiday this week, so I may not be able to track this
issue until I'm back.)
[ 167.248015] sending NMI to all CPUs:
[ 167.252260] NMI backtrace for cpu 1
[ 167.253007] CPU 1
[ 167.253007] Pid: 0, comm: swapper/1 Tainted: GF 3.8.5 #1 Bochs Bochs
[ 167.253007] RIP: 0010:[<ffffffff81045606>]  [<ffffffff81045606>] native_safe_halt+0x6/0x10
[ 167.253007] RSP: 0018:ffff880290d51ed8  EFLAGS: 00000246
[ 167.253007] RAX: 0000000000000000 RBX: ffff880290d50010 RCX: 0140000000000000
[ 167.253007] RDX: 0000000000000000 RSI: 0140000000000000 RDI: 0000000000000086
[ 167.253007] RBP: ffff880290d51ed8 R08: 0000000000000000 R09: 0000000000000000
[ 167.253007] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000001
[ 167.253007] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 167.253007] FS:  0000000000000000(0000) GS:ffff88029fc40000(0000) knlGS:0000000000000000
[ 167.253007] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 167.253007] CR2: ffffffffff600400 CR3: 000000028f12d000 CR4: 00000000000427e0
[ 167.253007] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 167.253007] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 167.253007] Process swapper/1 (pid: 0, threadinfo ffff880290d50000, task ffff880290d49740)
[ 167.253007] Stack:
[ 167.253007]  ffff880290d51ef8 ffffffff8101d5cf ffff880290d50010 ffffffff81ce0680
[ 167.253007]  ffff880290d51f28 ffffffff8101ce99 ffff880290d51f18 1de4884102b62f69
[ 167.253007]  0000000000000000 0000000000000000 ffff880290d51f48 ffffffff81643595
[ 167.253007] Call Trace:
[ 167.253007]  [<ffffffff8101d5cf>] default_idle+0x4f/0x1a0
[ 167.253007]  [<ffffffff8101ce99>] cpu_idle+0xd9/0x120
[ 167.253007]  [<ffffffff81643595>] start_secondary+0x24c/0x24e
[ 167.253007] Code: 00 00 00 00 00 55 48 89 e5 fa c9 c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb c9 c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb f4 <c9> c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 f4 c9 c3 66 0f 1f 84
[ 167.248015] NMI backtrace for cpu 3
[ 167.248015] CPU 3
[ 167.248015] Pid: 0, comm: swapper/3 Tainted: GF 3.8.5 #1 Bochs Bochs
[ 167.248015] RIP: 0010:[<ffffffff810454ca>]  [<ffffffff810454ca>] native_write_msr_safe+0xa/0x10
.......
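
For reference, the guard described in Jan's quoted commit message amounts
to something like the sketch below. This is only a rough illustration,
assuming the vmx_queue_exception()/vmx_cancel_injection() naming and a
nested_run_pending flag as mentioned in the quoted text; it is not the
actual patch hunk, and the surrounding injection path is elided.

/*
 * Sketch only (not the real kvm/vmx.c code): while a nested run is
 * pending, this function is only re-queuing an exception that was
 * previously picked up by vmx_cancel_injection(), so the "is this #PF
 * intercepted by L1?" check must be skipped to avoid a bogus nested
 * vmexit.
 */
static void vmx_queue_exception(struct kvm_vcpu *vcpu, unsigned nr,
				bool has_error_code, u32 error_code,
				bool reinject)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	/* Only consider reflecting the #PF to L1 when no nested run is pending. */
	if (nr == PF_VECTOR && is_guest_mode(vcpu) &&
	    !vmx->nested.nested_run_pending && nested_pf_handled(vcpu))
		return;

	/* ... normal injection via the VM-entry interruption-info field ... */
}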