On 05/11/2014 4:07, Tiejun Chen wrote:
> >>> A bug was reported as follows: when running Windows 7 32-bit guests
> >>> on qemu-kvm, sometimes the guests run into a blue screen during
> >>> reboot. The problem was that a guest's RVI was not cleared when it
> >>> rebooted. This patch has fixed the problem.
> >>>
> >>> Signed-off-by: Wei Wang <wei.w.wang@xxxxxxxxx>
> >>> Signed-off-by: Yang Zhang <yang.z.zhang@xxxxxxxxx>
> >>> Tested-by: Rongrong Liu <rongrongx.liu@xxxxxxxxx>, Da Chun <ngugc@xxxxxx>
> >>> ---
> >>>  arch/x86/kvm/lapic.c |  3 +++
> >>>  arch/x86/kvm/vmx.c   | 12 ++++++------
> >>>  2 files changed, 9 insertions(+), 6 deletions(-)
> >>>
> >>> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> >>> index 66dd173..6942742 100644
> >>> --- a/arch/x86/kvm/lapic.c
> >>> +++ b/arch/x86/kvm/lapic.c
> >>> @@ -1712,6 +1712,9 @@ void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu,
> >>>  	apic->isr_count = kvm_apic_vid_enabled(vcpu->kvm) ?
> >>>  		1 : count_vectors(apic->regs + APIC_ISR);
> >>>  	apic->highest_isr_cache = -1;
> >>> +	if (kvm_x86_ops->hwapic_irr_update)
> >>> +		kvm_x86_ops->hwapic_irr_update(vcpu,
> >>> +				apic_find_highest_irr(apic));
> >>
> >> Could we pass 0 directly? Because it looks like we just need to clear RVI:
> >>
> >> 	kvm_x86_ops->hwapic_irr_update(vcpu, 0);
> >>
> >> I think this already makes sense based on your description.
> >>
> >> Thanks
> >> Tiejun
> >
> > No. This is a restore function, and we cannot assume that the callers
> > always need to reset to the initial state.
>
> Okay. Maybe I'm confused by the following change.
> >
> > Wei
> >>
> >>>  	kvm_x86_ops->hwapic_isr_update(vcpu->kvm,
> >>>  			apic_find_highest_isr(apic));
> >>>  	kvm_make_request(KVM_REQ_EVENT, vcpu);
> >>>  	kvm_rtc_eoi_tracking_restore_one(vcpu);
> >>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> >>> index fe4d2f4..d632548 100644
> >>> --- a/arch/x86/kvm/vmx.c
> >>> +++ b/arch/x86/kvm/vmx.c
> >>> @@ -7292,19 +7292,19 @@ static void vmx_set_rvi(int vector)
> >>>  static void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
> >>>  {
> >>>  	if (max_irr == -1)
> >>> +		max_irr = 0;
> >>> +
> >>> +	if (!is_guest_mode(vcpu)) {
> >>> +		vmx_set_rvi(max_irr);
> >>>  		return;
> >>> +	}
> >>>
> >>>  	/*
> >>>  	 * If a vmexit is needed, vmx_check_nested_events handles it.
> >>>  	 */
> >>> -	if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
> >>> +	if ((is_guest_mode(vcpu) && nested_exit_on_intr(vcpu)) || max_irr == 0)
>
> It's really not readable to modify max_irr to 0 and then check it here,
> especially when you read the original comment.
>
> So what about this?

I think both are ok. If we zero max_irr in vmx_set_rvi(), we still need this check:

	if ((is_guest_mode(vcpu) && nested_exit_on_intr(vcpu)) || max_irr == -1)

Let's see if Paolo has any comments, then send out a second version.

Wei

> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 0cd99d8..bc4558b 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -7280,6 +7280,9 @@ static void vmx_set_rvi(int vector)
>  	u16 status;
>  	u8 old;
>
> +	if (vector == -1)
> +		vector = 0;
> +
>  	status = vmcs_read16(GUEST_INTR_STATUS);
>  	old = (u8)status & 0xff;
>  	if ((u8)vector != old) {
> @@ -7291,9 +7294,6 @@ static void vmx_set_rvi(int vector)
>
>  static void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
>  {
> -	if (max_irr == -1)
> -		return;
> -
>  	/*
>  	 * If a vmexit is needed, vmx_check_nested_events handles it.
>  	 */
> @@ -7305,6 +7305,9 @@ static void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
>  		return;
>  	}
>
> +	if (max_irr == -1)
> +		return;
> +
>  	/*
>  	 * Fall back to pre-APICv interrupt injection since L2
>  	 * is run without virtual interrupt delivery.
>
> Thanks
> Tiejun
>
> >>>  		return;
> >>>
> >>> -	if (!is_guest_mode(vcpu)) {
> >>> -		vmx_set_rvi(max_irr);
> >>> -		return;
> >>> -	}
> >>> -
> >>>  	/*
> >>>  	 * Fall back to pre-APICv interrupt injection since L2
> >>>  	 * is run without virtual interrupt delivery.
> >>>
> >

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html