Re: [PATCH] KVM: x86: Rework INIT and SIPI handling

On 2013-03-13 13:29, Gleb Natapov wrote:
> On Wed, Mar 13, 2013 at 12:36:58PM +0100, Jan Kiszka wrote:
>> On 2013-03-13 12:16, Jan Kiszka wrote:
>>>>> @@ -5871,8 +5867,8 @@ static int __vcpu_run(struct kvm_vcpu *vcpu)
>>>>>                       srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
>>>>>                       kvm_vcpu_block(vcpu);
>>>>>                       vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
>>>>> -                     if (kvm_check_request(KVM_REQ_UNHALT, vcpu))
>>>>> -                     {
>>>>> +                     if (kvm_check_request(KVM_REQ_UNHALT, vcpu)) {
>>>>> +                             kvm_apic_accept_events(vcpu);
>>>> I think we can drop this. If INIT happens while vcpu is halted it will
>>>> become runnable here and kvm_apic_accept_events() will be called in
>>>> vcpu_enter_guest().
>>>
>>> I'm not that sure, but I will recheck carefully.
>>
>> Doesn't work: If the state was INIT_RECEIVED, we will not process the
>> SIPI but reenter kvm_vcpu_block.
> Which raises the question. What if vcpu is in INIT_RECEIVED and it
> receives NMI. It will make kvm_arch_vcpu_runnable() return true but code
> will get back to kvm_vcpu_block() again.

Sounds like we already had a bug in this area. This patch doesn't improve
it yet. We need to "block" NMIs while in wait-for-SIPI state.
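
Roughly what I have in mind (just a sketch, not from the patch; the
helper name is made up): an NMI must not count as a wake-up reason as
long as mp_state is INIT_RECEIVED, e.g. when computing runnability:

static bool nmi_is_wakeup_reason(struct kvm_vcpu *vcpu)
{
	/*
	 * Sketch only: an NMI must not wake a vcpu that is still
	 * waiting for SIPI; it has to stay latched until the vcpu
	 * leaves KVM_MP_STATE_INIT_RECEIVED.
	 */
	if (vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED)
		return false;

	return vcpu->arch.nmi_pending != 0;
}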

BTW, I've just stumbled over more suspicious code:

static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
...
	switch (delivery_mode) {
	case APIC_DM_LOWEST:
		vcpu->arch.apic_arb_prio++;

What makes this (remote) increment safe, so that we can avoid an atomic inc?
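
Unless there is some implicit serialization I'm missing, I'd expect
something along these lines to be needed (sketch only, assuming
apic_arb_prio is converted to an atomic_t, which the current code does
not do; readers such as kvm_apic_compare_prio() would then need
atomic_read()):

	switch (delivery_mode) {
	case APIC_DM_LOWEST:
		/*
		 * atomic_inc() avoids lost updates when several source
		 * vcpus bump the arbitration priority of the same
		 * target concurrently.
		 */
		atomic_inc(&vcpu->arch.apic_arb_prio);
		/* fall through to APIC_DM_FIXED, as before */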

> 
>>                                 And it's more consistent to process the
>> events here IMHO.
>>
> I would like to minimize the number of places kvm_apic_accept_events()
> is called, but it looks like we cannot remove it from here indeed. What
> about calling it in "case KVM_MP_STATE_INIT_RECEIVED:"?

Should work, but what is the benefit? I'd prefer to avoid temporarily
switching to RUNNABLE and entering vcpu_enter_guest() just to find out
that it was actually an INIT_RECEIVED transition. Checking
unconditionally keeps the control flow simpler.
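
I.e. keep the hunk from the patch, so that __vcpu_run() roughly does
(sketch of the resulting flow; the exact switch body may differ):

	if (kvm_check_request(KVM_REQ_UNHALT, vcpu)) {
		/*
		 * Turn a pending INIT/SIPI into the corresponding
		 * mp_state transition before deciding how to proceed.
		 */
		kvm_apic_accept_events(vcpu);
		switch (vcpu->arch.mp_state) {
		case KVM_MP_STATE_HALTED:
			vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
			/* fall through */
		case KVM_MP_STATE_RUNNABLE:
			vcpu->arch.apf.halted = false;
			break;
		case KVM_MP_STATE_INIT_RECEIVED:
			/* still waiting for SIPI: go back to blocking */
			break;
		default:
			r = -EINTR;
			break;
		}
	}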

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux