Re: [PATCH] KVM: x86: Rework INIT and SIPI handling

On 2013-03-13 13:48, Gleb Natapov wrote:
> On Wed, Mar 13, 2013 at 01:40:33PM +0100, Jan Kiszka wrote:
>> On 2013-03-13 13:29, Gleb Natapov wrote:
>>> On Wed, Mar 13, 2013 at 12:36:58PM +0100, Jan Kiszka wrote:
>>>> On 2013-03-13 12:16, Jan Kiszka wrote:
>>>>>>> @@ -5871,8 +5867,8 @@ static int __vcpu_run(struct kvm_vcpu *vcpu)
>>>>>>>                       srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
>>>>>>>                       kvm_vcpu_block(vcpu);
>>>>>>>                       vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
>>>>>>> -                     if (kvm_check_request(KVM_REQ_UNHALT, vcpu))
>>>>>>> -                     {
>>>>>>> +                     if (kvm_check_request(KVM_REQ_UNHALT, vcpu)) {
>>>>>>> +                             kvm_apic_accept_events(vcpu);
>>>>>> I think we can drop this. If an INIT happens while the vcpu is halted,
>>>>>> it will become runnable here and kvm_apic_accept_events() will be
>>>>>> called in vcpu_enter_guest().
>>>>>
>>>>> I'm not that sure, but I will recheck carefully.
>>>>
>>>> Doesn't work: If the state was INIT_RECEIVED, we will not process the
>>>> SIPI but reenter kvm_vcpu_block.
>>> Which raises the question: what if the vcpu is in INIT_RECEIVED and it
>>> receives an NMI? That will make kvm_arch_vcpu_runnable() return true, but
>>> the code will get back to kvm_vcpu_block() again.
>>
>> Sounds like we had a bug in this area before. This patch won't improve
>> it yet. We need to "block NMIs" while in wait-for-sipi state.
>>
> The problem we have now is in fact much more serious. Since INIT does
> not reset the vcpu, even a regular interrupt can cause this. I wonder what
> should happen to NMIs that were received while the CPU is in the INIT state?

I cannot find direct references to the "native" wait-for-SIPI state, but
while in guest mode, external interrupts and NMIs are "blocked", which
likely means they are held pending.

We are probably fine if we add a condition to kvm_arch_vcpu_runnable so
that pending NMIs are not a reason to wake up while in INIT_RECEIVED.
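
Roughly what I have in mind, just as an untested sketch and not part of
this patch (the INIT/SIPI helper below is made up for illustration, and
the existing terms of kvm_arch_vcpu_runnable are abbreviated):

static bool kvm_vcpu_has_init_or_sipi(struct kvm_vcpu *vcpu)
{
	/* illustrative helper: a pending INIT or SIPI event for this vcpu */
	return vcpu->arch.apic && vcpu->arch.apic->pending_events;
}

int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
{
	bool wait_for_sipi =
		vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED;

	/*
	 * In wait-for-SIPI, neither a queued NMI nor a pending external
	 * interrupt may terminate kvm_vcpu_block(); only INIT/SIPI
	 * events should.
	 */
	return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
		!vcpu->arch.apf.halted)
		|| kvm_vcpu_has_init_or_sipi(vcpu)
		|| (!wait_for_sipi && atomic_read(&vcpu->arch.nmi_queued))
		|| (!wait_for_sipi && kvm_arch_interrupt_allowed(vcpu) &&
		    kvm_cpu_has_interrupt(vcpu));
}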

> 
>> BTW, I've just stumbled over more suspicious code:
>>
>> static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
>> ...
>> 	switch (delivery_mode) {
>> 	case APIC_DM_LOWEST:
>> 		vcpu->arch.apic_arb_prio++;
>>
>> What makes this (remote) increment safe, so that we can avoid an atomic inc?
>>
> Because we do not really care about the result.

You mean we do not care whether two racing events increment it by one or
by two? The result is actually evaluated, isn't it?
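
Two CPUs delivering lowest-priority interrupts to the same target can both
read the old apic_arb_prio value and write back old+1, so one increment is
lost and the value later used by kvm_apic_compare_prio() lags behind. Just
to illustrate the alternative (a sketch only, not a proposal for this
patch; it assumes the field were converted to an atomic_t):

	switch (delivery_mode) {
	case APIC_DM_LOWEST:
		/*
		 * lost-update-free variant; needs apic_arb_prio as an
		 * atomic_t and atomic_read() at the comparison site
		 */
		atomic_inc(&vcpu->arch.apic_arb_prio);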

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux