Re: [PATCH v3 5/6] KVM: arm64: vgic-v3: Retire all pending LPIs on vcpu destroy

Hi Zenghui,

On 23/04/2020 12:57, Zenghui Yu wrote:
> On 2020/4/23 19:35, James Morse wrote:
>> On 22/04/2020 17:18, Marc Zyngier wrote:
>>> From: Zenghui Yu <yuzenghui@xxxxxxxxxx>
>>>
>>> It's likely that the vcpu will not have handled all of its virtual
>>> interrupts when userspace decides to destroy it, leaving the pending
>>> ones in the ap_list. If an unhandled one is an LPI, its vgic_irq
>>> structure will eventually be leaked because of an extra refcount
>>> increment in vgic_queue_irq_unlock().
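
For background: vgic_queue_irq_unlock() pins the interrupt by taking an
extra reference when it adds it to the ap_list -- roughly the following,
simplified from virt/kvm/arm/vgic/vgic.c. The matching vgic_put_irq()
only happens once the interrupt is retired from the list, so an LPI that
is still queued when the vcpu is destroyed keeps its refcount elevated
forever, and its dynamically allocated vgic_irq is never freed:

|	/*
|	 * Grab a reference to the irq to reflect the fact that it is
|	 * now in the ap_list; vgic_put_irq() only frees the irq once
|	 * the refcount drops to zero.
|	 */
|	vgic_get_irq_kref(irq);
|	list_add_tail(&irq->ap_list, &vcpu->arch.vgic_cpu->ap_list_head);
|	irq->vcpu = vcpu;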
>>
>>> diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c
>>> index a963b9d766b73..53ec9b9d9bc43 100644
>>> --- a/virt/kvm/arm/vgic/vgic-init.c
>>> +++ b/virt/kvm/arm/vgic/vgic-init.c
>>> @@ -348,6 +348,12 @@ void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
>>>   {
>>>       struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
>>>   +    /*
>>> +     * Retire all pending LPIs on this vcpu anyway as we're
>>> +     * going to destroy it.
>>> +     */
>>
>> Looking at the other caller, do we need something like:
>> |    if (vgic_cpu->lpis_enabled)
>>
>> ?
> 
> If LPIs are disabled at the redistributor level then yes, there should
> be no pending LPIs in the ap_list. But I'm not sure how the following
> use-after-free BUG can be triggered.
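
For reference, the "other caller" above is presumably
vgic_mmio_write_v3r_ctlr(), which only flushes on a GICR_CTLR.EnableLPIs
1 -> 0 transition -- a simplified sketch, assuming the vgic-mmio-v3.c
code of this era:

|	static void vgic_mmio_write_v3r_ctlr(struct kvm_vcpu *vcpu,
|					     gpa_t addr, unsigned int len,
|					     unsigned long val)
|	{
|		struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
|		bool was_enabled = vgic_cpu->lpis_enabled;
|
|		if (!vgic_has_its(vcpu->kvm))
|			return;
|
|		vgic_cpu->lpis_enabled = val & GICR_CTLR_ENABLE_LPIS;
|
|		/* Only flush when LPIs go from enabled to disabled. */
|		if (was_enabled && !vgic_cpu->lpis_enabled)
|			vgic_flush_pending_lpis(vcpu);
|
|		if (!was_enabled && vgic_cpu->lpis_enabled)
|			vgic_enable_lpis(vcpu);
|	}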
> 
>>
>>> +    vgic_flush_pending_lpis(vcpu);
>>> +
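
For reference, vgic_flush_pending_lpis() itself looks roughly like this
(simplified from virt/kvm/arm/vgic/vgic.c; the final vgic_put_irq() is
what drops the extra reference taken at queue time):

|	void vgic_flush_pending_lpis(struct kvm_vcpu *vcpu)
|	{
|		struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
|		struct vgic_irq *irq, *tmp;
|		unsigned long flags;
|
|		raw_spin_lock_irqsave(&vgic_cpu->ap_list_lock, flags);
|
|		list_for_each_entry_safe(irq, tmp, &vgic_cpu->ap_list_head, ap_list) {
|			if (irq->intid >= VGIC_MIN_LPI) {
|				raw_spin_lock(&irq->irq_lock);
|				/* Take the LPI off the list and drop the ap_list reference. */
|				list_del(&irq->ap_list);
|				irq->vcpu = NULL;
|				raw_spin_unlock(&irq->irq_lock);
|				vgic_put_irq(vcpu->kvm, irq);
|			}
|		}
|
|		raw_spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
|	}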
>>
>> Otherwise, I get this on a gic-v2 machine!:
> 
> I don't have a gic-v2 one and thus can't reproduce it :-(
> 
>> [ 1742.187139] BUG: KASAN: use-after-free in vgic_flush_pending_lpis+0x250/0x2c0
>> [ 1742.194302] Read of size 8 at addr ffff0008e1bf1f28 by task qemu-system-aar/542
>> [ 1742.203140] CPU: 2 PID: 542 Comm: qemu-system-aar Not tainted 5.7.0-rc2-00006-g4fb0f7bb0e27 #2
>> [ 1742.211780] Hardware name: ARM LTD ARM Juno Development Platform/ARM Juno Development Platform, BIOS EDK II Jul 30 2018
>> [ 1742.222596] Call trace:
>> [ 1742.225059]  dump_backtrace+0x0/0x328
>> [ 1742.228738]  show_stack+0x18/0x28
>> [ 1742.232071]  dump_stack+0x134/0x1b0
>> [ 1742.235578]  print_address_description.isra.0+0x6c/0x350
>> [ 1742.240910]  __kasan_report+0x10c/0x180
>> [ 1742.244763]  kasan_report+0x4c/0x68
>> [ 1742.248268]  __asan_report_load8_noabort+0x30/0x48
>> [ 1742.253081]  vgic_flush_pending_lpis+0x250/0x2c0
> 
> Could you please show the result of
> 
> ./scripts/faddr2line vmlinux vgic_flush_pending_lpis+0x250/0x2c0
> 
> on your setup?

vgic_flush_pending_lpis+0x250/0x2c0:
vgic_flush_pending_lpis at arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic.c:159

Which is:
|	list_for_each_entry_safe(irq, tmp, &vgic_cpu->ap_list_head, ap_list) {

i.e. the list walk itself, following an ap_list ->next pointer into
freed memory, which matches the 8-byte read in the report. I think this
confirms Marc's view of the KASAN splat.


>> With that:
>> Reviewed-by: James Morse <james.morse@xxxxxxx>
> 
> Thanks a lot!

Heh, it looks like I had the wrong end of the stick with this... I wrongly
assumed that calling this on GICv2 would walk structures that held
something else.


Thanks,

James
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm



