Re: [PATCH 3/8] KVM: arm/arm64: vgic-its: Cache successful MSI->LPI translation

On 07/06/2019 09:56, Julien Thierry wrote:
> 
> 
> On 07/06/2019 09:51, Marc Zyngier wrote:
>> On 07/06/2019 09:35, Julien Thierry wrote:
>>> Hi Marc,
>>>
>>> On 06/06/2019 17:54, Marc Zyngier wrote:
>>>> On a successful translation, preserve the parameters in the LPI
>>>> translation cache. Each translation reuses the last slot
>>>> in the list, naturally evicting the least recently used entry.
>>>>
>>>> Signed-off-by: Marc Zyngier <marc.zyngier@xxxxxxx>
>>>> ---
>>>>  virt/kvm/arm/vgic/vgic-its.c | 41 ++++++++++++++++++++++++++++++++++++
>>>>  1 file changed, 41 insertions(+)
>>>>
>>>> diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c
>>>> index 5758504fd934..bc370b6c5afa 100644
>>>> --- a/virt/kvm/arm/vgic/vgic-its.c
>>>> +++ b/virt/kvm/arm/vgic/vgic-its.c
>>>> @@ -538,6 +538,45 @@ static unsigned long vgic_mmio_read_its_idregs(struct kvm *kvm,
>>>>  	return 0;
>>>>  }
>>>>  
>>>> +static void vgic_its_cache_translation(struct kvm *kvm, struct vgic_its *its,
>>>> +				       u32 devid, u32 eventid,
>>>> +				       struct vgic_irq *irq)
>>>> +{
>>>> +	struct vgic_dist *dist = &kvm->arch.vgic;
>>>> +	struct vgic_translation_cache_entry *cte;
>>>> +	unsigned long flags;
>>>> +
>>>> +	/* Do not cache a directly injected interrupt */
>>>> +	if (irq->hw)
>>>> +		return;
>>>> +
>>>> +	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);
>>>> +
>>>> +	/* Always reuse the last entry (LRU policy) */
>>>> +	cte = list_last_entry(&dist->lpi_translation_cache,
>>>> +			      typeof(*cte), entry);
>>>> +
>>>> +	/*
>>>> +	 * Caching the translation implies having an extra reference
>>>> +	 * to the interrupt, so drop the potential reference on what
>>>> +	 * was in the cache, and increment it on the new interrupt.
>>>> +	 */
>>>> +	if (cte->irq)
>>>> +		__vgic_put_lpi_locked(kvm, cte->irq);
>>>> +
>>>> +	vgic_get_irq_kref(irq);
>>>
>>> If cte->irq == irq, can we avoid the ref putting and getting and just
>>> move the list entry (and update cte)?
>> But in that case, we should have hit in the cache in the first place, no?
>> Or is there a particular race I'm not thinking of just yet?
>>
> 
> Yes, I had not made it far enough in the series to see the cache hits
> and assumed this function would also be used to update the LRU policy.
> 
> You can dismiss this comment, sorry for the noise.

Well, I think you're onto something here. Consider the following
(slightly improbable, but not impossible, scenario):

CPU0:                        CPU1:

interrupt arrives,
cache miss

<physical interrupt affinity change>

                             interrupt arrives,
                             cache miss

                             resolve translation,
                             cache allocation
resolve translation,
cache allocation

Oh look, we have the same interrupt in the cache twice. Nothing really
bad should result from that, but that's not really the anticipated
behaviour. Which means that list_last_entry() is not the right thing
to do, and we should look up this particular interrupt in the cache
before adding it. This probably indicates that a long list is not the
best data structure for a cache (who would have thought?).
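
Something along these lines (a rough, untested sketch, assuming the
cache entries also carry the devid/eventid, and probably the ITS
doorbell, they were installed for) is what I have in mind:

static void vgic_its_cache_translation(struct kvm *kvm, struct vgic_its *its,
				       u32 devid, u32 eventid,
				       struct vgic_irq *irq)
{
	struct vgic_dist *dist = &kvm->arch.vgic;
	struct vgic_translation_cache_entry *cte;
	unsigned long flags;

	/* Do not cache a directly injected interrupt */
	if (irq->hw)
		return;

	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);

	/* The cache is pre-populated; nothing to do if it isn't there yet */
	if (unlikely(list_empty(&dist->lpi_translation_cache)))
		goto out;

	/*
	 * Another CPU could have raced with us and cached the same
	 * translation behind our back, so check for it before
	 * evicting anything.
	 */
	list_for_each_entry(cte, &dist->lpi_translation_cache, entry) {
		if (cte->devid == devid && cte->eventid == eventid)
			goto out;
	}

	/* Otherwise evict the last entry (LRU policy), as before */
	cte = list_last_entry(&dist->lpi_translation_cache,
			      typeof(*cte), entry);

	/*
	 * Caching the translation implies having an extra reference
	 * to the interrupt, so drop the potential reference on what
	 * was in the cache, and increment it on the new interrupt.
	 */
	if (cte->irq)
		__vgic_put_lpi_locked(kvm, cte->irq);

	vgic_get_irq_kref(irq);

	cte->devid   = devid;
	cte->eventid = eventid;
	cte->irq     = irq;

	/* Promote the freshly cached entry to most recently used */
	list_move(&cte->entry, &dist->lpi_translation_cache);

out:
	raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);
}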

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...
