Re: [PATCH] ARM: KVM: make vgic_inject_irq() compute only one IRQ

On Mon, Sep 10, 2012 at 4:33 PM, Marc Zyngier <marc.zyngier@xxxxxxx> wrote:
> On Mon, 10 Sep 2012 16:23:15 -0400, Christoffer Dall
> <c.dall@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>> On Wed, Sep 5, 2012 at 1:02 PM, Marc Zyngier <marc.zyngier@xxxxxxx>
> wrote:
>>> Recomputing the whole GIC state when injecting a single interrupt
>>> is a tiny bit silly.
>>>
>>> Instead, just computing the new state for this particular IRQ saves
>>> a bit of processing and makes things noticeably faster.
>>
>> noticeably how?
>
> By being able to inject about 10% more interrupts.
>
>>
>>>
>>> Signed-off-by: Marc Zyngier <marc.zyngier@xxxxxxx>
>>> ---
>>>  arch/arm/include/asm/kvm_vgic.h |  1 +
>>>  arch/arm/kvm/vgic.c             | 69
>>>  ++++++++++++++++++++++++++++++-----------
>>>  2 files changed, 52 insertions(+), 18 deletions(-)
>>>
>>> diff --git a/arch/arm/include/asm/kvm_vgic.h
>>> b/arch/arm/include/asm/kvm_vgic.h
>>> index 633e697..1f7fd32 100644
>>> --- a/arch/arm/include/asm/kvm_vgic.h
>>> +++ b/arch/arm/include/asm/kvm_vgic.h
>>> @@ -183,6 +183,7 @@ struct vgic_dist {
>>>         u8                      irq_sgi_sources[VGIC_MAX_CPUS][16];
>>>
>>>         /* Target CPU for each IRQ */
>>> +       u8                      irq_spi_cpu[VGIC_NR_SHARED_IRQS];
>>>         struct vgic_bitmap      irq_spi_target[VGIC_MAX_CPUS];
>>>
>>>         /* Bitmap indicating which CPU has something pending */
>>> diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
>>> index 5177792..5f32772 100644
>>> --- a/arch/arm/kvm/vgic.c
>>> +++ b/arch/arm/kvm/vgic.c
>>> @@ -47,6 +47,9 @@
>>>   *     registers, stored on each vcpu. We only keep one bit of
>>>   *     information per interrupt, making sure that only one vcpu can
>>>   *     accept the interrupt.
>>> + *   The same is true when injecting an interrupt, except that we only
>>> + *   consider a single interrupt at a time. The irq_spi_cpu array
>>> + *   contains the target CPU for each SPI.
>>>   *
>>>   * The handling of level interrupts adds some extra complexity. We
>>>   * need to track when the interrupt has been EOIed, so we can sample
>>> @@ -313,6 +316,7 @@ static void vgic_set_target_reg(struct kvm *kvm,
> u32
>>> val, int irq)
>>>                 int shift = i * 8;
>>>                 target = ffs((val >> shift) & 0xffU);
>>>                 target = target ? (target - 1) : 0;
>>> +               dist->irq_spi_cpu[irq] = target;
>>>                 kvm_for_each_vcpu(c, vcpu, kvm) {
>>>                         bmap =
>>>
> vgic_bitmap_get_shared_map(&dist->irq_spi_target[c]);
>>>                         if (c == target)
>>> @@ -866,23 +870,19 @@ static void vgic_kick_vcpus(struct kvm *kvm)
>>>         }
>>>  }
>>>
>>> -int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, const struct
>>> kvm_irq_level *irq)
>>> +static bool vgic_update_irq_state(struct kvm *kvm, int cpuid,
>>> +                                 const struct kvm_irq_level *irq)
>>>  {
>>>         struct vgic_dist *dist = &kvm->arch.vgic;
>>> -       int nrcpus = atomic_read(&kvm->online_vcpus);
>>> -       int is_edge, state;
>>> +       struct kvm_vcpu *vcpu;
>>> +       int is_edge, state, level;
>>> +       int pend, enabled;
>>>         unsigned long flags;
>>> -       bool updated_state = false;
>>>
>>> -       if (cpuid >= nrcpus)
>>> -               return -EINVAL;
>>> -
>>> -       /* Only PPIs or SPIs */
>>> -       if (irq->irq >= VGIC_NR_IRQS || irq->irq < 16)
>>> -               return -EINVAL;
>>> +       level = !!irq->level;
>>>
>>> -       kvm_debug("Inject IRQ%d\n", irq->irq);
>>>         spin_lock_irqsave(&dist->lock, flags);
>>> +
>>>         is_edge = vgic_irq_is_edge(dist, irq->irq);
>>>         state = vgic_bitmap_get_irq_val(&dist->irq_state, cpuid,
>>>         irq->irq);
>>>
>>> @@ -891,16 +891,49 @@ int kvm_vgic_inject_irq(struct kvm *kvm, int
>>> cpuid, const struct kvm_irq_level *
>>>          * - level triggered and we change level
>>>          * - edge triggered and we have a rising edge
>>>          */
>>> -       if ((!is_edge && (state ^ !!irq->level)) ||
>>> -           (is_edge && !state && irq->level)) {
>>> -               vgic_bitmap_set_irq_val(&dist->irq_state, cpuid,
>>> -                                       irq->irq, !!irq->level);
>>> -               vgic_update_state(kvm);
>>> -               updated_state = true;
>>> +       if (!((!is_edge && (state ^ level)) ||
>>> +             (is_edge && !state && level))) {
>>> +               spin_unlock_irqrestore(&dist->lock, flags);
>>> +               return false;
>>> +       }
>>
>> wow, this one really hurts now, I'll probably have a go with my good
>> old friend De Morgan here and try to get rid of the double negation
>> and fit the comment a little better.
>
> I thought of that too but gave up, more pressing issues to solve.
>

It doesn't take very long. I would declare is_edge as a bool and add an
is_level, like so:

bool is_edge, is_level;
...
is_edge = ...;
is_level = !is_edge;

	/*
	 * Bail out unless either:
	 * - level triggered and we change level, or
	 * - edge triggered and we have a rising edge
	 */
	if ((is_level && !(state ^ level)) ||
	    (is_edge && (state || !level))) {
		spin_unlock_irqrestore(&dist->lock, flags);
		return false;
	}


>>
>>> +
>>> +       vgic_bitmap_set_irq_val(&dist->irq_state, cpuid, irq->irq,
>>> level);
>>> +
>>> +       enabled = vgic_bitmap_get_irq_val(&dist->irq_enabled, cpuid,
>>> irq->irq);
>>> +       pend = level && enabled;
>>> +
>>> +       if (irq->irq >= 32) {
>>> +               cpuid = dist->irq_spi_cpu[irq->irq - 32];
>>> +               pend &=
>>> vgic_bitmap_get_irq_val(&dist->irq_spi_target[cpuid],
>>> +                                               0, irq->irq);
>>>         }
>>> +
>>> +       kvm_debug("Inject IRQ%d level %d CPU%d\n", irq->irq, level,
>>> cpuid);
>>> +
>>> +       vcpu = kvm_get_vcpu(kvm, cpuid);
>>> +       if (pend) {
>>> +               set_bit(irq->irq, vcpu->arch.vgic_cpu.pending);
>>> +               set_bit(cpuid, &dist->irq_pending_on_cpu);
>>> +       } else
>>> +               clear_bit(irq->irq, vcpu->arch.vgic_cpu.pending);
>>> +
>>>         spin_unlock_irqrestore(&dist->lock, flags);
>>>
>>> -       if (updated_state)
>>> +       return true;
>>> +}
>>> +
>>> +int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, const struct
>>> kvm_irq_level *irq)
>>> +{
>>> +       int nrcpus = atomic_read(&kvm->online_vcpus);
>>> +
>>> +       if (cpuid >= nrcpus)
>>> +               return -EINVAL;
>>> +
>>> +       /* Only PPIs or SPIs */
>>> +       if (irq->irq >= VGIC_NR_IRQS || irq->irq < 16)
>>> +               return -EINVAL;
>>> +
>>> +       if (vgic_update_irq_state(kvm, cpuid, irq))
>>>                 vgic_kick_vcpus(kvm);
>>>
>>>         return 0;
>>> --
>>> 1.7.12
>>>
>>
>> this looks reasonable, I'll apply it on top of the irq rework and
>> change it so it fits.
>>
>> I assume you've tested this as usual?
>
> Yes, but I now have an updated version (this one breaks interrupt
> migration in a not-so-subtle way). I also have another couple of VGIC
> related patches that relax the locking and make things look really nice in
> perf.
>

ugh, doesn't sound good. Details?

> I'll post all of this tomorrow, based on your branch (I was in the process
> of rebasing my stuff when I started investigating the swap madness).
>

OK, I'll hold off on more stuff then until you send out the new patches.

-Christoffer
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm

