On Wed, 18 Aug 2021 20:40:42 +0100,
Oliver Upton <oupton@xxxxxxxxxx> wrote:
>
> Hey Marc,
>
> On Wed, Aug 18, 2021 at 12:05 PM Raghavendra Rao Ananta
> <rananta@xxxxxxxxxx> wrote:
> >
> > On Wed, Aug 18, 2021 at 11:14 AM Marc Zyngier <maz@xxxxxxxxxx> wrote:
> > >
> > > When a mapped level interrupt (a timer, for example) is deactivated
> > > by the guest, the corresponding host interrupt is equally deactivated.
> > > However, the fate of the pending state still needs to be dealt
> > > with in SW.
> > >
> > > This is especially true when the interrupt was in the active+pending
> > > state in the virtual distributor at the point where the guest
> > > was entered. On exit, the pending state is potentially stale
> > > (the guest may have put the interrupt in a non-pending state).
> > >
> > > If we don't do anything, the interrupt will be spuriously injected
> > > in the guest. Although this shouldn't have any ill effect (spurious
> > > interrupts are always possible), we can improve the emulation by
> > > detecting the deactivation-while-pending case and resampling the
> > > interrupt.
> > >
> > > Fixes: e40cc57bac79 ("KVM: arm/arm64: vgic: Support level-triggered mapped interrupts")
> > > Reported-by: Raghavendra Rao Ananta <rananta@xxxxxxxxxx>
> > > Signed-off-by: Marc Zyngier <maz@xxxxxxxxxx>
> > > Cc: stable@xxxxxxxxxxxxxxx
> > > ---
> > >  arch/arm64/kvm/vgic/vgic-v2.c | 25 ++++++++++++++++++-------
> > >  arch/arm64/kvm/vgic/vgic-v3.c | 25 ++++++++++++++++++-------
> > >  2 files changed, 36 insertions(+), 14 deletions(-)
> > >
> > Tested-by: Raghavendra Rao Ananta <rananta@xxxxxxxxxx>
> >
> > Thanks,
> > Raghavendra
> > > diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c
> > > index 2c580204f1dc..3e52ea86a87f 100644
> > > --- a/arch/arm64/kvm/vgic/vgic-v2.c
> > > +++ b/arch/arm64/kvm/vgic/vgic-v2.c
> > > @@ -60,6 +60,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> > >  		u32 val = cpuif->vgic_lr[lr];
> > >  		u32 cpuid, intid = val & GICH_LR_VIRTUALID;
> > >  		struct vgic_irq *irq;
> > > +		bool deactivated;
> > >
> > >  		/* Extract the source vCPU id from the LR */
> > >  		cpuid = val & GICH_LR_PHYSID_CPUID;
> > > @@ -75,7 +76,8 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> > >
> > >  		raw_spin_lock(&irq->irq_lock);
> > >
> > > -		/* Always preserve the active bit */
> > > +		/* Always preserve the active bit, note deactivation */
> > > +		deactivated = irq->active && !(val & GICH_LR_ACTIVE_BIT);
> > >  		irq->active = !!(val & GICH_LR_ACTIVE_BIT);
> > >
> > >  		if (irq->active && vgic_irq_is_sgi(intid))
> > > @@ -105,6 +107,12 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> > >  		 * device state could have changed or we simply need to
> > >  		 * process the still pending interrupt later.
> > >  		 *
> > > +		 * We could also have entered the guest with the interrupt
> > > +		 * active+pending. On the next exit, we need to re-evaluate
> > > +		 * the pending state, as it could otherwise result in a
> > > +		 * spurious interrupt by injecting a now potentially stale
> > > +		 * pending state.
> > > +		 *
> > >  		 * If this causes us to lower the level, we have to also clear
> > >  		 * the physical active state, since we will otherwise never be
> > >  		 * told when the interrupt becomes asserted again.
> > > @@ -115,12 +123,15 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> > >  		if (vgic_irq_is_mapped_level(irq)) {
> > >  			bool resample = false;
> > >
> > > -			if (val & GICH_LR_PENDING_BIT) {
> > > -				irq->line_level = vgic_get_phys_line_level(irq);
> > > -				resample = !irq->line_level;
> > > -			} else if (vgic_irq_needs_resampling(irq) &&
> > > -				   !(irq->active || irq->pending_latch)) {
> > > -				resample = true;
> > > +			if (unlikely(vgic_irq_needs_resampling(irq))) {
> > > +				if (!(irq->active || irq->pending_latch))
> > > +					resample = true;
> > > +			} else {
> > > +				if ((val & GICH_LR_PENDING_BIT) ||
> > > +				    (deactivated && irq->line_level)) {
> > > +					irq->line_level = vgic_get_phys_line_level(irq);
> > > +					resample = !irq->line_level;
> > > +				}
> > >  			}
> > >
> > >  			if (resample)
> > > diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
> > > index 66004f61cd83..74f9aefffd5e 100644
> > > --- a/arch/arm64/kvm/vgic/vgic-v3.c
> > > +++ b/arch/arm64/kvm/vgic/vgic-v3.c
> > > @@ -46,6 +46,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> > >  		u32 intid, cpuid;
> > >  		struct vgic_irq *irq;
> > >  		bool is_v2_sgi = false;
> > > +		bool deactivated;
> > >
> > >  		cpuid = val & GICH_LR_PHYSID_CPUID;
> > >  		cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT;
> > > @@ -68,7 +69,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> > >
> > >  		raw_spin_lock(&irq->irq_lock);
> > >
> > > -		/* Always preserve the active bit */
> > > +		/* Always preserve the active bit, note deactivation */
> > > +		deactivated = irq->active && !(val & ICH_LR_ACTIVE_BIT);
> > >  		irq->active = !!(val & ICH_LR_ACTIVE_BIT);
> > >
> > >  		if (irq->active && is_v2_sgi)
> > > @@ -98,6 +100,12 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> > >  		 * device state could have changed or we simply need to
> > >  		 * process the still pending interrupt later.
> > >  		 *
> > > +		 * We could also have entered the guest with the interrupt
> > > +		 * active+pending. On the next exit, we need to re-evaluate
> > > +		 * the pending state, as it could otherwise result in a
> > > +		 * spurious interrupt by injecting a now potentially stale
> > > +		 * pending state.
> > > +		 *
> > >  		 * If this causes us to lower the level, we have to also clear
> > >  		 * the physical active state, since we will otherwise never be
> > >  		 * told when the interrupt becomes asserted again.
> > > @@ -108,12 +116,15 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> > >  		if (vgic_irq_is_mapped_level(irq)) {
> > >  			bool resample = false;
> > >
> > > -			if (val & ICH_LR_PENDING_BIT) {
> > > -				irq->line_level = vgic_get_phys_line_level(irq);
> > > -				resample = !irq->line_level;
> > > -			} else if (vgic_irq_needs_resampling(irq) &&
> > > -				   !(irq->active || irq->pending_latch)) {
> > > -				resample = true;
> > > +			if (unlikely(vgic_irq_needs_resampling(irq))) {
> > > +				if (!(irq->active || irq->pending_latch))
> > > +					resample = true;
> > > +			} else {
> > > +				if ((val & ICH_LR_PENDING_BIT) ||
> > > +				    (deactivated && irq->line_level)) {
> > > +					irq->line_level = vgic_get_phys_line_level(irq);
> > > +					resample = !irq->line_level;
> > > +				}
>
> The vGICv3 and vGICv2 implementations look identical here, should we
> have a helper that keeps the code common between the two?

Probably. This code used to be much simpler, but it has grown a bit
unwieldy since I added the M1 support hack. This change doesn't make it
look any better, so it is probably time for a minor refactor.

I've pushed out an updated patch, but I'll wait a bit more for
additional feedback before posting it again.
>
> Otherwise, the functional change LGTM, so:
>
> Reviewed-by: Oliver Upton <oupton@xxxxxxxxxx>

Thanks,

	M.

--
Without deviation from the norm, progress is not possible.
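[As a rough illustration of the refactor discussed above, a shared helper
could pull the duplicated mapped-level resampling logic out of both
fold_lr_state functions, leaving only the LR decoding per GIC flavour.
This is a sketch only: the helper name vgic_irq_handle_resampling() and
its exact signature are assumptions here, not something posted in this
thread.]

/*
 * Hypothetical common helper: callers decode their own LR format and
 * pass in whether the LR was still pending and whether the guest just
 * deactivated the interrupt.
 */
static void vgic_irq_handle_resampling(struct vgic_irq *irq,
				       bool lr_deactivated, bool lr_pending)
{
	if (!vgic_irq_is_mapped_level(irq))
		return;

	if (unlikely(vgic_irq_needs_resampling(irq))) {
		/* Forwarded interrupt: resample once it is fully idle */
		if (!(irq->active || irq->pending_latch))
			vgic_irq_set_phys_active(irq, false);
	} else if (lr_pending || (lr_deactivated && irq->line_level)) {
		/*
		 * Re-read the physical line; if it has dropped, clear the
		 * physical active state so we get told about the next
		 * rising edge.
		 */
		irq->line_level = vgic_get_phys_line_level(irq);
		if (!irq->line_level)
			vgic_irq_set_phys_active(irq, false);
	}
}

[The vgic-v2 caller would then boil down to something like
vgic_irq_handle_resampling(irq, deactivated, val & GICH_LR_PENDING_BIT),
with the vgic-v3 one doing the same using ICH_LR_PENDING_BIT.]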