On Tue, Sep 11, 2012 at 2:50 PM, Marc Zyngier <marc.zyngier@xxxxxxx> wrote:
> __kvm_vgic_sync_from_cpu() doesn't really touch the distributor itself,
> as the irq_pending_on_cpu access doesn't really need any locking.
>
> Remove the corresponding spin_lock() access from kvm_vgic_sync_from_cpu().
>
> Signed-off-by: Marc Zyngier <marc.zyngier@xxxxxxx>
> ---
>  arch/arm/kvm/vgic.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
> index b77f8bf..9cac7b2 100644
> --- a/arch/arm/kvm/vgic.c
> +++ b/arch/arm/kvm/vgic.c
> @@ -790,7 +790,9 @@ epilog:
>  }
>
>  /*
> - * Sync back the VGIC state after a guest run.
> + * Sync back the VGIC state after a guest run. We do not really touch
> + * the distributor here (the irq_pending_on_cpu bit is safe to set),
> + * so there is no need for taking its lock.
>   */
>  static void __kvm_vgic_sync_from_cpu(struct kvm_vcpu *vcpu)
>  {
> @@ -815,8 +817,10 @@ static void __kvm_vgic_sync_from_cpu(struct kvm_vcpu *vcpu)
>  	/* Check if we still have something up our sleeve... */
>  	pending = find_first_zero_bit((unsigned long *)vgic_cpu->vgic_elrsr,
>  				      vgic_cpu->nr_lr);
> -	if (pending < vgic_cpu->nr_lr)
> +	if (pending < vgic_cpu->nr_lr) {
>  		set_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
> +		smp_mb();
> +	}
>  }
>
>  void kvm_vgic_sync_to_cpu(struct kvm_vcpu *vcpu)
> @@ -833,14 +837,10 @@ void kvm_vgic_sync_to_cpu(struct kvm_vcpu *vcpu)
>
>  void kvm_vgic_sync_from_cpu(struct kvm_vcpu *vcpu)
>  {
> -	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
> -
>  	if (!irqchip_in_kernel(vcpu->kvm))
>  		return;
>
> -	spin_lock(&dist->lock);
>  	__kvm_vgic_sync_from_cpu(vcpu);
> -	spin_unlock(&dist->lock);
>  }
>
>  int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
> --
> 1.7.12
>

Thanks, applied
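
For readers following the thread without the surrounding vgic code, here is a rough, stand-alone sketch of the lockless pending-bit idiom the patch relies on: a single atomic read-modify-write sets the per-vCPU bit, a full barrier orders it against whatever the caller checks next, and the reader tests the bit without taking any lock. This is illustrative C11, not the kernel implementation; the names pending_on_cpu, mark_pending() and has_pending() are made up for the example.

/*
 * Illustrative sketch only (not kernel code): lockless "pending" bit,
 * set with an atomic RMW plus a full memory barrier, read without a lock.
 */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_ulong pending_on_cpu;            /* one bit per virtual CPU */

/* Writer side: roughly the spirit of set_bit() + smp_mb(). No lock is
 * needed because the update is a single atomic read-modify-write. */
static void mark_pending(unsigned int cpu_id)
{
	atomic_fetch_or_explicit(&pending_on_cpu, 1UL << cpu_id,
				 memory_order_seq_cst);
	atomic_thread_fence(memory_order_seq_cst);   /* ~ smp_mb() */
}

/* Reader side: a later check observes the bit, again without a lock. */
static bool has_pending(unsigned int cpu_id)
{
	return atomic_load_explicit(&pending_on_cpu, memory_order_acquire)
	       & (1UL << cpu_id);
}

The reason the spin_lock() can go away is that the only distributor state touched on this path is that one bit, which an atomic bit operation can update safely on its own; the barrier just keeps the update visible ahead of whatever the caller inspects afterwards.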