On Wed, 24 Jan 2024 20:48:58 +0000,
Oliver Upton <oliver.upton@xxxxxxxxx> wrote:
>
> Start iterating the LPI xarray in anticipation of removing the LPI
> linked-list.
>
> Signed-off-by: Oliver Upton <oliver.upton@xxxxxxxxx>
> ---
>  arch/arm64/kvm/vgic/vgic-its.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
> index f152d670113f..a2d95a279798 100644
> --- a/arch/arm64/kvm/vgic/vgic-its.c
> +++ b/arch/arm64/kvm/vgic/vgic-its.c
> @@ -332,6 +332,7 @@ static int update_lpi_config(struct kvm *kvm, struct vgic_irq *irq,
>  int vgic_copy_lpi_list(struct kvm *kvm, struct kvm_vcpu *vcpu, u32 **intid_ptr)
>  {
>  	struct vgic_dist *dist = &kvm->arch.vgic;
> +	XA_STATE(xas, &dist->lpi_xa, 0);

Why 0? LPIs start at 8192 (aka GIC_LPI_OFFSET), so it'd probably make
sense to use that.

>  	struct vgic_irq *irq;
>  	unsigned long flags;
>  	u32 *intids;
> @@ -350,7 +351,9 @@ int vgic_copy_lpi_list(struct kvm *kvm, struct kvm_vcpu *vcpu, u32 **intid_ptr)
>  		return -ENOMEM;
>
>  	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);
> -	list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
> +	rcu_read_lock();
> +
> +	xas_for_each(&xas, irq, U32_MAX) {

Similar thing: we advertise 16 bits of ID space (described as
INTERRUPT_ID_BITS_ITS), so capping at that level would make it more
understandable.

>  		if (i == irq_count)
>  			break;
>  		/* We don't need to "get" the IRQ, as we hold the list lock. */
> @@ -358,6 +361,8 @@ int vgic_copy_lpi_list(struct kvm *kvm, struct kvm_vcpu *vcpu, u32 **intid_ptr)
>  			continue;
>  		intids[i++] = irq->intid;
>  	}
> +
> +	rcu_read_unlock();
>  	raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);
>
>  	*intid_ptr = intids;

Thanks,

	M.

--
Without deviation from the norm, progress is not possible.
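
For illustration, the iteration with both of the above suggestions
applied could look something like this (untested sketch, not the
actual patch; GIC_LPI_MAX_INTID is a strawman name invented here for
the top of the advertised ID space, not an existing macro):

	/*
	 * Strawman: highest INTID representable in the 16 bits of ID
	 * space we advertise (INTERRUPT_ID_BITS_ITS == 16).
	 */
	#define GIC_LPI_MAX_INTID	((1 << INTERRUPT_ID_BITS_ITS) - 1)

	/* Start the cursor at the first valid LPI INTID (8192)... */
	XA_STATE(xas, &dist->lpi_xa, GIC_LPI_OFFSET);

	...

	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);
	rcu_read_lock();

	/* ... and stop once past the advertised ID space. */
	xas_for_each(&xas, irq, GIC_LPI_MAX_INTID) {
		/* body unchanged from the quoted hunks */
		...
	}

	rcu_read_unlock();
	raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);

The behaviour shouldn't change either way (there are no entries below
GIC_LPI_OFFSET or above the advertised ID space in the xarray), but
the bounds then document the valid INTID range instead of relying on
the reader knowing the xarray's population.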