On 2020-11-24 08:10, Shenming Lu wrote:
> On 2020/11/23 17:27, Marc Zyngier wrote:
>> On 2020-11-23 06:54, Shenming Lu wrote:
>>> From: Zenghui Yu <yuzenghui@xxxxxxxxxx>
>>>
>>> When setting the forwarding path of a VLPI, it is more consistent to
>> I'm not sure it is more consistent. It is a *new* behaviour, because
>> it only matters for migration, which has been so far unsupported.
> Alright, "consistent" may not be accurate...
> But I still doubt whether there is really no need to transfer the
> pending state from KVM's vgic to the VPT in set_forwarding regardless
> of migration, and similarly for unset_forwarding.
If you have to transfer that state outside of a save/restore, it means
that you have missed the programming of the PCI endpoint. This is an
established restriction: the MSI programming must occur *after* the
translation has been established using MAPI/MAPTI (see the large
comment at the beginning of vgic-v4.c).

If you want to revisit this, fair enough. But you will need a lot more
than just opportunistically transferring the pending state.
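
To make the ordering concrete, here is a trimmed-down sketch of the
set_forwarding path (illustrative only; error handling, locking and the
construction of 'map' are omitted). Forwarding, and therefore any
pending-state transfer, can only happen once the translation exists:

/* Fails unless the guest has already issued MAPI/MAPTI at the vITS. */
ret = vgic_its_resolve_lpi(kvm, irq_entry, &irq);
if (ret)
        goto out;

/* Map the VLPI at the physical ITS level (VMAPTI). */
ret = its_map_vlpi(virq, &map);
if (ret)
        goto out;

irq->hw = true;
irq->host_irq = virq;
/* Only from this point on does transferring irq->pending_latch mean anything. */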
>>> also transfer the pending state from irq->pending_latch to VPT
>>> (especially in migration, the pending states of VLPIs are restored
>>> into kvm's vgic first). And we currently send "INT+VSYNC" to trigger
>>> a VLPI to pending.
>>>
>>> Signed-off-by: Zenghui Yu <yuzenghui@xxxxxxxxxx>
>>> Signed-off-by: Shenming Lu <lushenming@xxxxxxxxxx>
>>> ---
>>>  arch/arm64/kvm/vgic/vgic-v4.c | 12 ++++++++++++
>>>  1 file changed, 12 insertions(+)
>>>
>>> diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
>>> index b5fa73c9fd35..cc3ab9cea182 100644
>>> --- a/arch/arm64/kvm/vgic/vgic-v4.c
>>> +++ b/arch/arm64/kvm/vgic/vgic-v4.c
>>> @@ -418,6 +418,18 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
>>>  	irq->host_irq = virq;
>>>  	atomic_inc(&map.vpe->vlpi_count);
>>> +	/* Transfer pending state */
>>> +	ret = irq_set_irqchip_state(irq->host_irq,
>>> +				    IRQCHIP_STATE_PENDING,
>>> +				    irq->pending_latch);
>>> +	WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);
>>> +
>>> +	/*
>>> +	 * Let it be pruned from ap_list later and don't bother
>>> +	 * the List Register.
>>> +	 */
>>> +	irq->pending_latch = false;
>> It occurs to me that calling into irq_set_irqchip_state() for a large
>> number of interrupts can take a significant amount of time. It is also
>> odd that you dump the VPT with the VPE unmapped, but rely on the VPE
>> being mapped for the opposite operation.
>>
>> Shouldn't these be symmetric, all performed while the VPE is unmapped?
>> It would also save a lot of ITS traffic.
> My thought was to use the existing interface directly, without
> unmapping...
>
> If you want to unmap the vPE and poke the VPT here, as I said in the
> cover letter, set/unset_forwarding might also be called while all
> devices are running at normal run time, in which case unmapping the
> vPE is not allowed...
No, I'm suggesting that you don't do anything here, and instead let the
pending state be transferred as a by-product of restoring the ITS
tables. What goes wrong if you use the KVM_DEV_ARM_ITS_RESTORE_TABLE
backend instead?
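
For reference, that backend is driven from userspace through the ITS
device's control group (spelled KVM_DEV_ARM_ITS_RESTORE_TABLES in the
uapi). A minimal sketch, assuming its_fd is the fd of the
KVM_DEV_TYPE_ARM_VGIC_ITS device the VMM created, error handling
elided:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Ask KVM to restore the ITS tables for the ITS device behind its_fd. */
static int its_restore_tables(int its_fd)
{
        struct kvm_device_attr attr = {
                .group = KVM_DEV_ARM_VGIC_GRP_CTRL,
                .attr  = KVM_DEV_ARM_ITS_RESTORE_TABLES,
        };

        return ioctl(its_fd, KVM_SET_DEVICE_ATTR, &attr);
}

Anything GICv4-specific that has to happen at that point can then stay
entirely inside the kernel.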
> Another possible solution is to add a new dedicated interface for QEMU
> to transfer these pending states to HW, in the GIC VM state change
> handler corresponding to save_pending_tables?
Userspace has no way to know we use GICv4, and I intend to keep it
completely out of the loop. The API is already pretty tortuous, and
I really don't want to add any extra complexity to it.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm