On Tue, 22 Jun 2021 17:03:22 +0100,
Alexandru Elisei <alexandru.elisei@xxxxxxx> wrote:
> 
> Hi Marc,
> 
> On 6/22/21 4:51 PM, Marc Zyngier wrote:
> > Hi Alex,
> >
> > On Tue, 22 Jun 2021 16:39:11 +0100,
> > Alexandru Elisei <alexandru.elisei@xxxxxxx> wrote:
> >> Hi Marc,
> >>
> >> On 6/1/21 11:39 AM, Marc Zyngier wrote:
> >>> This is a new version of the series previously posted at [3], reworking
> >>> the vGIC and timer code to cope with the M1 braindead^Wamusing nature.
> >>>
> >>> Hardly any change this time around, mostly rebased on top of upstream
> >>> now that the dependencies have made it in.
> >>>
> >>> Tested with multiple concurrent VMs running from an initramfs.
> >>>
> >>> Until someone shouts loudly now, I'll take this into 5.14 (and in
> >>> -next from tomorrow).
> >> I am not familiar with irqdomains or with the irqchip
> >> infrastructure, so I can't really comment on patch #8.
> >>
> >> I tried testing this with a GICv3 by modifying the driver to set
> >> no_hw_deactivation and no_maint_irq_mask:
> >>
> >> diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
> >> index 340c51d87677..d0c6f808d7f4 100644
> >> --- a/arch/arm64/kvm/vgic/vgic-init.c
> >> +++ b/arch/arm64/kvm/vgic/vgic-init.c
> >> @@ -565,8 +565,10 @@ int kvm_vgic_hyp_init(void)
> >>  	if (ret)
> >>  		return ret;
> >>  
> >> +	/*
> >>  	if (!has_mask)
> >>  		return 0;
> >> +	*/
> >>  
> >>  	ret = request_percpu_irq(kvm_vgic_global_state.maint_irq,
> >>  				 vgic_maintenance_handler,
> >> diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> >> index 453fc425eede..9ce4dee20655 100644
> >> --- a/drivers/irqchip/irq-gic-v3.c
> >> +++ b/drivers/irqchip/irq-gic-v3.c
> >> @@ -1850,6 +1850,12 @@ static void __init gic_of_setup_kvm_info(struct device_node *node)
> >>  	if (!ret)
> >>  		gic_v3_kvm_info.vcpu = r;
> >>  
> >> +	gic_v3_kvm_info.no_hw_deactivation = true;
> >
> > Blink...
> >
> >> +	gic_v3_kvm_info.no_maint_irq_mask = true;
> >> +
> >> +	vgic_set_kvm_info(&gic_v3_kvm_info);
> >> +	return;
> >> +
> >>  	gic_v3_kvm_info.has_v4 = gic_data.rdists.has_vlpis;
> >>  	gic_v3_kvm_info.has_v4_1 = gic_data.rdists.has_rvpeid;
> >>  	vgic_set_kvm_info(&gic_v3_kvm_info);
> >>
> >> Kept the maintenance irq ID so the IRQ gets enabled at the
> >> Redistributor level. I don't know if I managed to break something
> >> with those changes, but when testing on the model and on a rockpro64
> >> (with the patches cherry-picked on top of v5.13-rc7) I kept seeing
> >> rcu stalls. I assume I did something wrong.
> >
> > If you do that, the interrupts that are forwarded to the guest
> > (timers) will never be deactivated, and will be left dangling after
> > the first injection. This is bound to create havoc, as we will then
> > use mask/unmask to control the timer delivery (remember that the
> > Active state is just another form of auto-masking on top of the
> > standard enable bit).
> >
> > On the contrary, the AIC only has a single bit to control the timer
> > (used as a mask), which is what the irqdomain stuff implements to
> > mimic the active state.
> 
> So these patches work **only** with the AIC, not with a standard
> GICv3 without the HW bit in the LR registers and with an unmaskable
> maintenance IRQ? Because from the commit message of #8 I got the
> impression that the purpose of the change is to make timers work on
> a standard GICv3, sans those required architectural features.

I don't understand what you mean. The HW bit in the LR and
deactivation *are* required, non-negotiable parts of the GICv3
architecture. That Apple did not implement them is a consequence of
the AIC not having an active state that the guest can manipulate
independently of the host.

Either you have both the HW bit and the active state, and both work
together (normal GICv3), or you have none of that and we rely on the
maintenance interrupt to exit and fix the mess (Apple crap).
You cannot have an intermediate state between the two.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.