On Thu, 2025-01-16 at 14:37 -0800, Sean Christopherson wrote:
> On Thu, Jan 16, 2025, Kai Huang wrote:
> > On Thu, 2025-01-16 at 06:50 -0800, Sean Christopherson wrote:
> > > On Thu, Jan 16, 2025, Kai Huang wrote:
> > ...
> 
> > > > Looking at the code, it seems KVM only traps EOI for level-triggered
> > > > interrupts for the in-kernel IOAPIC chip, but IIUC an IOAPIC in
> > > > userspace also needs to be told upon EOI for level-triggered
> > > > interrupts.  I don't know how KVM works with a userspace IOAPIC w/o
> > > > trapping EOI for level-triggered interrupts, but "force irqchip split
> > > > for TDX guest" seems not right.
> > > 
> > > Forcing a "split" IRQ chip is correct, in the sense that TDX doesn't
> > > support an I/O APIC and the "split" model is the way to concoct such a
> > > setup.  With a "full" IRQ chip, KVM is responsible for emulating the
> > > I/O APIC, which is more or less nonsensical on TDX because it's a fully
> > > virtual world, i.e. there's no reason to emulate legacy devices that
> > > only know how to talk to the I/O APIC (or PIC, etc.).  Disallowing an
> > > in-kernel I/O APIC is ideal from KVM's perspective, because
> > > level-triggered interrupts, and thus the I/O APIC as a whole, can't be
> > > faithfully emulated (see below).
> > 
> > Disabling the in-kernel IOAPIC/PIC for TDX guests is fine to me, but I
> > think that, "conceptually", having the IOAPIC/PIC in userspace doesn't
> > mean disabling the IOAPIC, because theoretically the userspace IOAPIC
> > still needs to be told about the EOI for emulation.  I just haven't
> > figured out how the userspace IOAPIC works with KVM in the case of a
> > "split IRQCHIP" w/o trapping EOI for level-triggered interrupts. :-)
> 
> A userspace I/O APIC _does_ intercept EOI.  KVM scans the GSI routes
> provided by userspace and intercepts those that are configured to be
> delivered as level-triggered interrupts.

Yeah, I see it now (I believe you mean kvm_scan_ioapic_routes()).  Thanks!
> Whereas with an in-kernel I/O APIC, KVM scans the GSI routes *and* the
> I/O APIC Redirection Table (for interrupts that are routed through the
> I/O APIC).

Right.  But neither of them works with TDX, because TDX doesn't support EOI
exits.  So, in the sense that we don't want KVM to support an in-kernel
IOAPIC for TDX, I agree we can force the IRQCHIP to be split.  But my point
is that this alone doesn't seem to resolve the problem. :-)

Btw, IIUC, in the case of a split IRQCHIP, KVM uses KVM_IRQ_ROUTING_MSI for
the GSI routes.  But it seems KVM only allows a level-triggered MSI to be
signaled (which is surprising):

  int kvm_set_msi(struct kvm_kernel_irq_routing_entry *e, struct kvm *kvm,
		  int irq_source_id, int level, bool line_status)
  {
	struct kvm_lapic_irq irq;

	if (kvm_msi_route_invalid(kvm, e))
		return -EINVAL;

	if (!level)
		return -1;

	kvm_set_msi_irq(kvm, e, &irq);

	return kvm_irq_delivery_to_apic(kvm, NULL, &irq, NULL);
  }

> > If the point is to disable the in-kernel IOAPIC/PIC for TDX guests, then
> > I think both KVM_IRQCHIP_NONE and KVM_IRQCHIP_SPLIT should be allowed
> > for TDX, not just KVM_IRQCHIP_SPLIT?
> 
> No, because APICv is mandatory for TDX, which rules out KVM_IRQCHIP_NONE.

Yeah, I missed this obvious thing.

> > > > I think the problem is level-triggered interrupts,
> > > 
> > > Yes, because the TDX Module doesn't allow the hypervisor to modify the
> > > EOI-bitmap, i.e. all EOIs are accelerated and never trigger exits.
> > > 
> > > > so I think another option is to reject level-triggered interrupts
> > > > for TDX guests.
> > > 
> > > This is a "don't do that, it will hurt" situation.  With a sane VMM,
> > > the level-ness of GSIs is controlled by the guest.  For GSIs that are
> > > routed through the I/O APIC, the level-ness is determined by the
> > > corresponding Redirection Table entry.
> > > For "GSIs" that are actually MSIs (KVM piggybacks legacy GSI routing
> > > to let userspace wire up MSIs), and for direct MSI injection
> > > (KVM_SIGNAL_MSI), the level-ness is dictated by the MSI itself, which
> > > again is guest controlled.
> > > 
> > > If the guest induces generation of a level-triggered interrupt, the
> > > VMM is left with the choice of dropping the interrupt, sending it
> > > as-is, or converting it to an edge-triggered interrupt.  Ditto for
> > > KVM.  All of those options will make the guest unhappy.
> > > 
> > > So while it _might_ make debugging broken guests easier, I don't think
> > > it's worth the complexity to try and prevent the VMM/guest from
> > > sending level-triggered GSI-routed interrupts.
> > 
> > KVM can at least have some chance to print some error message?
> 
> No.  A guest can shoot itself any number of ways, and userspace has every
> opportunity to log weirdness in this case.

Agreed.