On Wed, Jan 30, 2019 at 08:01:22AM +0100, Cédric Le Goater wrote:
> On 1/30/19 5:29 AM, Paul Mackerras wrote:
> > On Mon, Jan 28, 2019 at 06:35:34PM +0100, Cédric Le Goater wrote:
> >> On 1/22/19 6:05 AM, Paul Mackerras wrote:
> >>> On Mon, Jan 07, 2019 at 07:43:17PM +0100, Cédric Le Goater wrote:
> >>>> This is the basic framework for the new KVM device supporting the XIVE
> >>>> native exploitation mode. The user interface exposes a new capability
> >>>> and a new KVM device to be used by QEMU.
> >>>
> >>> [snip]
> >>>> @@ -1039,7 +1039,10 @@ static int kvmppc_book3s_init(void)
> >>>>  #ifdef CONFIG_KVM_XIVE
> >>>>  	if (xive_enabled()) {
> >>>>  		kvmppc_xive_init_module();
> >>>> +		kvmppc_xive_native_init_module();
> >>>>  		kvm_register_device_ops(&kvm_xive_ops, KVM_DEV_TYPE_XICS);
> >>>> +		kvm_register_device_ops(&kvm_xive_native_ops,
> >>>> +					KVM_DEV_TYPE_XIVE);
> >>>
> >>> I think we want tighter conditions on initializing the xive_native
> >>> stuff and creating the xive device class. We could have
> >>> xive_enabled() returning true in a guest, and this code will get
> >>> called both by PR KVM and HV KVM (and HV KVM no longer implies that we
> >>> are running bare metal).
> >>
> >> So yes, I gave nested KVM a try with kernel_irqchip=on, and the nested
> >> hypervisor (L1) obviously crashes trying to call OPAL. I have tightened
> >> the test to:
> >>
> >> 	if (xive_enabled() && !kvmhv_on_pseries()) {
> >>
> >> for now.
> >>
> >> As this is a problem today in 5.0.x, I will send a patch for it if you think
> >
> > How do you mean this is a problem today in 5.0? I just tried 5.0-rc1
> > with kernel_irqchip=on in a nested guest and it works just fine. What
> > exactly did you test?
>
> L0: Linux 5.0.0-rc3 (+ KVM HV)
> L1: QEMU pseries-4.0 (kernel_irqchip=on) - Linux 5.0.0-rc3 (+ KVM HV)
> L2: QEMU pseries-4.0 (kernel_irqchip=on) - Linux 5.0.0-rc3
>
> L1 crashes when L2 starts and tries to initialize the KVM IRQ device,
> as it does an OPAL call while it is running under SLOF. See below.

OK, you must have a QEMU that advertises XIVE to the guest (L1). In
that case I can see that L1 would try to do XICS-on-XIVE, which won't
work. We need to fix that. Unfortunately the XICS-on-XICS emulation
won't work as-is in L1 either, but I think we can fix that by
disabling the real-mode XICS hcall handling.

> I don't understand how L2 can work with kernel_irqchip=on. Could you
> please explain?

If QEMU decides to advertise XIVE to the L2 guest and the L2 guest can
do XIVE, then the only possibility is to use the XIVE software
emulation in QEMU; and if kernel_irqchip=on has been specified
explicitly, QEMU may decide to terminate the guest rather than
implicitly turning kernel_irqchip off.

If QEMU decides not to advertise XIVE to the L2 guest, or the L2 guest
can't do XIVE, then we could use the XICS-on-XICS emulation in L1 as
long as either (a) L1 is not using XIVE, or (b) we modify the
XICS-on-XICS code to avoid any XICS or XIVE access (i.e. to use only
generic kernel facilities).

Ultimately, if the spapr xive backend code in the kernel could be
extended to provide all the low-level functions that the XICS-on-XIVE
code needs, then we could do XICS-on-XIVE in a guest.

Paul.
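
For concreteness, the tightened init path discussed above would look
roughly like this in kvmppc_book3s_init(). This is only a sketch of
the idea, not the final patch; it assumes kvmhv_on_pseries() returns
true when the HV KVM host is itself running under a hypervisor (where
OPAL calls are unavailable):

	#ifdef CONFIG_KVM_XIVE
		/*
		 * Register the XIVE device classes only where OPAL is
		 * reachable.  xive_enabled() can also be true inside a
		 * guest, and this init path runs for both PR and HV KVM,
		 * so neither condition by itself implies bare metal.
		 */
		if (xive_enabled() && !kvmhv_on_pseries()) {
			kvmppc_xive_init_module();
			kvmppc_xive_native_init_module();
			kvm_register_device_ops(&kvm_xive_ops,
						KVM_DEV_TYPE_XICS);
			kvm_register_device_ops(&kvm_xive_native_ops,
						KVM_DEV_TYPE_XIVE);
		}
	#endif

Note that this only stops a nested L1 from making OPAL calls at module
init; it does not by itself give L1 a working in-kernel irqchip, which
is the XICS-on-XICS question discussed above.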