On 09/07/20 23:12, Sean Christopherson wrote:
>> It's bad that we have no clue what's causing the bad behavior, but I
>> don't think it's wise to have a bug that is known to happen when you
>> enable the capability. :/

(Note that this wasn't a NACK, though subtly so.)

> I don't necessarily disagree, but at the same time it's entirely possible
> it's a Qemu bug.

No, it cannot be.  QEMU is not doing anything but KVM_SET_USER_MEMORY_REGION,
and it's doing that synchronously with writes to the PCI configuration
space BARs.

> Even if this is a kernel bug, I'm fairly confident at this point that it's
> not a KVM bug.  Or rather, if it's a KVM "bug", then there's a fundamental
> dependency in memslot management that needs to be rooted out and documented.

Heh, here my surmise is that it cannot be anything but a KVM bug, because
memslots are not used by anything outside KVM...  But maybe I'm missing
something.

> And we're kind of in a catch-22; it'll be extremely difficult to narrow down
> exactly who is breaking what without being able to easily test the optimized
> zapping with other VMMs and/or setups.

I agree with this, and we could have a config symbol that depends on
BROKEN and enables it unconditionally.  However, a capability is the
wrong tool.

Paolo
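
[Editorial note, not part of the thread: below is a minimal sketch of the
memslot update sequence Paolo describes, i.e. a VMM issuing only a
KVM_SET_USER_MEMORY_REGION ioctl, synchronously from its PCI config-space
write handler.  This is not QEMU's actual code; the function name,
"vm_fd", "slot" and the address parameters are hypothetical and shown
only to illustrate the ioctl being discussed.]

    /*
     * Illustration only: when the guest toggles a BAR's memory-enable
     * bit, the VMM's only memslot manipulation is one
     * KVM_SET_USER_MEMORY_REGION call.  memory_size == 0 deletes the
     * slot; a non-zero size (re)creates it at the new guest address.
     */
    #include <linux/kvm.h>
    #include <sys/ioctl.h>
    #include <stdint.h>

    static int set_bar_memslot(int vm_fd, uint32_t slot, uint64_t gpa,
                               uint64_t size, void *hva, int enabled)
    {
            struct kvm_userspace_memory_region region = {
                    .slot            = slot,
                    .flags           = 0,
                    .guest_phys_addr = gpa,
                    .memory_size     = enabled ? size : 0,
                    .userspace_addr  = (uint64_t)(uintptr_t)hva,
            };

            /* Issued synchronously from the config-space write path. */
            return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
    }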