On Wed, Dec 14, 2022, Maxim Levitsky wrote:
> Hi!
>
> Recently I had to debug a case of KVM's hypercall patching failing in the
> special case of running qemu under valgrind.
>
> In a nutshell, what is happening is that qemu uses the 'cpuid' instruction
> to gather some info about the host, and some of it is passed through to
> the guest cpuid, including the vendor string.
>
> Under valgrind the CPU is emulated (a la TCG), so qemu sees a virtual cpu
> with a virtual cpuid that has the hardcoded vendor string 'GenuineIntel'.
> So when you run qemu with KVM on an AMD host, the guest will see Intel's
> vendor string regardless of other '-cpu' settings (even -cpu host).
>
> This means that the guest uses the wrong hypercall instruction (vmcall
> instead of vmmcall), and sometimes it will use it after the guest kernel
> has write-protected its memory. That leads to a failure of the hypercall
> patching, because KVM writes to guest memory as if the instruction itself
> had written to it, and that write is subject to the guest's paging
> permissions.
>
> So the VMCALL instruction gets a totally unexpected #PF.

Yep, been there, done that :-)

> 1. Now I suggest: when hypercall patching fails, can we do kvm_vm_bugged()
> instead of forwarding the hypercall? I know that vmmcall can be executed
> from ring 3 as well, so I can limit this to hypercall patching that
> happens when the guest is in ring 0. And L1.

But why?  It's not a KVM bug per se, it's a known deficiency in KVM's
emulator.  What to do in response to the failure should be up to userspace.

The real "fix" is to disable the quirk in QEMU (rough sketch of the
userspace side at the end of this mail).

> 2. Why can't we just emulate the VMCALL/VMMCALL instruction in this case
> instead of patching? Any technical reasons for not doing this? Few guests
> use it, so the perf impact should be very small.

Nested is basically impossible to get right[1][2].  IIRC, calling into
kvm_emulate_hypercall() from the emulator also gets messy (I think I tried
doing exactly this at some point).

[1] https://lore.kernel.org/all/Yjyt7tKSDhW66fnR@xxxxxxxxxx
[2] https://lore.kernel.org/all/YEZUhbBtNjWh0Zka@xxxxxxxxxx
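
To make the vendor-string mismatch above concrete: guests typically pick
VMCALL vs. VMMCALL based on the CPUID vendor (Linux does it with boot-time
alternatives keyed off X86_FEATURE_VMMCALL).  The snippet below is a
simplified, hypothetical stand-in for that guest-side logic, not the
kernel's actual code:

/*
 * Simplified, hypothetical stand-in for a guest choosing its hypercall
 * instruction from the CPUID vendor string; Linux really patches this at
 * boot via alternatives rather than checking at runtime.
 */
#include <cpuid.h>
#include <string.h>

static long hypercall0(unsigned long nr)
{
	unsigned int eax, ebx, ecx, edx;
	char vendor[13] = "";
	long ret;

	/* CPUID leaf 0: the vendor string comes back in EBX, EDX, ECX. */
	__get_cpuid(0, &eax, &ebx, &ecx, &edx);
	memcpy(vendor + 0, &ebx, 4);
	memcpy(vendor + 4, &edx, 4);
	memcpy(vendor + 8, &ecx, 4);

	/*
	 * If the vendor string lies (e.g. valgrind's emulated cpuid reports
	 * GenuineIntel on an AMD host), the guest ends up issuing VMCALL on
	 * AMD and relies on KVM's patching quirk to rewrite it to VMMCALL.
	 */
	if (!strcmp(vendor, "AuthenticAMD") || !strcmp(vendor, "HygonGenuine"))
		asm volatile("vmmcall" : "=a"(ret) : "a"(nr) : "memory");
	else
		asm volatile("vmcall" : "=a"(ret) : "a"(nr) : "memory");

	return ret;
}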
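
And for completeness, the userspace side of disabling the quirk, as a
minimal sketch rather than QEMU's actual code; it assumes a VM fd from
KVM_CREATE_VM and skips error handling plus the
KVM_CHECK_EXTENSION(KVM_CAP_DISABLE_QUIRKS2) probe:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Minimal sketch: opt the VM out of hypercall patching. */
static int disable_hypercall_patching(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_DISABLE_QUIRKS2,
		.args[0] = KVM_X86_QUIRK_FIX_HYPERCALL_INSN,
	};

	/*
	 * With the quirk disabled, KVM injects #UD instead of rewriting the
	 * guest's VMCALL/VMMCALL, so there is no emulated write to (possibly
	 * write-protected) guest memory in the first place.
	 */
	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}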