https://bugzilla.kernel.org/show_bug.cgi?id=216212

--- Comment #1 from Sean Christopherson (seanjc@xxxxxxxxxx) ---

On Thu, Jul 07, 2022, bugzilla-daemon@xxxxxxxxxx wrote:
> Likely stack trace and cause of this bug (Linux source code version is
> 5.18.9):
>
> Stack trace:
>
>   handle_cr
>   kvm_set_cr0
>   load_pdptrs
>   kvm_translate_gpa

Yeah, load_pdptrs() needs to call kvm_inject_emulated_page_fault() to inject a
TDP fault if translating the L2 GPA to an L1 GPA fails.  That part is easy to
fix, but communicating up the stack that the instruction has already faulted
is going to be painful due to the use of kvm_complete_insn_gp().  Ugh, and the
emulator gets involved too.  Not that it makes things worse than they already
are, but I'm pretty sure MOV CR3 (via the emulator) and MOV CR4 are also
affected.

I suspect the least awful solution will be to use proper error codes instead
of 0/1 so that kvm_complete_insn_gp() and friends can differentiate between
"success", "injected #GP", and "already exploded", but it's still going to
require a lot of churn.

A more drastic, but maybe less painful (though as I type this out, it's
becoming ridiculously painful) alternative would be to not intercept CR0/CR4
paging bits when running L2 with TDP enabled, which would in theory allow KVM
to drop the call to kvm_translate_gpa().  load_pdptrs() would still be
reachable via the emulator, but I think only if the guest is playing TLB
games, so KVM could probably just resume the guest in that case.

The primary reason KVM intercepts CR0/CR4 paging bits even when using TDP is
so that KVM doesn't need to refresh state to do software gva->gpa walks, e.g.
to correctly emulate memory accesses and reserved PTE bits.  The argument for
intercepting is that changing paging modes is a rare guest operation, whereas
emulating some form of memory access is relatively common.  And it's also
simpler in the sense that KVM can use common code for TDP and !TDP (shadow
paging depends heavily on caching paging state).  But emulating on behalf of
L2 is quite rare, and having to deal with this bug counters the "it's simpler"
argument to some extent.

I _think_ ensuring the nested MMU is properly initialized could be solved by
adding a nested_gva_to_gpa() wrapper instead of directly wiring
mmu->gva_to_gpa() to the correct helper.  The messier part would be handling
intercepts.  VMX would have to adjust vmcs02.CRx_READ_SHADOW and resume the
guest to deal with incidental interception, e.g. if the guest toggles both
CR0.CD and CR0.PG.  SVM is all or nothing for intercepts, but PAE under NPT
isn't required to load PDPTRs at MOV CR, so we could just drop that entire
path for SVM+NPT.  But that would rely on KVM correctly handling L1 NPT faults
during PDPTE accesses on behalf of L2, which of course KVM doesn't get right.

So yeah, maybe KVM can avoid some of the PAE pain in the long term if KVM
stops intercepting CR0/CR4 paging bits, but it's probably a bad idea for an
immediate fix.
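
For concreteness, here's a rough sketch of the "proper error codes" idea.  This
is not actual KVM code: the helper translate_pdpt_gpa(), the failure sentinel,
and the PDPTR_FAULT_INJECTED status value are all made up for illustration, and
the real churn would be in teaching every kvm_complete_insn_gp() caller (VMX,
SVM, and the emulator paths) about the extra status.

	/*
	 * Hypothetical sketch only: have load_pdptrs() inject the nested (TDP)
	 * fault itself when the L2->L1 GPA translation of the PDPT fails, and
	 * return a distinct status so callers can tell apart "success",
	 * "inject #GP", and "exception already injected".
	 */
	#define PDPTR_FAULT_INJECTED	2	/* hypothetical status value */

	static int load_pdptrs_sketch(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
				      unsigned long cr3)
	{
		struct x86_exception fault;
		gpa_t pdpt_gpa;

		/*
		 * Translate the L2 GPA of the PDPT to an L1 GPA; the helper
		 * name and its failure convention are illustrative.
		 */
		pdpt_gpa = translate_pdpt_gpa(vcpu, mmu, cr3, &fault);
		if (pdpt_gpa == PDPT_GPA_INVALID) {
			/* Inject the TDP fault instead of silently dropping it. */
			kvm_inject_emulated_page_fault(vcpu, &fault);
			return PDPTR_FAULT_INJECTED;
		}

		/* ... read and validate the PDPTEs as today ... */
		return 0;
	}

	/*
	 * Callers that currently feed 0/1 into kvm_complete_insn_gp() would
	 * need to recognize the new status and simply resume the guest, since
	 * the exception has already been queued.
	 */
	static int complete_cr_write_sketch(struct kvm_vcpu *vcpu, int err)
	{
		if (err == PDPTR_FAULT_INJECTED)
			return 1;	/* fault already injected, resume the guest */

		return kvm_complete_insn_gp(vcpu, err);	/* 0 = success, else #GP */
	}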