Implement KVM_CAP_MEMORY_FAULT_INFO for efaults generated by
direct_map(). Since direct_map() traverses multiple levels of the
shadow page table, there are actually two defensible guest physical
address ranges which could be provided.

1. A smaller, more specific range, which potentially corresponds to
   only a part of what could not be mapped.

   start = gfn_round_for_level(fault->gfn, fault->goal_level)
   length = KVM_PAGES_PER_HPAGE(fault->goal_level)

2. The entire range which could not be mapped.

   start = gfn_round_for_level(fault->gfn, fault->req_level)
   length = KVM_PAGES_PER_HPAGE(fault->req_level)

Take the first approach, although it's possible the second is
actually preferable.

Signed-off-by: Anish Moorthy <amoorthy@xxxxxxxxxx>
---
 arch/x86/kvm/mmu/mmu.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 937329bee654e..a965c048edde8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3192,8 +3192,13 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 				     fault->req_level >= it.level);
 	}
 
-	if (WARN_ON_ONCE(it.level != fault->goal_level))
+	if (WARN_ON_ONCE(it.level != fault->goal_level)) {
+		gfn_t rounded_gfn = gfn_round_for_level(fault->gfn, fault->goal_level);
+		uint64_t len = KVM_PAGES_PER_HPAGE(fault->goal_level);
+
+		kvm_populate_efault_info(vcpu, rounded_gfn, len);
 		return -EFAULT;
+	}
 
 	ret = mmu_set_spte(vcpu, fault->slot, it.sptep, ACC_ALL,
 			   base_gfn, fault->pfn, fault);
-- 
2.40.0.577.gac1e443424-goog
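
To make the two candidate ranges concrete, below is a small standalone
userspace sketch of the level arithmetic. It reimplements
gfn_round_for_level() and KVM_PAGES_PER_HPAGE() for illustration only
(the authoritative definitions live in arch/x86/kvm/mmu/mmu_internal.h
and arch/x86/include/asm/kvm_host.h), and the sample gfn and levels are
made up; it is not part of the patch.

	#include <stdio.h>
	#include <stdint.h>

	typedef uint64_t gfn_t;

	/*
	 * Illustrative copies of KVM's helpers: each x86 page-table
	 * level covers 9 more GFN bits than the level below it.
	 */
	#define KVM_HPAGE_GFN_SHIFT(level)	(((level) - 1) * 9)
	#define KVM_PAGES_PER_HPAGE(level)	(1UL << KVM_HPAGE_GFN_SHIFT(level))

	static gfn_t gfn_round_for_level(gfn_t gfn, int level)
	{
		/* Align the gfn down to the level's page-size boundary. */
		return gfn & -(gfn_t)KVM_PAGES_PER_HPAGE(level);
	}

	int main(void)
	{
		gfn_t gfn = 0x12345;	/* made-up faulting gfn */
		int goal_level = 1;	/* PG_LEVEL_4K: what the patch reports */
		int req_level = 2;	/* PG_LEVEL_2M: the wider, option-2 range */

		/* Option 1: start=0x12345, length=1 page. */
		printf("option 1: start=0x%llx, length=%lu pages\n",
		       (unsigned long long)gfn_round_for_level(gfn, goal_level),
		       KVM_PAGES_PER_HPAGE(goal_level));

		/* Option 2: start=0x12200, length=512 pages. */
		printf("option 2: start=0x%llx, length=%lu pages\n",
		       (unsigned long long)gfn_round_for_level(gfn, req_level),
		       KVM_PAGES_PER_HPAGE(req_level));
		return 0;
	}

As the output shows, the option-1 range is a strict subset of the
option-2 range whenever goal_level < req_level, which is the trade-off
the commit message describes.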