On Thu, Jun 15, 2023, Robert Hoo wrote:
> On 6/3/2023 12:19 AM, Anish Moorthy wrote:
> > Implement KVM_CAP_MEMORY_FAULT_INFO for efaults generated by
> > kvm_handle_error_pfn().
> >
> > Signed-off-by: Anish Moorthy <amoorthy@xxxxxxxxxx>
> > ---
> >   arch/x86/kvm/mmu/mmu.c | 13 +++++++++++++
> >   1 file changed, 13 insertions(+)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index c8961f45e3b1..cb71aae9aaec 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -3291,6 +3291,10 @@ static void kvm_send_hwpoison_signal(struct kvm_memory_slot *slot, gfn_t gfn)
> >   static int kvm_handle_error_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >   {
> > +	uint64_t rounded_gfn;
> > +	uint64_t fault_size;
> > +	uint64_t fault_flags;
> > +
> >   	if (is_sigpending_pfn(fault->pfn)) {
> >   		kvm_handle_signal_exit(vcpu);
> >   		return -EINTR;
> > @@ -3309,6 +3313,15 @@ static int kvm_handle_error_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fa
> >   		return RET_PF_RETRY;
> >   	}
> >
> > +	fault_size = KVM_HPAGE_SIZE(fault->goal_level);
>
> IIUC, here fault->goal_level is always PG_LEVEL_4K.
> goal_level could be adjusted in later kvm_tdp_mmu_map() -->
> kvm_mmu_hugepage_adjust(), if kvm_faultin_pfn() doesn't fail, that is to
> say, code path doesn't go through here.
>
> I wonder, if you would like put (kind of) kvm_mmu_hugepage_adjust() here as
> well, reporting to user space the maximum map size it could do with, OR,
> just report 4K size, let user space itself to detect/decide max possible
> size (but now I've no idea how to).

No, that's nonsensical because KVM uses the host mapping to compute the max
mapping level.  If there's no valid mapping, then there's no defined level.

And as I said in my reply, KVM should never kick out to userspace if KVM can
establish a 4KiB mapping, i.e. 4KiB is always the effective scope, and
reporting anything else would just be wild speculation.
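
To make the size/rounding arithmetic concrete, here is a minimal standalone
sketch (userspace C, not the actual patch).  The PG_LEVEL_* and KVM_HPAGE_*
macros mirror the x86 KVM definitions; the gfn value and everything else is
hypothetical scaffolding for illustration only.

	/*
	 * Standalone illustration of the granularity being reported when
	 * kvm_handle_error_pfn() fires: goal_level has not been adjusted
	 * upward yet (no host mapping exists), so it is still PG_LEVEL_4K
	 * and the reported scope is a single 4KiB page.
	 */
	#include <stdint.h>
	#include <stdio.h>

	#define PAGE_SHIFT		12
	#define PAGE_SIZE		(1ULL << PAGE_SHIFT)
	#define PG_LEVEL_4K		1
	#define PG_LEVEL_2M		2
	#define KVM_HPAGE_GFN_SHIFT(x)	(((x) - PG_LEVEL_4K) * 9)
	#define KVM_HPAGE_SHIFT(x)	(PAGE_SHIFT + KVM_HPAGE_GFN_SHIFT(x))
	#define KVM_HPAGE_SIZE(x)	(1ULL << KVM_HPAGE_SHIFT(x))

	int main(void)
	{
		uint64_t gfn = 0x12345;		/* arbitrary faulting gfn */
		int level = PG_LEVEL_4K;	/* effective scope per above */

		uint64_t fault_size  = KVM_HPAGE_SIZE(level);
		/* Round the gfn down to the reported granularity. */
		uint64_t rounded_gfn = gfn & ~(KVM_HPAGE_SIZE(level) / PAGE_SIZE - 1);

		/* At 4K level this prints size = 4096 and gpa = gfn << 12. */
		printf("size = %llu bytes, gpa = 0x%llx\n",
		       (unsigned long long)fault_size,
		       (unsigned long long)(rounded_gfn << PAGE_SHIFT));
		return 0;
	}

At PG_LEVEL_4K the rounding is a no-op (the divisor is one page), which is
why reporting anything larger would require the hugepage adjustment that, by
construction, never ran on this error path.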