On Wed, 2024-04-10 at 15:07 -0700, isaku.yamahata@xxxxxxxxx wrote:
> From: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
>
> Wire KVM_MAP_MEMORY ioctl to kvm_mmu_map_tdp_page() to populate guest
> memory. When KVM_CREATE_VCPU creates vCPU, it initializes the x86
> KVM MMU part by kvm_mmu_create() and kvm_init_mmu(). vCPU is ready to
> invoke the KVM page fault handler.
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> ---
> v2:
> - Catch up the change of struct kvm_memory_mapping. (Sean)
> - Removed mapping level check. Push it down into vendor code. (David, Sean)
> - Rename goal_level to level. (Sean)
> - Drop kvm_arch_pre_vcpu_map_memory(), directly call kvm_mmu_reload().
>   (David, Sean)
> - Fixed the update of mapping.
> ---
>  arch/x86/kvm/x86.c | 30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 2d2619d3eee4..2c765de3531e 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4713,6 +4713,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
>  	case KVM_CAP_IRQFD_RESAMPLE:
>  	case KVM_CAP_MEMORY_FAULT_INFO:
> +	case KVM_CAP_MAP_MEMORY:
>  		r = 1;
>  		break;

Should we add this after all of the pieces are in place?

>  	case KVM_CAP_EXIT_HYPERCALL:
> @@ -5867,6 +5868,35 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
>  	}
>  }
>
> +int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
> +			     struct kvm_memory_mapping *mapping)
> +{
> +	u64 end, error_code = 0;
> +	u8 level = PG_LEVEL_4K;
> +	int r;
> +
> +	/*
> +	 * Shadow paging uses GVA for kvm page fault. The first implementation
> +	 * supports GPA only to avoid confusion.
> +	 */
> +	if (!tdp_enabled)
> +		return -EOPNOTSUPP;

It's not confusion, it's that you can't pre-map GPAs for legacy shadow
paging. Or are you saying why not to support pre-mapping GVAs? I think
that discussion belongs more in the commit log. The code should just say
that it's not possible to pre-map GPAs in shadow paging.

> +
> +	/* reload is optimized for repeated call. */
> +	kvm_mmu_reload(vcpu);
> +
> +	r = kvm_tdp_map_page(vcpu, mapping->base_address, error_code, &level);
> +	if (r)
> +		return r;
> +
> +	/* mapping->base_address is not necessarily aligned to level-hugepage. */
> +	end = (mapping->base_address & KVM_HPAGE_MASK(level)) +
> +	      KVM_HPAGE_SIZE(level);
> +	mapping->size -= end - mapping->base_address;
> +	mapping->base_address = end;
> +	return r;
> +}
> +
>  long kvm_arch_vcpu_ioctl(struct file *filp,
>  			 unsigned int ioctl, unsigned long arg)
>  {
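
To make the update arithmetic at the end of kvm_arch_vcpu_map_memory()
concrete, here is a worked example with illustrative numbers, assuming the
stock KVM_HPAGE_SIZE()/KVM_HPAGE_MASK() definitions, where
KVM_HPAGE_MASK(x) == ~(KVM_HPAGE_SIZE(x) - 1):

	/*
	 * Suppose kvm_tdp_map_page() mapped at 2M level and the caller
	 * passed an unaligned start (values are illustrative):
	 *
	 *   mapping->base_address = 0x201000	(not 2M-aligned)
	 *   KVM_HPAGE_SIZE(PG_LEVEL_2M) = 0x200000
	 *   KVM_HPAGE_MASK(PG_LEVEL_2M) = ~0x1fffff
	 *
	 *   end = (0x201000 & ~0x1fffff) + 0x200000
	 *       = 0x200000 + 0x200000
	 *       = 0x400000
	 *
	 * base_address jumps to the end of the 2M page covering the input
	 * GPA, and size shrinks by end - 0x201000 = 0x1ff000, i.e. by less
	 * than a full 2M step because the start was unaligned.
	 */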
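
And for reference, a minimal userspace sketch of driving the ioctl until a
whole range is populated. The base_address/size fields and the
partial-progress update come from the patch; map_guest_range() is a
hypothetical helper, and the availability of KVM_MAP_MEMORY and struct
kvm_memory_mapping via <linux/kvm.h> is an assumption about this series'
uAPI:

	#include <errno.h>
	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>	/* KVM_MAP_MEMORY + struct kvm_memory_mapping,
				   assumed to be defined by this series */

	static int map_guest_range(int vcpu_fd, uint64_t gpa, uint64_t len)
	{
		struct kvm_memory_mapping mapping = {
			.base_address = gpa,
			.size = len,
		};

		/* The kernel advances base_address and shrinks size per call. */
		while (mapping.size) {
			if (ioctl(vcpu_fd, KVM_MAP_MEMORY, &mapping) < 0) {
				if (errno == EINTR || errno == EAGAIN)
					continue;	/* progress is kept; retry */
				return -errno;
			}
		}
		return 0;
	}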