On Wed, Apr 3, 2024 at 12:58 AM Isaku Yamahata <isaku.yamahata@xxxxxxxxx> wrote:
> I think TDX can use it with slight change. Pass vcpu instead of KVM, page pin
> down and mmu_lock. TDX requires non-leaf Secure page tables to be populated
> before adding a leaf. Maybe with the assumption that vcpu doesn't run, GFN->PFN
> relation is stable so that mmu_lock isn't needed? What about punch hole?
>
> The flow would be something like as follows.
>
> - lock slots_lock
>
> - kvm_gmem_populate(vcpu)
>   - pin down source page instead of do_memcpy.

Both pinning the source page and the memcpy can be done in the
callback.  I think the right thing to do is:

1) eliminate do_memcpy, letting the AMD code take care of the
copy_from_user;

2) pass to the callback only gfn/pfn/src, where src is computed as

      args->src ? args->src + i * PAGE_SIZE : NULL

(a rough sketch of what I mean is at the end of this mail).

If another architecture/vendor needs do_memcpy, they can add
something like kvm_gmem_populate_copy.

>   - get pfn with __kvm_gmem_get_pfn()
>   - read lock mmu_lock
>   - in the post_populate callback
>     - lookup tdp mmu page table to check if the table is populated.
>       lookup only version of kvm_tdp_mmu_map().
>       We need vcpu instead of kvm.

Passing vcpu can be done using the opaque callback argument to
kvm_gmem_populate.

Likewise, the mmu_lock can be taken by the TDX post_populate
callback; see the second sketch at the end of this mail.

Paolo

>     - TDH_MEM_PAGE_ADD
>   - read unlock mmu_lock
>
> - unlock slots_lock
>
> Thanks,
>
> > With that model, the potential for using kvm_gmem_populate() seemed
> > plausible so I was trying to make it immediately usable for that
> > purpose. But maybe the TDX folks can confirm whether this would be
> > usable for them or not. (kvm_gmem_populate was introduced here[2] for
> > reference/background)
> >
> > -Mike
> >
> > [1] https://lore.kernel.org/kvm/20240319155349.GE1645738@xxxxxxxxxxxxxxxxxxxxx/T/#m8580d8e39476be565534d6ff5f5afa295fe8d4f7
> > [2] https://lore.kernel.org/kvm/20240329212444.395559-3-michael.roth@xxxxxxx/T/#m3aeba660fcc991602820d3703b1265722b871025
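To make (2) concrete, here is a rough sketch of the loop I have in
mind.  Untested, and the callback type and argument names are only my
guess at the eventual signature; gmem_get_pfn() is a placeholder for
the __kvm_gmem_get_pfn() call, whose exact arguments don't matter
here:

    /* Called once per page; src is NULL when there is nothing to copy. */
    typedef int (*kvm_gmem_populate_cb)(struct kvm *kvm, gfn_t gfn,
                                        kvm_pfn_t pfn, void __user *src,
                                        void *opaque);

    long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn,
                           void __user *src, long npages,
                           kvm_gmem_populate_cb post_populate,
                           void *opaque)
    {
            long i;
            int ret = 0;

            for (i = 0; i < npages; i++) {
                    gfn_t gfn = start_gfn + i;
                    kvm_pfn_t pfn;

                    /* Placeholder for the __kvm_gmem_get_pfn() lookup. */
                    ret = gmem_get_pfn(gfn, &pfn);
                    if (ret)
                            break;

                    /*
                     * No do_memcpy: the callback gets the per-page
                     * source pointer and copies (or pins) it itself.
                     */
                    ret = post_populate(kvm, gfn, pfn,
                                        src ? src + i * PAGE_SIZE : NULL,
                                        opaque);
                    if (ret)
                            break;
            }

            /* Report the error only if no page was processed at all. */
            return ret && !i ? ret : i;
    }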
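On the TDX side, the callback could then look along these lines.
Again a sketch, not real code: the struct and function names are made
up, and the S-EPT walk plus TDH.MEM.PAGE.ADD are elided because only
the argument passing and the locking shape matter here:

    struct tdx_gmem_post_populate_arg {
            struct kvm_vcpu *vcpu;
    };

    static int tdx_gmem_post_populate(struct kvm *kvm, gfn_t gfn,
                                      kvm_pfn_t pfn, void __user *src,
                                      void *opaque)
    {
            struct tdx_gmem_post_populate_arg *arg = opaque;
            struct page *src_page;
            int ret;

            /*
             * Pin the source page here instead of having a generic
             * do_memcpy; TDH.MEM.PAGE.ADD consumes it directly.
             */
            ret = get_user_pages_fast((unsigned long)src, 1, 0, &src_page);
            if (ret < 0)
                    return ret;
            if (ret != 1)
                    return -ENOMEM;

            read_lock(&kvm->mmu_lock);

            /*
             * Elided: using arg->vcpu, check that the non-leaf S-EPT
             * pages are already populated (a lookup-only variant of
             * kvm_tdp_mmu_map()), then issue TDH.MEM.PAGE.ADD.
             */
            ret = 0;

            read_unlock(&kvm->mmu_lock);

            put_page(src_page);
            return ret;
    }

and the caller, with slots_lock held, would do something like

    struct tdx_gmem_post_populate_arg arg = { .vcpu = vcpu };

    ret = kvm_gmem_populate(vcpu->kvm, gfn, src, npages,
                            tdx_gmem_post_populate, &arg);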