Re: [PATCH v12 11/29] KVM: SEV: Add KVM_SEV_SNP_LAUNCH_UPDATE command

On Mon, Apr 01, 2024 at 05:22:29PM -0500,
Michael Roth <michael.roth@xxxxxxx> wrote:

> On Sat, Mar 30, 2024 at 09:31:40PM +0100, Paolo Bonzini wrote:
> > On 3/29/24 23:58, Michael Roth wrote:
> 
> Cc'ing some more TDX folks.
> 
> > > +	memslot = gfn_to_memslot(kvm, params.gfn_start);
> > > +	if (!kvm_slot_can_be_private(memslot)) {
> > > +		ret = -EINVAL;
> > > +		goto out;
> > > +	}
> > > +
> > 
> > This can be moved to kvm_gmem_populate.
> 
> That does seem nicer, but I hadn't really seen that pattern for
> kvm_gmem_get_pfn()/etc. so wasn't sure if that was by design or not. I
> suppose in those cases the memslot is already available at the main
> KVM page-fault call-sites so maybe it was just unnecessary to do the
> lookup internally there.
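
Just to check I'm reading that right: inside kvm_gmem_populate() it would
become roughly the below? (Only a sketch on my side; where exactly the check
sits relative to the rest of the setup is a guess.)

  long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, ...)
  {
          struct kvm_memory_slot *slot = gfn_to_memslot(kvm, start_gfn);

          /*
           * Reject slots that can't back private memory up front, so the
           * SNP/TDX call-sites don't each have to do the lookup.
           */
          if (!kvm_slot_can_be_private(slot))
                  return -EINVAL;
          ...
  }
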
> 
> > 
> > > +	populate_args.src = u64_to_user_ptr(params.uaddr);
> > 
> > This is not used if !do_memcpy, and in fact src is redundant with do_memcpy.
> > Overall the arguments can be "kvm, gfn, src, npages, post_populate, opaque"
> > which are relatively few and do not need the struct.
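
If it helps, my reading of that argument list is something like the prototype
below (I'm guessing at what the post_populate callback itself takes; the
typedef name is just a placeholder):

  typedef int (*kvm_gmem_populate_cb)(struct kvm *kvm, gfn_t gfn,
                                      kvm_pfn_t pfn, void __user *src,
                                      int order, void *opaque);

  long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn,
                         void __user *src, long npages,
                         kvm_gmem_populate_cb post_populate, void *opaque);

with src == NULL meaning "don't memcpy", so do_memcpy goes away.
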
> 
> This was actually a consideration for TDX that was discussed during the
> "Finalizing internal guest_memfd APIs for SNP/TDX" PUCK call. In that
> case, they have a TDH_MEM_PAGE_ADD seamcall that takes @src and encrypts
> it, loads it into the destination page, and then maps it into SecureEPT
> through a single call. So in that particular case, @src would be
> initialized, but the memcpy() would be unnecessary.
> 
> It's not actually clear TDX plans to use this interface. In v19 they still
> used a KVM MMU hook (set_private_spte) that gets triggered through a call
> to KVM_MAP_MEMORY->kvm_mmu_map_tdp_page() prior to starting the guest. But
> more recent discussion[1] suggests that KVM_MAP_MEMORY->kvm_mmu_map_tdp_page()
> would now only be used to create upper levels of SecureEPT, and the
> actual mapping/encrypting of the leaf page would be handled by a
> separate TDX-specific interface.

I think TDX can use it with a slight change: pass the vcpu instead of kvm, pin
the source page down, and take mmu_lock.  TDX requires the non-leaf Secure-EPT
page tables to be populated before adding a leaf.  Or maybe, with the
assumption that vcpus don't run at this point, the GFN->PFN relation is stable
so that mmu_lock isn't needed?  What about punch hole, though?

The flow would be something like the following (rough code sketch of the
callback after the list).

- lock slots_lock

- kvm_gmem_populate(vcpu)
  - pin down the source page instead of do_memcpy.
  - get pfn with __kvm_gmem_get_pfn()

  - read lock mmu_lock
  - in the post_populate callback
    - look up the TDP MMU page tables to check that they are populated:
      a lookup-only version of kvm_tdp_mmu_map(), which is why we need the
      vcpu instead of kvm.
    - TDH_MEM_PAGE_ADD
  - read unlock mmu_lock

- unlock slots_lock
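
In code, the post_populate callback for TDX could then look roughly like the
below.  (Sketch only, assuming kvm_gmem_populate() is changed to pass the vcpu
through to the callback; tdx_mem_page_add() and kvm_tdp_mmu_lookup_leaf() are
made-up names standing in for the TDH.MEM.PAGE.ADD wrapper and the lookup-only
flavour of kvm_tdp_mmu_map().)

  /*
   * Called by kvm_gmem_populate() with slots_lock held and, per the flow
   * above, mmu_lock held for read around the callback.
   */
  static int tdx_gmem_post_populate(struct kvm_vcpu *vcpu, gfn_t gfn,
                                    kvm_pfn_t pfn, void __user *src,
                                    int order, void *opaque)
  {
          int ret;

          /*
           * Non-leaf Secure-EPT pages must already be populated; walk the
           * TDP MMU in lookup-only mode, which is why the callback needs
           * the vcpu rather than kvm.
           */
          ret = kvm_tdp_mmu_lookup_leaf(vcpu, gfn);
          if (ret)
                  return ret;

          /* TDH.MEM.PAGE.ADD: encrypt @src into @pfn and map it. */
          return tdx_mem_page_add(vcpu->kvm, gfn, pfn, src);
  }
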

Thanks,

> With that model, the potential for using kvm_gmem_populate() seemed
> plausible, so I was trying to make it immediately usable for that
> purpose. But maybe the TDX folks can confirm whether this would be
> usable for them or not. (kvm_gmem_populate was introduced here[2] for
> reference/background)
> 
> -Mike
> 
> [1] https://lore.kernel.org/kvm/20240319155349.GE1645738@xxxxxxxxxxxxxxxxxxxxx/T/#m8580d8e39476be565534d6ff5f5afa295fe8d4f7
> [2] https://lore.kernel.org/kvm/20240329212444.395559-3-michael.roth@xxxxxxx/T/#m3aeba660fcc991602820d3703b1265722b871025
> 
> 
> > 
> > I'll do that when posting the next version of the patches in kvm-coco-queue.
> > 
> > Paolo
> > 
> 

-- 
Isaku Yamahata <isaku.yamahata@xxxxxxxxx>



