> From: Andy Lutomirski <luto@xxxxxxxxxx>
> Sent: Wednesday, September 1, 2021 12:53 PM
>
> On Thu, Aug 26, 2021, at 7:31 PM, Yu Zhang wrote:
> > On Thu, Aug 26, 2021 at 12:15:48PM +0200, David Hildenbrand wrote:
> >
> > Thanks a lot for this summary. A question about the requirement: do we
> > or do we not have a plan to support assigned devices for the protected VM?
> >
> > If yes, the fd based solution may need to change the VFIO interface as
> > well (though the fake swap entry solution needs to mess with VFIO too).
> > Because:
> >
> > 1> KVM uses VFIO when assigning devices into a VM.
> >
> > 2> Not knowing which GPA ranges may be used by the VM as DMA buffers,
> > all guest pages will have to be mapped in the host IOMMU page table to
> > host pages, which are pinned during the whole life cycle of the VM.
> >
> > 3> IOMMU mapping is done at VM creation time by VFIO and the IOMMU
> > driver, in vfio_dma_do_map().
> >
> > 4> However, vfio_dma_do_map() needs the HVA to perform a GUP to get
> > the HPA and pin the page.
> >
> > But if we are using the fd based solution, not every GPA can have an
> > HVA, thus the current VFIO interface to map and pin the GPA (IOVA)
> > won't work. And I doubt if VFIO can be modified to support this easily.
> >
>
> Do you mean assigning a normal device to a protected VM or a hypothetical
> protected-MMIO device?
>
> If the former, it should work more or less like with a non-protected VM.
> mmap the VFIO device, set up a memslot, and use it. I'm not sure whether
> anyone will actually do this, but it should be possible, at least in
> principle. Maybe someone will want to assign a NIC to a TDX guest. An
> NVMe device with the understanding that the guest can't trust it wouldn't
> be entirely crazy either.
>
> If the latter, AFAIK there is no spec for how it would work even in
> principle. Presumably it wouldn't work quite like VFIO -- instead, the
> kernel could have a protection-virtual-io-fd mechanism, and that fd could
> be bound to a memslot in whatever way we settle on for binding secure
> memory to a memslot.

FYI the iommu logic in VFIO is being refactored out into a unified
/dev/iommu framework [1]. Currently it plans to support the same DMA
mapping semantics as what VFIO provides today (HVA-based). In the future
it could be extended to support another mapping protocol which accepts
fd+offset instead of an HVA, and then calls a helper function from
whatever backing store can translate fd+offset to HPA, instead of
using GUP.

Thanks
Kevin

[1] https://lore.kernel.org/kvm/BN9PR11MB5433B1E4AE5B0480369F97178C189@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
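
To make the fd+offset idea above a bit more concrete, here is a rough
sketch of what such a mapping request and backing-store hook could look
like. Everything below is hypothetical: the struct names, fields and
callbacks are invented for illustration and are not part of any existing
VFIO or /dev/iommu uapi.

/*
 * Hypothetical userspace-visible mapping request for a /dev/iommu-style
 * interface, carrying fd+offset instead of an HVA.  Purely illustrative,
 * not an existing uapi.
 */
#include <linux/types.h>

struct iommu_fd_dma_map {
	__u32	argsz;		/* size of this structure */
	__u32	flags;		/* e.g. read/write permission bits */
	__u64	iova;		/* IOVA to map (the GPA for a passthrough guest) */
	__u64	size;		/* length of the mapping in bytes */
	__s32	fd;		/* fd of the guest memory backing store */
	__u32	__reserved;
	__u64	fd_offset;	/* offset into the fd backing this IOVA range */
};

/*
 * Hypothetical in-kernel hook provided by the backing store (in-kernel
 * sketch, not compilable as-is).  Instead of doing GUP on an HVA, the
 * IOMMU layer would call pin_pages() to translate fd+offset into pinned
 * host PFNs and then program the IOMMU page table with the result.
 */
struct guest_mem_backing_ops {
	int (*pin_pages)(struct file *file, pgoff_t offset,
			 unsigned long npages, unsigned long *pfns);
	void (*unpin_pages)(struct file *file, pgoff_t offset,
			    unsigned long npages);
};

The point of carrying fd+offset is simply that the IOMMU side never needs
a process mapping of the guest memory: resolving and pinning pages becomes
the backing store's job, which is what allows the memory to stay unmapped
from host userspace.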