On 2022/3/19 01:27, Jason Gunthorpe wrote:
iommufd is the user API to control the IOMMU subsystem as it relates to
managing IO page tables that point at user space memory. It takes over
from drivers/vfio/vfio_iommu_type1.c (aka the VFIO container), which is
the VFIO-specific interface for a similar idea.

We see a broad need for extended features, some being highly IOMMU
device specific:
 - Binding iommu_domain's to PASID/SSID
 - Userspace page tables, for ARM, x86 and S390
 - Kernel bypass'd invalidation of user page tables
 - Re-use of the KVM page table in the IOMMU
 - Dirty page tracking in the IOMMU
 - Runtime increase/decrease of IOPTE size
 - PRI support with faults resolved in userspace

There is also a need to access these features beyond just VFIO, VDPA for
instance, and other classes of accelerator HW are touching on these
areas now too.

The v1 series proposed re-using the VFIO type 1 data structure, however
it was suggested that if we are doing this big update then we should
also come with a data structure that solves the limitations that VFIO
type1 has. Notably this addresses:

 - Multiple IOAS/'containers' and multiple domains inside a single FD

 - Single-pin operation no matter how many domains and containers use
   a page

 - A fine grained locking scheme supporting user managed concurrency
   for multi-threaded map/unmap

 - A pre-registration mechanism to optimize vIOMMU use cases by
   pre-pinning pages

 - Extended ioctl API that can manage these new objects and exposes
   domains directly to user space

 - Domains are sharable between subsystems, eg VFIO and VDPA

The bulk of this code is a new data structure design to track how the
IOVAs are mapped to PFNs.

iommufd intends to be general and consumable by any driver that wants
to DMA to userspace. From a driver perspective it can largely be
dropped in in place of iommu_attach_device() and provides a uniform
full feature set to all consumers.

As this is a larger project, this series is the first step. It provides
the iommufd "generic interface", which is designed to be suitable for
applications like DPDK and VMM flows that are not optimized to specific
HW scenarios. It is close to being a drop-in replacement for the
existing VFIO type 1.

This is part two of three for an initial sequence:
 - Move IOMMU Group security into the iommu layer
   https://lore.kernel.org/linux-iommu/20220218005521.172832-1-baolu.lu@xxxxxxxxxxxxxxx/
 * Generic IOMMUFD implementation
 - VFIO ability to consume IOMMUFD

An early exploration of this is available here:
 https://github.com/luxis1999/iommufd/commits/iommufd-v5.17-rc6
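For readers new to the proposal, a minimal userspace sketch of the
generic map flow described above (open /dev/iommu, allocate an IOAS,
pin and map a buffer at a fixed IOVA) could look like the following.
The ioctl names, structure layouts and flags are taken as assumptions
from the proposed <linux/iommufd.h> uAPI and may differ from what is
finally merged; the chosen IOVA and buffer size are illustrative only.

/* Sketch of the iommufd generic-interface map flow; uAPI names are
 * assumptions from the proposed series and may change. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/iommufd.h>

int main(void)
{
	int fd = open("/dev/iommu", O_RDWR);
	if (fd < 0) {
		perror("open /dev/iommu");
		return 1;
	}

	/* Allocate an IO address space (IOAS) object */
	struct iommu_ioas_alloc alloc = { .size = sizeof(alloc) };
	if (ioctl(fd, IOMMU_IOAS_ALLOC, &alloc)) {
		perror("IOMMU_IOAS_ALLOC");
		return 1;
	}

	/* A page of anonymous memory to expose for device DMA */
	size_t len = 4096;
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Pin the buffer and map it at IOVA 0x10000 in the IOAS */
	struct iommu_ioas_map map = {
		.size = sizeof(map),
		.flags = IOMMU_IOAS_MAP_FIXED_IOVA |
			 IOMMU_IOAS_MAP_READABLE | IOMMU_IOAS_MAP_WRITEABLE,
		.ioas_id = alloc.out_ioas_id,
		.user_va = (uintptr_t)buf,
		.length = len,
		.iova = 0x10000,
	};
	if (ioctl(fd, IOMMU_IOAS_MAP, &map)) {
		perror("IOMMU_IOAS_MAP");
		return 1;
	}

	/* A device would then be attached to this IOAS (e.g. through the
	 * VFIO consumer of iommufd) before issuing DMA to IOVA 0x10000. */
	close(fd);
	return 0;
}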
Eric Auger and I have posted a QEMU RFC based on this branch:
https://lore.kernel.org/kvm/20220414104710.28534-1-yi.l.liu@xxxxxxxxx/

--
Regards,
Yi Liu