NVIDIA's Tegra241 (Grace) SoC has a CMDQ-Virtualization (CMDQV) hardware
unit that extends the standard ARM SMMUv3 to support multiple command
queues with virtualization capabilities. Though this is similar to the
ECMDQ in SMMUv3.3, CMDQV provides additional Virtual Interfaces (VINTFs),
allowing VMs to have their own VINTFs and Virtual Command Queues (VCMDQs).
Compared to the standard SMMUv3 CMDQ, the VCMDQs can only execute a
limited set of commands, mainly invalidation commands, when they are
exclusively used by VMs.

Thus, the support is split into two patch series: the basic in-kernel
support as part 1, and the user-space support as part 2.

The in-kernel support detects/configures the CMDQV hardware and then
allocates a VINTF with some VCMDQs for the kernel/hypervisor to use.
Like ECMDQ, CMDQV also allows the kernel to use multiple VCMDQs, giving
a limited performance improvement: up to a 20% reduction of TLB
invalidation time was measured by a multi-threaded DMA unmap benchmark,
compared to using a single queue.

The user-space support provides uAPIs (via IOMMUFD) for hypervisors in
user space to pass VCMDQs through to VMs, allowing these VMs to access
the VCMDQs directly without trapping, i.e. no VM Exits. This gives a
huge performance improvement: 70% to 90% reductions of TLB invalidation
time were measured by various DMA unmap tests running in a guest OS,
compared to a nested SMMU CMDQ (with trapping).

This is the part-1 series:
 - Preparatory changes to share the existing SMMU functions
 - A new CMDQV driver, and extensions to the SMMUv3 driver to interact
   with the new driver
 - Limiting the commands available to a guest kernel (see the sketch
   below)

It's available on Github:
https://github.com/nicolinc/iommufd/commits/vcmdq_in_kernel-v5

And the part-2 RFC series is also prepared and will be sent soon:
https://github.com/nicolinc/iommufd/commits/vcmdq_user_space-rfc-v1/

Note that this in-kernel support isn't confined to host kernels running
on Grace-powered servers; it is also used by guest kernels running in
VMs virtualized on those servers. So those VMs should carry this driver,
ideally before part 2 is merged, so that later those servers only need
to upgrade their host kernels without bothering the VMs.

Thank you!
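As a rough illustration of the guest command limitation mentioned above
(a minimal sketch only, not the actual driver code: the helper name
vcmdq_guest_supports_cmd and the exact set of allowed opcodes are
assumptions here), a guest-owned VINTF, i.e. one without HYP_OWN set,
would accept invalidation and sync commands on its VCMDQs and reject
everything else, reusing the existing CMDQ_0_OP/CMDQ_OP_* definitions
from arm-smmu-v3.h:

#include <linux/bitfield.h>	/* FIELD_GET() */
#include "arm-smmu-v3.h"	/* CMDQ_0_OP, CMDQ_OP_* */

/*
 * Hypothetical helper: return true if a guest-owned VINTF may issue
 * this command on one of its VCMDQs. Anything outside the
 * invalidation/sync set would have to go through the standard SMMU
 * CMDQ instead.
 */
static bool vcmdq_guest_supports_cmd(u64 *cmd)
{
	switch (FIELD_GET(CMDQ_0_OP, cmd[0])) {
	case CMDQ_OP_TLBI_NH_ASID:
	case CMDQ_OP_TLBI_NH_VA:
	case CMDQ_OP_ATC_INV:
	case CMDQ_OP_CMD_SYNC:
		return true;
	default:
		return false;	/* e.g. CFGI_* or other privileged commands */
	}
}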
Changelog
v5:
 * Improved print/mmio helpers
 * Added proper register reset routines
 * Reorganized init/deinit functions to share with VIOMMU callbacks in
   the upcoming part-2 user-space series (RFC)
v4: https://lore.kernel.org/all/cover.1711690673.git.nicolinc@xxxxxxxxxx/
 * Rebased on v6.9-rc1
 * Renamed to "tegra241-cmdqv", following other Grace kernel patches
 * Added a set of print and MMIO helpers
 * Reworked the guest limitation patch
v3: https://lore.kernel.org/all/20211119071959.16706-1-nicolinc@xxxxxxxxxx/
 * Dropped VMID and mdev patches to redesign later based on IOMMUFD
 * Separated HYP_OWN part for guest support into a new patch
 * Added new preparatory changes
v2: https://lore.kernel.org/all/20210831025923.15812-1-nicolinc@xxxxxxxxxx/
 * Added mdev interface support for hypervisor and VMs
 * Added preparatory changes for mdev interface implementation
 * PATCH-12 Changed ->issue_cmdlist() to ->get_cmdq() for a better
   integration with recently merged ECMDQ-related changes
v1: https://lore.kernel.org/all/20210723193140.9690-1-nicolinc@xxxxxxxxxx/

Nate Watterson (1):
  iommu/arm-smmu-v3: Add in-kernel support for NVIDIA Tegra241 (Grace)
    CMDQV

Nicolin Chen (5):
  iommu/arm-smmu-v3: Add CS_NONE quirk
  iommu/arm-smmu-v3: Make arm_smmu_cmdq_init reusable
  iommu/arm-smmu-v3: Make __arm_smmu_cmdq_skip_err reusable
  iommu/arm-smmu-v3: Pass in cmdq pointer to arm_smmu_cmdq_issue_cmdlist()
  iommu/tegra241-cmdqv: Limit CMDs for guest owned VINTF

 MAINTAINERS                                  |   1 +
 drivers/iommu/Kconfig                        |  12 +
 drivers/iommu/arm/arm-smmu-v3/Makefile       |   1 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c  |  74 +-
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h  |  46 +
 .../iommu/arm/arm-smmu-v3/tegra241-cmdqv.c   | 845 ++++++++++++++++++
 6 files changed, 952 insertions(+), 27 deletions(-)
 create mode 100644 drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c

-- 
2.43.0