Sharing a physical PCI device at a finer granularity is becoming an
industry consensus, and IOMMU vendors are working to support such
sharing as well as possible. A common requirement among these efforts
is finer-granularity DMA isolation, for security reasons. With
finer-granularity DMA isolation, all DMA requests from or to a subset
of a physical PCI device can be protected by the IOMMU. As a result,
software needs a way to attach multiple domains to a single physical
PCI device. One example of such a usage model is Intel Scalable IOV
[1] [2]. The Intel VT-d 3.0 spec [3] introduces scalable mode, which
enables PASID-granularity DMA isolation.

This patch adds the APIs to support multiple domains per device. To
ease the discussion, we call a domain "in auxiliary mode", or simply
an "auxiliary domain", when multiple domains are attached to a
physical device. The APIs include (a usage sketch follows the list):

* iommu_get_dev_attr(dev, IOMMU_DEV_ATTR_AUXD_CAPABILITY)
  - Report whether the device referenced by @dev supports multiple
    domains.

* iommu_set_dev_attr(dev, IOMMU_DEV_ATTR_AUXD_ENABLE)
  - Enable the multiple-domains capability for the device referenced
    by @dev.

* iommu_set_dev_attr(dev, IOMMU_DEV_ATTR_AUXD_DISABLE)
  - Disable the multiple-domains capability for the device referenced
    by @dev.

* iommu_domain_get_attr(domain, DOMAIN_ATTR_AUXD_ID)
  - Return the ID used for finer-granularity DMA translation. For the
    Intel Scalable IOV usage model, this is a PASID. A device which
    supports Scalable IOV needs to write this ID to a device register
    so that DMA requests can be tagged with the right PASID prefix.
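A minimal sketch of how a device driver might consume these
interfaces follows. The mydev_enable_auxd() wrapper, the
mydev_program_pasid() helper, and the use of a bool capability flag
and an int PASID as the @data arguments are illustrative assumptions,
not part of this patch; the sketch also assumes the auxiliary domain
is attached through the regular iommu_attach_device() path.

  /*
   * Hypothetical usage sketch -- mydev_program_pasid() and the
   * @data types below are assumptions for illustration only.
   */
  static int mydev_enable_auxd(struct device *dev,
  			       struct iommu_domain *domain)
  {
  	bool capable = false;
  	int pasid, ret;

  	/* Bail out if DMA cannot be isolated at sub-device level. */
  	ret = iommu_get_dev_attr(dev, IOMMU_DEV_ATTR_AUXD_CAPABILITY,
  				 &capable);
  	if (ret || !capable)
  		return -ENODEV;

  	/* Switch the device to multiple-domains (auxiliary) mode. */
  	ret = iommu_set_dev_attr(dev, IOMMU_DEV_ATTR_AUXD_ENABLE, NULL);
  	if (ret)
  		return ret;

  	/* Attach the extra domain; assumed to use the normal path. */
  	ret = iommu_attach_device(domain, dev);
  	if (ret)
  		goto err_disable;

  	/* Retrieve the PASID backing this auxiliary domain ... */
  	ret = iommu_domain_get_attr(domain, DOMAIN_ATTR_AUXD_ID, &pasid);
  	if (ret)
  		goto err_detach;

  	/* ... and program it so DMA gets the right PASID prefix. */
  	mydev_program_pasid(dev, pasid);

  	return 0;

  err_detach:
  	iommu_detach_device(domain, dev);
  err_disable:
  	iommu_set_dev_attr(dev, IOMMU_DEV_ATTR_AUXD_DISABLE, NULL);
  	return ret;
  }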
Many people were involved in the discussions of this design:

Kevin Tian <kevin.tian@xxxxxxxxx>
Liu Yi L <yi.l.liu@xxxxxxxxx>
Ashok Raj <ashok.raj@xxxxxxxxx>
Sanjay Kumar <sanjay.k.kumar@xxxxxxxxx>
Jacob Pan <jacob.jun.pan@xxxxxxxxxxxxxxx>
Alex Williamson <alex.williamson@xxxxxxxxxx>
Jean-Philippe Brucker <jean-philippe.brucker@xxxxxxx>

and some of the discussions can be found here [4].

[1] https://software.intel.com/en-us/download/intel-scalable-io-virtualization-technical-specification
[2] https://schd.ws/hosted_files/lc32018/00/LC3-SIOV-final.pdf
[3] https://software.intel.com/en-us/download/intel-virtualization-technology-for-directed-io-architecture-specification
[4] https://lkml.org/lkml/2018/7/26/4

Cc: Ashok Raj <ashok.raj@xxxxxxxxx>
Cc: Jacob Pan <jacob.jun.pan@xxxxxxxxxxxxxxx>
Cc: Kevin Tian <kevin.tian@xxxxxxxxx>
Cc: Liu Yi L <yi.l.liu@xxxxxxxxx>
Suggested-by: Kevin Tian <kevin.tian@xxxxxxxxx>
Suggested-by: Jean-Philippe Brucker <jean-philippe.brucker@xxxxxxx>
Signed-off-by: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
---
 drivers/iommu/iommu.c | 25 +++++++++++++++++++++++++
 include/linux/iommu.h | 33 +++++++++++++++++++++++++++++++++
 2 files changed, 58 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 8c15c5980299..d06cfdcf38a7 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2014,3 +2014,28 @@ int iommu_fwspec_add_ids(struct device *dev, u32 *ids, int num_ids)
 	return 0;
 }
 EXPORT_SYMBOL_GPL(iommu_fwspec_add_ids);
+
+/*
+ * Generic interfaces to get or set per-device IOMMU attributes.
+ */
+int iommu_get_dev_attr(struct device *dev, enum iommu_dev_attr attr, void *data)
+{
+	const struct iommu_ops *ops = dev->bus->iommu_ops;
+
+	if (ops && ops->get_dev_attr)
+		return ops->get_dev_attr(dev, attr, data);
+
+	return -EINVAL;
+}
+EXPORT_SYMBOL_GPL(iommu_get_dev_attr);
+
+int iommu_set_dev_attr(struct device *dev, enum iommu_dev_attr attr, void *data)
+{
+	const struct iommu_ops *ops = dev->bus->iommu_ops;
+
+	if (ops && ops->set_dev_attr)
+		return ops->set_dev_attr(dev, attr, data);
+
+	return -EINVAL;
+}
+EXPORT_SYMBOL_GPL(iommu_set_dev_attr);
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 87994c265bf5..0230b64cc6e9 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -125,6 +125,7 @@ enum iommu_attr {
 	DOMAIN_ATTR_FSL_PAMUV1,
 	DOMAIN_ATTR_NESTING,	/* two stages of translation */
+	DOMAIN_ATTR_AUXD_ID,
 	DOMAIN_ATTR_MAX,
 };
 
 /* These are the possible reserved region types */
@@ -155,6 +156,13 @@ struct iommu_resv_region {
 	enum iommu_resv_type	type;
 };
 
+/* Per-device IOMMU attributes */
+enum iommu_dev_attr {
+	IOMMU_DEV_ATTR_AUXD_CAPABILITY,
+	IOMMU_DEV_ATTR_AUXD_ENABLE,
+	IOMMU_DEV_ATTR_AUXD_DISABLE,
+};
+
 #ifdef CONFIG_IOMMU_API
 
 /**
@@ -184,6 +192,8 @@ struct iommu_resv_region {
  * @domain_set_windows: Set the number of windows for a domain
  * @domain_get_windows: Return the number of windows for a domain
  * @of_xlate: add OF master IDs to iommu grouping
+ * @get_dev_attr: get per-device IOMMU attributes
+ * @set_dev_attr: set per-device IOMMU attributes
  * @pgsize_bitmap: bitmap of all possible supported page sizes
  */
 struct iommu_ops {
@@ -231,6 +241,12 @@ struct iommu_ops {
 	int (*of_xlate)(struct device *dev, struct of_phandle_args *args);
 	bool (*is_attach_deferred)(struct iommu_domain *domain, struct device *dev);
 
+	/* Get/set per-device IOMMU attributes */
+	int (*get_dev_attr)(struct device *dev,
+			    enum iommu_dev_attr attr, void *data);
+	int (*set_dev_attr)(struct device *dev,
+			    enum iommu_dev_attr attr, void *data);
+
 	unsigned long pgsize_bitmap;
 };
 
@@ -400,6 +416,11 @@ void iommu_fwspec_free(struct device *dev);
 int iommu_fwspec_add_ids(struct device *dev, u32 *ids, int num_ids);
 const struct iommu_ops *iommu_ops_from_fwnode(struct fwnode_handle *fwnode);
 
+int iommu_get_dev_attr(struct device *dev,
+		       enum iommu_dev_attr attr, void *data);
+int iommu_set_dev_attr(struct device *dev,
+		       enum iommu_dev_attr attr, void *data);
+
 #else /* CONFIG_IOMMU_API */
 
 struct iommu_ops {};
@@ -684,6 +705,18 @@ const struct iommu_ops *iommu_ops_from_fwnode(struct fwnode_handle *fwnode)
 	return NULL;
 }
 
+static inline int
+iommu_get_dev_attr(struct device *dev, enum iommu_dev_attr attr, void *data)
+{
+	return -EINVAL;
+}
+
+static inline int
+iommu_set_dev_attr(struct device *dev, enum iommu_dev_attr attr, void *data)
+{
+	return -EINVAL;
+}
+
 #endif /* CONFIG_IOMMU_API */
 
 #ifdef CONFIG_IOMMU_DEBUGFS
-- 
2.17.1