Re: [PATCH v7 3/3] iommu/tegra-smmu: Add PCI support

On 2020-10-17 02:56, Nicolin Chen wrote:
On Fri, Oct 16, 2020 at 03:10:26PM +0100, Robin Murphy wrote:
On 2020-10-16 04:53, Nicolin Chen wrote:
On Thu, Oct 15, 2020 at 10:55:52AM +0100, Robin Murphy wrote:
On 2020-10-15 05:13, Nicolin Chen wrote:
On Wed, Oct 14, 2020 at 06:42:36PM +0100, Robin Murphy wrote:
On 2020-10-09 17:19, Nicolin Chen wrote:
This patch simply adds support for PCI devices.

Reviewed-by: Dmitry Osipenko <digetx@xxxxxxxxx>
Tested-by: Dmitry Osipenko <digetx@xxxxxxxxx>
Signed-off-by: Nicolin Chen <nicoleotsuka@xxxxxxxxx>
---

Changelog
v6->v7
     * Renamed goto labels, suggested by Thierry.
v5->v6
     * Added Dmitry's Reviewed-by and Tested-by.
v4->v5
     * Added Dmitry's Reviewed-by
v3->v4
     * Dropped !iommu_present() check
     * Added CONFIG_PCI check in the exit path
v2->v3
     * Replaced ternary conditional operator with if-else in .device_group()
     * Dropped change in tegra_smmu_remove()
v1->v2
     * Added error-out labels in tegra_smmu_probe()
     * Dropped pci_request_acs() since IOMMU core would call it.

     drivers/iommu/tegra-smmu.c | 35 +++++++++++++++++++++++++----------
     1 file changed, 25 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index be29f5977145..2941d6459076 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -10,6 +10,7 @@
 #include <linux/kernel.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
+#include <linux/pci.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
@@ -865,7 +866,11 @@ static struct iommu_group *tegra_smmu_device_group(struct device *dev)
 	group->smmu = smmu;
 	group->soc = soc;
-	group->group = iommu_group_alloc();
+	if (dev_is_pci(dev))
+		group->group = pci_device_group(dev);

Just to check, is it OK to have two or more swgroups "owning" the same
iommu_group if an existing one gets returned here? It looks like that might
not play nice with the use of iommu_group_set_iommudata().

By "gets returned here", do you mean the IS_ERR() check below?

I mean that unlike iommu_group_alloc()/generic_device_group(),
pci_device_group() may give you back a group that already contains another
device and has already been set up from that device's perspective. This can
happen for topological reasons like requester ID aliasing through a PCI-PCIe
bridge or lack of isolation between functions.
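
(A heavily simplified sketch of that reuse logic, for anyone following
along: pci_group_sketch() and get_alias_group() are made-up names, and
the real pci_device_group() in drivers/iommu/iommu.c does considerably
more, such as checking ACS on multifunction devices and along the
upstream path. The point is just that an already-populated group can
come back:)

#include <linux/iommu.h>
#include <linux/pci.h>

/* Made-up callback: pick up the group of any DMA alias that has one. */
static int get_alias_group(struct pci_dev *pdev, u16 alias, void *data)
{
	struct iommu_group **group = data;

	if (!*group)
		*group = iommu_group_get(&pdev->dev);

	return 0;	/* keep walking the aliases */
}

/*
 * Sketch only: if any DMA alias of this device (e.g. everything behind
 * a PCIe-to-PCI bridge shares the bridge's requester ID) already
 * belongs to an iommu_group, reuse that group; only otherwise allocate
 * a fresh one.
 */
static struct iommu_group *pci_group_sketch(struct pci_dev *pdev)
{
	struct iommu_group *group = NULL;

	pci_for_each_dma_alias(pdev, get_alias_group, &group);

	return group ?: iommu_group_alloc();
}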

Okay... but we don't really have two swgroups owning the same group
in the case of PCI devices. For Tegra210, all PCI devices inherit the
same swgroup from the PCI controller, and I'd expect previous chips
to do the same. The only current use case of two or more swgroups
owning the same iommu_group is the display controller.

Or are you suggesting we need an additional check for pci_device_group()?

Ah, OK - I still don't have the best comprehension of what exactly swgroups

The "swgroup" stands for "software group", which associates with
an ASID (Address Space Identifier) for SMMU to determine whether
this client is going through SMMU translation or not.
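
(For concreteness, roughly what the driver does with that association,
condensed from memory of tegra_smmu_enable() in
drivers/iommu/tegra-smmu.c; the real function also sets the per-client
enable bits for every client in the swgroup, and smmu_bind_swgroup()
is a made-up name:)

/*
 * Sketch: point the swgroup's register at an ASID and enable
 * translation for it. Register and macro names as in tegra-smmu.c.
 */
static void smmu_bind_swgroup(struct tegra_smmu *smmu,
			      unsigned int swgroup, unsigned int asid)
{
	const struct tegra_smmu_swgroup *group;
	u32 value;

	group = tegra_smmu_find_swgroup(smmu, swgroup);
	if (!group)
		return;

	value = smmu_readl(smmu, group->reg);
	value &= ~SMMU_ASID_MASK;
	value |= SMMU_ASID_VALUE(asid);
	value |= SMMU_ASID_ENABLE;
	smmu_writel(smmu, value, group->reg);
}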

So in Arm SMMU analogy terms it's more like a context bank index than a stream ID - got it.

are, and the path through .of_xlate looked like you might be using the PCI
requester ID as the swgroup identifier, but I guess that ultimately depends
on what your "iommu-map" is supposed to look like. If pci_device_group()

This is copied from the pcie node in our downstream dtsi file:

		iommus = <&mc TEGRA_SWGROUP_AFI>;
		iommu-map = <0x0 &mc TEGRA_SWGROUP_AFI 0x1000>;
		iommu-map-mask = <0x0>;

Aha, indeed that iommu-map-mask is the trick :)
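
(Spelling the trick out: the generic iommu-map lookup works roughly
like the sketch below, cf. of_map_id() in drivers/of/base.c,
simplified here to a single map entry with map_rid() as a made-up
name. Because iommu-map-mask is 0x0, every requester ID masks down to
0 and matches the one entry, so every endpoint comes out with the same
SMMU specifier, TEGRA_SWGROUP_AFI:)

#include <linux/types.h>

/*
 * Match a requester ID against one <rid_base &iommu out_base length>
 * iommu-map entry, after applying iommu-map-mask.
 */
static bool map_rid(u32 rid, u32 mask, u32 rid_base, u32 out_base,
		    u32 length, u32 *out)
{
	u32 masked = rid & mask;	/* anything & 0x0 == 0x0 */

	if (masked < rid_base || masked >= rid_base + length)
		return false;

	*out = out_base + (masked - rid_base);	/* here: TEGRA_SWGROUP_AFI */
	return true;
}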

will effectively only ever get called once regardless of how many endpoints
exist, then indeed this won't be a concern (although if that's *guaranteed*
to be the case then you may as well just stick with calling
iommu_group_alloc() directly). Thanks for clarifying.

All PCI devices are supposed to get the swgroup of the pcie node
above, so the function will return the existing group of the PCIe
controller before it ever gets a chance to call iommu_group_alloc().

Yes, the "iommus" property will mean that the group always gets created first for the platform device owning the host bridge, and thus won't be visible to pci_device_group() anyway. Subsequent tegra_smmu_group_get() calls for the PCI devices (including the PCI side of the host bridge itself) are then going to match TEGRA_SWGROUP_AFI in the smmu->groups list lookup and return early, so the dev_is_pci() condition will never be true, and the call to pci_device_group() is in fact entirely dead code.

(I was assuming a case where you didn't have the "iommus" property, in which case you would reach that path exactly once, for the first PCI device probed, wherein pci_device_group() would still only fall through to calling iommu_group_alloc() anyway.)

Cheers,
Robin.


