+Robin

On Tue, Feb 28, 2023 at 2:20 AM Manivannan Sadhasivam
<manivannan.sadhasivam@xxxxxxxxxx> wrote:
>
> On Mon, Feb 27, 2023 at 01:55:35PM -0600, Rob Herring wrote:
> > On Fri, Feb 24, 2023 at 04:28:55PM +0530, Manivannan Sadhasivam wrote:
> > > Most of the PCIe controllers require iommu support to function properly.
> > > So let's add them to the binding.
> > >
> > > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@xxxxxxxxxx>
> > > ---
> > >  Documentation/devicetree/bindings/pci/qcom,pcie.yaml | 5 +++++
> > >  1 file changed, 5 insertions(+)
> > >
> > > diff --git a/Documentation/devicetree/bindings/pci/qcom,pcie.yaml b/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
> > > index a3639920fcbb..f48d0792aa57 100644
> > > --- a/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
> > > +++ b/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
> > > @@ -64,6 +64,11 @@ properties:
> > >
> > >    dma-coherent: true
> > >
> > > +  iommus:
> > > +    maxItems: 1
> > > +
> > > +  iommu-map: true
> > > +
> >
> > I think both properties together doesn't make sense unless the PCI host
> > itself does DMA in addition to PCI bus devices doing DMA.
> >
>
> How? With "iommus", we specify the SMR mask along with the starting SID
> and with iommu-map, the individual SID<->BDF mapping is specified. This
> has nothing to do with host DMA capabilities.

I spoke with Robin offline and he agrees that having both is broken at
least in RC mode. He pointed out the issue is similar to this one on
Tegra[1].

Rob

[1] https://lore.kernel.org/all/AS8P193MB2095640357779A7F9B6026F8D2A19@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
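
For readers following the thread, here is a minimal sketch of what the two
properties being discussed encode in a PCIe controller node. The node name,
compatible, SMMU phandle, SID values, and mask below are purely illustrative
assumptions, not taken from any real Qcom device tree:

    /* Hypothetical example: SID/mask values are illustrative only. */
    pcie@1c00000 {
            compatible = "qcom,pcie-sm8250";

            /* "iommus": a single entry giving the starting stream ID
             * plus SMR mask used for the controller's traffic. */
            iommus = <&apps_smmu 0x1c00 0x7f>;

            /* "iommu-map": maps PCI requester IDs (BDF) of bus devices
             * to stream IDs: <rid-base iommu sid-base length>. */
            iommu-map = <0x0   &apps_smmu 0x1c00 0x1>,
                        <0x100 &apps_smmu 0x1c01 0x1>;
    };

Per Robin's point above, specifying both together is considered broken at
least in RC mode; the snippet only illustrates what each property maps.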