On Fri, May 02, 2014 at 05:19:44PM +0200, Arnd Bergmann wrote:
> On Friday 02 May 2014 15:23:29 Thierry Reding wrote:
> > On Fri, May 02, 2014 at 02:32:08PM +0200, Arnd Bergmann wrote:
> > > On Friday 02 May 2014 13:05:58 Thierry Reding wrote:
> > > >
> > > > Let me see if I understood the above proposal by trying to translate it
> > > > into a simple example for a specific use-case. On Tegra for example we
> > > > have various units that can either access system memory directly or use
> > > > the IOMMU to translate accesses for them. One such unit would be the
> > > > display controller that scans out a framebuffer from memory.
> > >
> > > Can you explain how the decision is made whether the IOMMU gets used
> > > or not? In all cases I've seen so far, I think we can hardwire this
> > > in DT, and only expose one or the other. Are both ways used
> > > concurrently?
> >
> > It should be possible to hardcode this in DT for Tegra. As I understand
> > it, both interfaces can't be used at the same time. Once translation has
> > been enabled for one client, all accesses generated by that client will
> > be translated.
>
> Ok.
>
> > > > 	dc@0,54200000 {
> > > > 		...
> > > >
> > > > 		slave {
> > > > 			/*
> > > > 			 * 2 is the memory controller client ID of the
> > > > 			 * display controller.
> > > > 			 */
> > > > 			iommu = <&iommu 2>;
> > > >
> > > > 			...
> > > > 		};
> > > > 	};
> > > >
> > > > Admittedly this is probably a lot more trivial than what you're looking
> > > > for. There's no need for virtualization here, the IOMMU is simply used
> > > > to isolate memory accesses by devices. Still it's a use-case that needs
> > > > to be supported and one that at least Tegra and Exynos have an immediate
> > > > need for.
> > > >
> > > > So the above isn't much different from the proposed bindings, except
> > > > that the iommu property is now nested within a slave node.
> > > > I guess this
> > > > gives us a lot more flexibility to extend the description of a slave as
> > > > needed to represent more complex scenarios.
> > >
> > > This looks rather complicated to parse automatically in the generic
> > > DT code when we try to decide which dma_map_ops to use. We'd have
> > > to look for 'slave' nodes in each device we instantiate and then see
> > > if they use an iommu or not.
> >
> > But we need to do that now anyway in order to find an iommu property,
> > don't we? Adding one extra level here shouldn't be all that bad if it
> > gives us more flexibility or uniformity with more complicated setups.
>
> The common code just needs to know whether an IOMMU is in use or
> not, and what the mask/offset are.
>
> > To some degree this also depends on how we want to handle IOMMUs. If
> > they should remain transparently handled via dma_map_ops, then it makes
> > sense to set this up at device instantiation time. But how can we handle
> > this in situations where one device needs to master on two IOMMUs at the
> > same time? Or if the device needs physically contiguous memory for
> > purposes other than device I/O? Using dma_map_ops we can't control which
> > allocations get mapped via the IOMMU and which don't.
>
> I still hope we can handle this in common code by selecting the right
> dma_map_ops when the devices are instantiated, at least for 99% of the
> cases. I'm not convinced we really need to handle the 'multiple IOMMUs
> on one device' case in a generic way. If there are no common use cases
> for that, we can probably get away with having multiple device nodes
> and an ugly driver for the exception, instead of making life complicated
> for everybody.

Multiple IOMMUs certainly seem an unusual case for now. Being able to
describe that in the DT doesn't necessarily mean the kernel has to
support it: just as the kernel doesn't need to support all the features
of a crazy hardware platform just because someone was crazy enough to
build it.
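For illustration only, such an exotic multiple-IOMMU device could in
principle be described with one slave node per translation path. This is
a hypothetical sketch, not a proposed binding; the node names, labels
and client IDs are invented:

```dts
/*
 * Hypothetical only: a device mastering two IOMMUs, described with
 * one slave node per path.  Nothing here implies the kernel would
 * handle this generically; a device-specific driver could still be
 * required to make sense of the two paths.
 */
video@0,54300000 {
	...

	slave@0 {
		/* e.g. the read path, translated by the first IOMMU */
		iommu = <&iommu_a 5>;
	};

	slave@1 {
		/* e.g. the write path, translated by a second IOMMU */
		iommu = <&iommu_b 6>;
	};
};
```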
My expectation was that we do some check when probing a device to figure
out the path from the device to main memory, thus figuring out the DMA
mask, the IOMMU (if any) and any relevant device ID. This is a bit more
complex than the existing situation, but I still think we could have
common code for the bulk of it.

If a device has different roles with completely different paths to
memory, one option could be for the driver to instantiate two devices in
the kernel. This puts the burden on the driver for the device, instead
of on the core framework.

> > > > Also, are slaves/slave-names and slave subnodes mutually exclusive? It
> > > > sounds like slaves/slave-names would be a specialization of the slave
> > > > subnode concept for the trivial cases. Would the following be an
> > > > equivalent description of the above example?
> > > >
> > > > 	dc@0,54200000 {
> > > > 		...
> > > >
> > > > 		slaves = <&iommu 2>;
> > > > 	};
> > > >
> > > > I don't see how it could be exactly equivalent since it misses context
> > > > regarding the type of slave that's being interacted with. Perhaps that
> > > > could be solved by making that knowledge driver-specific (i.e. the
> > > > driver for the Tegra display controller will know that it can only be
> > > > the master on an IOMMU and therefore derive the slave type). Or the
> > > > slave's type could be derived from the slave-names property.
> > >
> > > I'd rather have a device-specific property that tells the driver
> > > about things the iommu driver doesn't need to know but the master
> > > does. In most cases, we should be fine without a name attached to the
> > > slave.
> >
> > For the easy cases where we either have no IOMMU or a single IOMMU per
> > device, that should work fine. This only becomes problematic when there
> > is more than one, since you need to distinguish between possibly more
> > than one type.
> >
> > As I understand it, Dave's proposal is for generic bus masters, which
> > may be an IOMMU but could also be something completely different. So in
> > those cases we need extra meta information so that we can look up the
> > proper type of object.
>
> Doing something complicated for the IOMMUs themselves seems fine, also
> for other nonstandard devices that are just weird. I just want to
> handle the simple case automatically.

Agreed. Ideally, the simple cases should be handled completely by the
framework; the complex cases will have to fend for themselves, unless a
clear pattern emerges, in which case frameworks to handle them could be
created in the future.

Cheers
---Dave