On Thu, Nov 21, 2019 at 08:14:35PM +0300, Dmitry Osipenko wrote:
> 19.11.2019 19:56, Dmitry Osipenko wrote:
> > 19.11.2019 09:25, Thierry Reding wrote:
> >> On Mon, Nov 18, 2019 at 11:02:26PM +0300, Dmitry Osipenko wrote:
> >>> Define interconnect IDs for the memory controller (MC), external
> >>> memory controller (EMC), external memory (EMEM) and the memory
> >>> clients of the display controllers (DC).
> >>>
> >>> Signed-off-by: Dmitry Osipenko <digetx@xxxxxxxxx>
> >>> ---
> >>>  include/dt-bindings/interconnect/tegra-icc.h | 11 +++++++++++
> >>>  1 file changed, 11 insertions(+)
> >>>  create mode 100644 include/dt-bindings/interconnect/tegra-icc.h
> >
> > Hello Thierry,
> >
> >> There was a bit of discussion regarding this for a recent patch that
> >> I was working on, see:
> >>
> >> http://patchwork.ozlabs.org/project/linux-tegra/list/?series=140318
> >
> > Thank you very much for the link.
> >
> >> I'd rather not use an additional set of definitions for this. The
> >> memory controller already has a set of native IDs for memory clients
> >> that I think we can reuse for this.
> >
> > I missed that it's fine to have multiple ICC connections defined per
> > path; at a quick glance it looks like it should indeed be fine to
> > re-use the MC IDs.
>
> Well, it is not quite correct to have multiple connections per path.
>
> Please take a look at the interconnect binding and core.c:
>
> 1. there should be one src->dst connection per path
> 2. each connection should comprise one source node and one destination
>    node
>
> >> I've only added these client IDs for Tegra194 because that's where
> >> we need it to actually describe a specific hardware quirk, but I can
> >> come up with the equivalent for older chips as well.
> >
> > Older Tegra SoCs have hardware units connected to the MC through the
> > AHB bus, like USB for example.
> > These units do not have MC client IDs, and there is no MC ID defined
> > for the AHB bus either, but it probably won't be a problem to define
> > IDs for them if that becomes necessary.
>
> Since the interconnect binding requires both the source and the
> destination node of a path to be defined, MC IDs alone are not enough
> to describe an interconnect path: these IDs represent only the source
> nodes. The destination node should be either EMC or EMEM.

This doesn't really map well to Tegra. The source of a path is always
the device and the destination is always the memory controller. We can
also have multiple paths between a device and the memory controller.
The typical case is to have at least a read and a write path, but there
are a number of devices that have multiple read and/or multiple write
paths to the memory controller.

Or perhaps I'm looking at this the wrong way, and what we really ought
to describe is the paths with the MC sitting in the middle. So it'd be
something like:

	MC ID --- source ---> MC --- destination ---> EMC

for write paths and:

	EMC --- source ---> MC --- destination ---> MC ID

for read paths.

I have no idea what would be a good connection ID for the EMC, since I
don't think the MC really differentiates at that level. Perhaps
#interconnect-cells = <0> for the EMC would be appropriate.

This would make the bindings look more like this, taking a random
sample from the above series:

	ethernet@2490000 {
		...
		interconnects = <&emc &mc TEGRA194_MEMORY_CLIENT_EQOSR>,
				<&mc TEGRA194_MEMORY_CLIENT_EQOSW &emc>;
		interconnect-names = "dma-mem", "dma-mem";
		...
	};

In words, the above means that for the ethernet device there is one
path (a read slave interface) where data flows from the EMC through the
MC to the device with memory client ID TEGRA194_MEMORY_CLIENT_EQOSR.
The second path (a write slave interface) describes data flowing from
the device (with memory client ID TEGRA194_MEMORY_CLIENT_EQOSW) through
the MC towards the EMC.
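As an aside, the per-direction paths above can be illustrated with a
toy model (plain standalone C, not kernel code; the client IDs and the
aggregation rule are made up for illustration and merely mimic how an
interconnect provider sums average bandwidth and takes the maximum peak
bandwidth per node):

	#include <stdio.h>

	/* Hypothetical per-direction bandwidth request: because read and
	 * write are distinct memory clients, each direction carries its
	 * own request and can be configured independently. */
	struct icc_request {
		unsigned int client_id;	/* a read or a write client ID */
		unsigned int avg_kbps;
		unsigned int peak_kbps;
	};

	/* Aggregate all requests that target one client ID: sum the
	 * average bandwidth, keep the highest peak bandwidth. */
	static void aggregate(const struct icc_request *reqs, int n,
			      unsigned int client_id,
			      unsigned int *avg, unsigned int *peak)
	{
		*avg = 0;
		*peak = 0;
		for (int i = 0; i < n; i++) {
			if (reqs[i].client_id != client_id)
				continue;
			*avg += reqs[i].avg_kbps;
			if (reqs[i].peak_kbps > *peak)
				*peak = reqs[i].peak_kbps;
		}
	}

	int main(void)
	{
		/* Hypothetical IDs standing in for a read and a write
		 * memory client of the same device. */
		enum { CLIENT_READ = 1, CLIENT_WRITE = 2 };
		struct icc_request reqs[] = {
			{ CLIENT_READ,  1000, 2000 },	/* read path */
			{ CLIENT_WRITE,  500,  800 },	/* write path */
		};
		unsigned int avg, peak;

		aggregate(reqs, 2, CLIENT_READ, &avg, &peak);
		printf("read:  avg=%u peak=%u\n", avg, peak);
		aggregate(reqs, 2, CLIENT_WRITE, &avg, &peak);
		printf("write: avg=%u peak=%u\n", avg, peak);
		return 0;
	}

Collapsing both directions into a single ID would merge the two
requests into one aggregate, losing the ability to program arbitration
differently per direction.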
Irrespective of the above, I think we definitely need to keep separate
IDs for the read and write paths, because each of them has separate
controls for arbitration and latency allowance, so each of those may
need to be separately configurable.

Does that make sense?

Thierry