On 25.09.2019 05:38, Georgi Djakov wrote:
> Hi Leonard,
>
> On 9/16/19 05:34, Leonard Crestez wrote:
>> On 23.08.2019 17:37, Leonard Crestez wrote:
>>> This series add imx support for interconnect via devfreq: the ICC
>>> framework is used to aggregate requests from devices and then those are
>>> converted to DEV_PM_QOS_MIN_FREQUENCY requests for devfreq.
>>>
>>> Since there is no single devicetree node that can represent the "interconnect"
>>> new API is added to allow individual devfreq nodes to act as parsing proxies
>>> all mapping to a single soc-level icc provider. This is still RFC
>>> because of this
>>
>> Any comments? I made a lot of changes relative to previous versions,
>> most of them solely to avoid adding a virtual node in DT bindings.
>>
>> The only current interconnect provider implementation is for qcom and it
>> uses a firmware node as the provider node (with #interconnect-cells).
>> However there is no obvious equivalent of that for imx and many other SOCs.
>
> Not sure if it will help, but have you seen the qcs404 interconnect driver?
> There is also mt8183 interconnect provider driver on LKML.

Yes, but only yesterday. The qcs404 driver involves multiple DT devices,
so it seems closer to imx.

As far as I understand from reading the qcs404 source:

* There is no struct device representing the entire graph.
* There are multiple NoCs and each registers itself as a separate
  interconnect provider.
* Each NoC registers multiple icc_nodes of various sorts:
    * device masters and slaves
    * some nodes representing NoC ports?
    * multiple internal nodes
* There is a single per-SOC master list of QNOCs in the qcs404 driver.
* The QNOCs can reference each other across multiple providers.
* Each NoC registers an icc_provider and a subset of the graph.
* The multiple NoCs inside a chip are distinguished by compat strings.
  This seems strange; aren't they really just different instantiations
  of the same IP with small config changes?

This design still seems quite odd to me. What would make more sense is
to register the "interconnect graph" once and then provide multiple
"interconnect scalers" which handle the aggregated requests for certain
specific nodes.

>> On imx there are multiple pieces of scalable fabric which can be defined
>> in DT as devfreq devices and it sort of makes sense to add
>> #interconnect-cells to those. However when it comes to describing the
>> SOC interconnect graph it's much more convenient to have a single
>> per-SOC platform driver.
>
> Is all the NoC configuration done only by ATF? Are there any NoC related memory
> mapped registers?

Registers are memory-mapped and visible to the A-cores but should only
be accessed through secure transactions. This means that configuration
needs to be done by ATF in EL3 (we don't support running Linux in the
secure world on imx8m). There is no "remote processor" managing this on
imx8m.

On older imx6/7 chips we actually have two out-of-tree implementations
of bus freq switching code: an older one in Linux (used when running in
the secure world) and a different one in OP-TEE for running Linux in
the non-secure world.

NoC registers can be used to control some "transaction priority" bits,
but I don't want to expose that part right now. What determines the
bandwidth-versus-power tradeoff is the NoC clock rate, and clocks are
managed by Linux directly.

DVFS on the RAM controller (DDRC) is also important. That component is
only a bus slave, and frequency switching requires a complex sequence
inside ATF.

--
Regards,
Leonard
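
P.S. To make the mechanism from the cover letter a bit more concrete,
the conversion from aggregated ICC bandwidth to a
DEV_PM_QOS_MIN_FREQUENCY request would look roughly like the sketch
below. This is only an illustration, not the actual patch: imx_icc_node,
qos_dev and bytes_per_cycle are made-up names and the kBps-to-kHz
scaling is a simplification.

#include <linux/interconnect-provider.h>
#include <linux/pm_qos.h>

/* Illustrative per-node state: an icc node plus a QoS request on the
 * devfreq device that scales the corresponding piece of fabric.
 */
struct imx_icc_node {
	struct icc_node *node;
	struct device *qos_dev;		/* devfreq device scaling this node */
	struct dev_pm_qos_request qos_req;
	unsigned int bytes_per_cycle;	/* bus width, assumed known per node */
};

static int imx_icc_node_init(struct imx_icc_node *in)
{
	/* Start with no constraint; updated on every bandwidth change. */
	return dev_pm_qos_add_request(in->qos_dev, &in->qos_req,
				      DEV_PM_QOS_MIN_FREQUENCY, 0);
}

/* Provider ->set() callback: forward the aggregated peak bandwidth on
 * the destination node as a minimum frequency constraint.
 */
static int imx_icc_set(struct icc_node *src, struct icc_node *dst)
{
	struct imx_icc_node *in = dst->data;
	s32 min_khz;
	int ret;

	if (!in->qos_dev)
		return 0;

	/* peak_bw is in kBps, so kBps / (bytes per cycle) gives kHz. */
	min_khz = dst->peak_bw / in->bytes_per_cycle;

	ret = dev_pm_qos_update_request(&in->qos_req, min_khz);

	return ret < 0 ? ret : 0;
}

The min-frequency request is added once per node and then updated from
the provider's ->set() callback every time consumers change their
bandwidth votes, so devfreq governors see it as just another QoS
constraint.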