On 9.06.2023 17:07, Robin Murphy wrote:
> On 2023-06-09 15:52, Konrad Dybcio wrote:
>>
>>
>> On 9.06.2023 16:45, Robin Murphy wrote:
>>> On 2023-06-09 13:56, Parikshit Pareek wrote:
>>>> On Fri, Jun 09, 2023 at 10:52:26AM +0200, Konrad Dybcio wrote:
>>>>>
>>>>>
>>>>> On 9.06.2023 07:41, Parikshit Pareek wrote:
>>>>>> Some qcom SoCs have SMMUs, which need the interconnect bandwidth to be
>>>>>> set. This series introduces the due support for the associated
>>>>>> interconnect, and setting of the due interconnect bandwidth. Setting
>>>>>> the due interconnect bandwidth is needed to avoid issues like [1],
>>>>>> caused by not having due clock votes (indirectly dependent upon
>>>>>> interconnect bandwidth).
>>>>>
>>>>> [1] ???
>>>>
>>>> My bad. Intended to mention the following:
>>>> https://lore.kernel.org/linux-arm-msm/20230418165224.vmok75fwcjqdxspe@echanude/
>>>
>>> This sounds super-dodgy - do you really have to rely on configuration
>>> of the interconnect path from the SMMU's pagetable walker to RAM to
>>> keep a completely different interconnect path clocked for the CPU to
>>> access SMMU registers? You can't just request the programming
>>> interface clock directly like on other SoCs?
>> On Qualcomm platforms, particularly so with the more recent ones, some
>> clocks are managed by various remote cores. Half of what the
>> interconnect infra does on these SoCs is telling one such core to
>> change the internally managed clock's rate based on the requested bw.
>
> That much I get, it just seems like an arse-backwards design decision
> if it's really necessary to pretend the SMMU needs to access memory in
> order for the CPU to be able to access the SMMU. The respective SMMU
> interfaces are functionally independent of each other - even if it is
> the case in the integration that both interfaces and/or the internal
> TCU clock do happen to be driven synchronously from the same parent
> clock - and in any sane interconnect the CPU->SMMU and SMMU->RAM routes
> would be completely different and not intersect at all.
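For context, the shape of binding being discussed would presumably look
something like the sketch below - node names, addresses, compatibles
and interconnect endpoint labels here are made up for illustration and
are not taken from the actual patches:

```dts
/* Hypothetical sketch only: labels and values are illustrative. */
apps_smmu: iommu@15000000 {
	compatible = "qcom,smmu-500", "arm,mmu-500";
	reg = <0x15000000 0x80000>;
	#iommu-cells = <2>;
	#global-interrupts = <1>;

	/* A bandwidth vote on the TCU->memory path, so that the
	 * remotely-managed clocks behind it are kept at a usable
	 * rate while the SMMU is in use. */
	interconnects = <&system_noc MASTER_TCU_0 &mc_virt SLAVE_EBI1>;
	interconnect-names = "tcu-mem";
};
```

On the driver side this would presumably translate into grabbing the
path (e.g. via of_icc_get()/devm_of_icc_get()) and voting bandwidth
with icc_set_bw() around probe/resume.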
Well, it's not the first time we've stumbled into a.. peculiar.. design
decision on these SoCs.. That said, we can't do much about it now..

On older SoCs, some interconnect paths were strongly associated with
specific TBUs, which were responsible for specific SID ranges. In this
specific case, it looks like SIDs 0x000-0x3ff should correspond to PCIE0
and 0x400-0x7ff to PCIE1. But the line isn't drawn very clearly this
time around, so maybe there's some internal spaghetti.

Konrad

> Thanks,
> Robin.