Hi,
On 6/9/2023 9:09 PM, Robin Murphy wrote:
On 2023-06-09 15:56, Dmitry Baryshkov wrote:
On Fri, 9 Jun 2023 at 17:52, Konrad Dybcio <konrad.dybcio@xxxxxxxxxx>
wrote:
On 9.06.2023 16:45, Robin Murphy wrote:
On 2023-06-09 13:56, Parikshit Pareek wrote:
On Fri, Jun 09, 2023 at 10:52:26AM +0200, Konrad Dybcio wrote:
On 9.06.2023 07:41, Parikshit Pareek wrote:
Some qcom SoCs have SMMUs which need the interconnect bandwidth to be set.
This series introduces support for the associated interconnect and for
setting the required interconnect bandwidth. Setting the interconnect
bandwidth is needed to avoid issues like [1], caused by missing clock
votes (the clocks depend indirectly on the interconnect bandwidth).
[1] ???
My bad. I intended to mention the following:
https://lore.kernel.org/linux-arm-msm/20230418165224.vmok75fwcjqdxspe@echanude/
This sounds super-dodgy - do you really have to rely on
configuration of the interconnect path from the SMMU's pagetable
walker to RAM to keep a completely different interconnect path
clocked for the CPU to access SMMU registers? You can't just request
the programming interface clock directly like on other SoCs?
On Qualcomm platforms, particularly so with the more recent ones, some
clocks are managed by various remote cores. Half of what the
interconnect
infra does on these SoCs is telling one such core to change the
internally
managed clock's rate based on the requested bw.
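To illustrate, a consumer on these SoCs ends up asking for bandwidth rather
than a clock rate, roughly like the sketch below (the "cpu-cfg" path name and
the 100 MB/s figure are made up for illustration; the provider turns the
request into an RPMh/AOP vote which adjusts the remotely managed clock):

	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/interconnect.h>

	/*
	 * Sketch only: request bandwidth on a path described by an
	 * "interconnects" property; the qcom interconnect provider
	 * translates this into a vote to the remote core.
	 */
	static int example_vote_bw(struct device *dev)
	{
		struct icc_path *path;

		path = devm_of_icc_get(dev, "cpu-cfg");
		if (IS_ERR(path))
			return PTR_ERR(path);

		/* avg_bw = 0, peak_bw = 100 MB/s -- placeholder numbers */
		return icc_set_bw(path, 0, MBps_to_icc(100));
	}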
But enabling the PCIe interconnect just to keep the SMMU working sounds
strange to me too. Does the fault come from some outstanding PCIe transaction?
The "Injecting instruction/data abort to VM 3" message from the
hypervisor implies that it is the access to SMMU_CR0 from
arm_smmu_shutdown() that's blown up. I can even believe that the SMMU
shares some clocks with the PCIe interconnect, given that its TBU must
be *in* that path from PCIe to memory, at least. However I would
instinctively expect the abstraction layers above to have some notion of
distinct votes for "CPU wants to access SMMU" vs. "SMMU/PCIe wants to
access RAM", given that the latter is liable to need to enable more than
the former if the clock/power gating is as fine-grained as previous SoCs
seem to have been. But maybe my hunch is wrong and this time
everything's just in one big clock domain. I don't know. I'm just here
to ask questions to establish whether this really is the most correct
abstraction or just a lazy bodge to avoid doing the proper thing in some
other driver.
Thanks,
Robin.
On this platform, accessing SMMU_CR0 requires pcie_tcu_clk to be enabled,
and for that we need an interconnect vote on MASTER_PCIE_[0/1] ->
SLAVE_ANOC_PCIE_GEM_NOC so that AOP/RPMh can enable
aggre_noc_pcie_sf_clk_src, which in turn enables a bulk of clocks, of
which pcie_tcu_clk is one.
   -----
   |RAM|
   -----
 ----------       -------      -------------      ------------
 | GEMNOC |<------| TBU |------| PCIE ANOC |<-----| pcie_0/1 |
 ----------       -------      -------------      ------------
     ^               ^               ^
     |               |               |
     |               v               v
   -----           ---------------------
   |CPU|           | PCIE TCU (smmuv2) |
   -----           ---------------------
So I think this is the right driver in which to implement this, so that the
interconnect vote/unvote stays in sync with the SMMU register access in
arm_smmu_shutdown(), right?
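Something along these lines is what I have in mind, as a sketch only: the
icc_path member does not exist in struct arm_smmu_device today, the 100 MB/s
figure is a placeholder, and the handle would presumably be obtained at probe
time from an "interconnects" property via devm_of_icc_get():

	#include <linux/interconnect.h>

	/*
	 * Sketch: bracket the final register access with an interconnect
	 * vote so that pcie_tcu_clk stays enabled while SMMU_CR0 is written.
	 * smmu->icc_path and the bandwidth value are placeholders.
	 */
	static void arm_smmu_device_shutdown(struct platform_device *pdev)
	{
		struct arm_smmu_device *smmu = platform_get_drvdata(pdev);

		if (!smmu)
			return;

		/* Vote before touching the programming interface */
		icc_set_bw(smmu->icc_path, 0, MBps_to_icc(100));

		arm_smmu_rpm_get(smmu);
		/* Turn the thing off */
		arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sCR0, ARM_SMMU_sCR0_CLIENTPD);
		arm_smmu_rpm_put(smmu);

		/* Drop the vote once the register access is done */
		icc_set_bw(smmu->icc_path, 0, 0);
	}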
-Shazad