Re: [PATCH 0/3] arm64: dts: qcom: sa8775p: Add interconnect to SMMU

On 2023-06-09 15:56, Dmitry Baryshkov wrote:
On Fri, 9 Jun 2023 at 17:52, Konrad Dybcio <konrad.dybcio@xxxxxxxxxx> wrote:



On 9.06.2023 16:45, Robin Murphy wrote:
On 2023-06-09 13:56, Parikshit Pareek wrote:
On Fri, Jun 09, 2023 at 10:52:26AM +0200, Konrad Dybcio wrote:


On 9.06.2023 07:41, Parikshit Pareek wrote:
Some qcom SoCs have SMMUs which need the interconnect bandwidth to be set.
This series introduces support for the associated interconnect and sets
the required interconnect bandwidth. Setting the interconnect bandwidth
is needed to avoid issues like [1], caused by missing clock votes (which
indirectly depend upon the interconnect bandwidth).

[1] ???

My bad. Intended to mention following:
https://lore.kernel.org/linux-arm-msm/20230418165224.vmok75fwcjqdxspe@echanude/

This sounds super-dodgy - do you really have to rely on configuration of the interconnect path from the SMMU's pagetable walker to RAM to keep a completely different interconnect path clocked for the CPU to access SMMU registers? You can't just request the programming interface clock directly like on other SoCs?
On Qualcomm platforms, particularly so with the more recent ones, some
clocks are managed by various remote cores. Half of what the interconnect
infra does on these SoCs is telling one such core to change the internally
managed clock's rate based on the requested bw.
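For illustration, the bandwidth vote described above would surface in the
devicetree as an interconnects property on the SMMU node. The fragment
below is only a sketch of that shape, not the actual sa8775p patch from
this series; the provider phandles, endpoint IDs and path name are
placeholders assumed for the example:

```dts
/* Hypothetical sketch - node and endpoint names are placeholders,
 * not the real sa8775p bindings from this series. */
&apps_smmu {
	/* Vote on the path from the SMMU's TBU to DDR, so that the
	 * remote core scales the internally managed clocks on that
	 * path according to the requested bandwidth. */
	interconnects = <&aggre_noc MASTER_TBU 0 &mc_virt SLAVE_EBI1 0>;
	interconnect-names = "tbu-mem";
};
```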

But enabling PCIe interconnect to keep SMMU working sounds strange to
me too. Does the fault come from some outstanding PCIe transaction?

The "Injecting instruction/data abort to VM 3" message from the hypervisor implies that it is the access to SMMU_CR0 from arm_smmu_shutdown() that's blown up. I can even believe that the SMMU shares some clocks with the PCIe interconnect, given that its TBU must be *in* that path from PCIe to memory, at least. However I would instinctively expect the abstraction layers above to have some notion of distinct votes for "CPU wants to access SMMU" vs. "SMMU/PCIe wants to access RAM", given that the latter is liable to need to enable more than the former if the clock/power gating is as fine-grained as previous SoCs seem to have been. But maybe my hunch is wrong and this time everything's just in one big clock domain. I don't know. I'm just here to ask questions to establish whether this really is the most correct abstraction or just a lazy bodge to avoid doing the proper thing in some other driver.

Thanks,
Robin.



