Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

Hi Varada,

Thank you for your work on this!

On 2.05.24 12:30, Varadarajan Narayanan wrote:
On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:


On 4/18/24 11:23, Varadarajan Narayanan wrote:
IPQ SoCs don't involve the RPM in managing NoC-related clocks, and
there is no NoC scaling; Linux itself handles these clocks. However,
they should not be exposed as plain clocks, but should instead be
aligned with other Qualcomm SoCs that manage these clocks through an
interconnect provider.

Hence, add icc provider capability to the gcc node so that
peripherals can use the interconnect facility to enable these
clocks.

Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@xxxxxxxxxx>
Signed-off-by: Varadarajan Narayanan <quic_varada@xxxxxxxxxxx>
---
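
(For reference, the change under discussion boils down to advertising the
provider capability on the existing gcc node, roughly along these lines;
an abridged sketch, not the complete node:)

	gcc: clock-controller@1800000 {
		compatible = "qcom,ipq9574-gcc";
		reg = <0x01800000 0x80000>;
		#clock-cells = <1>;
		#reset-cells = <1>;
		/* New: lets consumers reference <&gcc MASTER_x>, <&gcc SLAVE_x> paths */
		#interconnect-cells = <1>;
	};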

If this is all you do to enable interconnect (which is not the case,
as this patch only satisfies the bindings checker; the meaningful
change happens in the previous patch) and nothing explodes, that is
an apparent sign of your driver doing nothing.

It appears to do nothing because we are just enabling the clock
provider to also act as an interconnect provider. Only when the
consumers are enabled with interconnect usage will this create
paths and turn on the relevant NoC clocks.

No, with sync_state it actually does "something" (sets the
interconnect path bandwidths to zero). And *this* patch does nothing
functionally; it only makes the DT checker happy.

I understand.

This interconnect will be used by the PCIe and NSS blocks. When
those patches were posted earlier, they were put on hold until an
interconnect driver became available.

Once this patch gets in, PCIe, for example, will make use of icc.
Please refer to https://lore.kernel.org/linux-arm-msm/20230519090219.15925-5-quic_devipriy@xxxxxxxxxxx/.

The 'pcieX' nodes will include the following entries.

	interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
			<&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
	interconnect-names = "pcie-mem", "cpu-pcie";

Okay. What about USB that's already enabled? And BIMC/MEMNOC?

For USB, the GCC_ANOC_USB_AXI_CLK clock is enabled as part of the
iface clock. Hence, an interconnect path is not specified there.

MEMNOC to System NOC interfaces seem to be enabled automatically.
Software doesn't have to turn on or program specific clocks.

The expected reaction to "enabling interconnect" without defining the
required paths for your hardware would be a crash-on-sync_state, as all
unused (from Linux's POV) resources ought to be shut down.

Because you lack sync_state, the interconnects silently retain the state
that they were left in (which is not deterministic), and that's precisely
what we want to avoid.
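
(For context, wiring this up on the provider side is typically just pointing
the driver's sync_state hook at the generic icc helper; a minimal sketch,
with the struct and symbol names assumed:)

	#include <linux/interconnect-provider.h>
	#include <linux/platform_device.h>

	static struct platform_driver gcc_ipq9574_driver = {
		.probe = gcc_ipq9574_probe,
		.driver = {
			.name = "gcc-ipq9574",
			.of_match_table = gcc_ipq9574_match_table,
			/*
			 * Called once all consumers have probed; zeroes
			 * any bandwidth that no consumer has claimed.
			 */
			.sync_state = icc_sync_state,
		},
	};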

I set the driver's 'sync_state' callback to icc_sync_state so that
it would be invoked, and didn't see any crash.

Have you confirmed that the registers are actually written to, and with
correct values?

I tried the following combinations:

1. Top of tree linux-next + This patch set

	* icc_sync_state called
	* No crash or hang observed
	* From /sys/kernel/debug/clk/clk_summary can see the
	  relevant clocks are set to the expected rates (compared
	  with downstream kernel)

2. Top of tree linux-next + This patch set + PCIe enablement

	* icc_sync_state NOT called

If sync_state() is not being called, that usually means that there
are interconnect consumers (PCIe?), or dependencies of theirs, that
haven't probed successfully. That can be checked in
/sys/class/devlink/.../status, but I am not sure how this works for
PCI devices.

You can also manually force a call to sync_state by writing "1" to
the interconnect provider's /sys/devices/.../state_synced

Anyway, the question is whether PCIe and NSS work without this
driver. If they do, is that because the clocks are turned on by
default or by the boot loader?

Then, if an interconnect path (clock) gets disabled, either when we
reach sync_state (with no bandwidth requests) or when we explicitly
call icc_set_bw() with zero bandwidth values, I would expect that
these PCIe and NSS devices would no longer function (it might save
some power, etc.), and if this is unexpected we should see a crash
or hang...
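
(Illustratively, that experiment is just a zero vote on an already-acquired
path handle, as in the consumer sketch above:)

	/*
	 * With both votes at zero and no other consumers on the path,
	 * the provider is free to gate the corresponding NoC clock.
	 */
	ret = icc_set_bw(mem_path, 0, 0);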

Can you confirm this?

Thanks,
Georgi

	* No crash or hang observed
	* From /sys/kernel/debug/clk/clk_summary can see the
	  relevant clocks are set to the expected rates (compared
	  with downstream kernel)

Does this answer your question? Please let me know if you were
looking for some other information.

Thanks
Varada
