Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

On Tue, Jun 11, 2024 at 02:29:48PM +0300, Georgi Djakov wrote:
> On 11.06.24 12:42, Varadarajan Narayanan wrote:
> > On Thu, Jun 06, 2024 at 04:06:01PM +0200, Konrad Dybcio wrote:
> > > On 8.05.2024 10:10 AM, Dmitry Baryshkov wrote:
> > > > On Wed, 8 May 2024 at 09:53, Varadarajan Narayanan
> > > > <quic_varada@xxxxxxxxxxx> wrote:
> > > > >
> > > > > On Fri, May 03, 2024 at 04:51:04PM +0300, Georgi Djakov wrote:
> > > > > > Hi Varada,
> > > > > >
> > > > > > Thank you for your work on this!
> > > > > >
> > > > > > On 2.05.24 12:30, Varadarajan Narayanan wrote:
> > > > > > > On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
> > > > > > > > On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
> > > > > > > > > On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On 4/18/24 11:23, Varadarajan Narayanan wrote:
> > > > > > > > > > > IPQ SoCs don't involve the RPM in managing NoC-related
> > > > > > > > > > > clocks, and there is no NoC scaling; Linux itself handles
> > > > > > > > > > > these clocks. However, they should not be exposed as plain
> > > > > > > > > > > clocks; instead, align with other Qualcomm SoCs that manage
> > > > > > > > > > > these clocks from an interconnect provider.
> > > > > > > > > > >
> > > > > > > > > > > Hence include icc provider capability to the gcc node so that
> > > > > > > > > > > peripherals can use the interconnect facility to enable these
> > > > > > > > > > > clocks.
> > > > > > > > > > >
> > > > > > > > > > > Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@xxxxxxxxxx>
> > > > > > > > > > > Signed-off-by: Varadarajan Narayanan <quic_varada@xxxxxxxxxxx>
> > > > > > > > > > > ---
> > > > > > > > > >
> > > > > > > > > > If this is all you do to enable the interconnect (which is
> > > > > > > > > > not the case; this patch only satisfies the bindings
> > > > > > > > > > checker, while the meaningful change happens in the previous
> > > > > > > > > > patch) and nothing explodes, that is an apparent sign of
> > > > > > > > > > your driver doing nothing.
> > > > > > > > >
> > > > > > > > > It appears to do nothing because we are only enabling the
> > > > > > > > > clock provider to also act as an interconnect provider. Only
> > > > > > > > > when consumers are updated to use the interconnect facility
> > > > > > > > > will paths be created and the relevant NoC clocks turned on.
> > > > > > > >
> > > > > > > > No, with sync_state it actually does "something" (sets the interconnect
> > > > > > > > path bandwidths to zero). And *this* patch does nothing functionally,
> > > > > > > > it only makes the dt checker happy.
> > > > > > >
> > > > > > > I understand.
> > > > > > >
> > > > > > > > > This interconnect will be used by the PCIe and NSS blocks.
> > > > > > > > > When those patches were posted earlier, they were put on hold
> > > > > > > > > until an interconnect driver became available.
> > > > > > > > >
> > > > > > > > > Once this patch gets in, PCIe for example will make use of icc.
> > > > > > > > > Please refer to https://lore.kernel.org/linux-arm-msm/20230519090219.15925-5-quic_devipriy@xxxxxxxxxxx/.
> > > > > > > > >
> > > > > > > > > The 'pcieX' nodes will include the following entries.
> > > > > > > > >
> > > > > > > > >          interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
> > > > > > > > >                          <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
> > > > > > > > >          interconnect-names = "pcie-mem", "cpu-pcie";
> > > > > > > >
> > > > > > > > Okay. What about USB that's already enabled? And BIMC/MEMNOC?
> > > > > > >
> > > > > > > For USB, the GCC_ANOC_USB_AXI_CLK is enabled as part of the iface
> > > > > > > clock. Hence, interconnect is not specified there.
> > > > > > >
> > > > > > > MEMNOC to System NOC interfaces seem to be enabled automatically.
> > > > > > > Software doesn't have to turn on or program specific clocks.
> > > > > > >
> > > > > > > > > > The expected reaction to "enabling interconnect" without defining the
> > > > > > > > > > required paths for your hardware would be a crash-on-sync_state, as all
> > > > > > > > > > unused (from Linux's POV) resources ought to be shut down.
> > > > > > > > > >
> > > > > > > > > > Because you lack sync_state, the interconnects silently retain the state
> > > > > > > > > > that they were left in (which is not deterministic), and that's precisely
> > > > > > > > > > what we want to avoid.
> > > > > > > > >
> > > > > > > > > I set the driver's 'sync_state' callback to icc_sync_state
> > > > > > > > > and didn't see any crash.
> > > > > > > >
> > > > > > > > Have you confirmed that the registers are actually written to, and with
> > > > > > > > correct values?
> > > > > > >
> > > > > > > I tried the following combinations:-
> > > > > > >
> > > > > > > 1. Top of tree linux-next + This patch set
> > > > > > >
> > > > > > >      * icc_sync_state called
> > > > > > >      * No crash or hang observed
> > > > > > >      * From /sys/kernel/debug/clk/clk_summary can see the
> > > > > > >        relevant clocks are set to the expected rates (compared
> > > > > > >        with downstream kernel)
> > > > > > >
> > > > > > > 2. Top of tree linux-next + This patch set + PCIe enablement
> > > > > > >
> > > > > > >      * icc_sync_state NOT called
> > > > > >
> > > > > > If sync_state() is not being called, that usually means that some
> > > > > > interconnect consumers (PCIe?) or their dependencies haven't
> > > > > > probed successfully. That can be checked in
> > > > > > /sys/class/devlink/.../status, but I am not sure how this works
> > > > > > for PCI devices.
> > > > > >
> > > > > > You can also manually force a call to sync_state by writing "1" to
> > > > > > the interconnect provider's /sys/devices/.../state_synced
> > > > > >
> > > > > > Anyway, the question is: do PCIe and NSS work without this driver?
> > > > >
> > > > > No.
> > > > >
> > > > > > If they work, is this because the clocks are turned on by default
> > > > > > or by the boot loader?
> > > > >
> > > > > Initially, the PCIe/NSS drivers enabled these clocks directly by
> > > > > listing them in their DT nodes. Based on community feedback this
> > > > > was removed, and after that PCIe/NSS did not work.
> > > > >
> > > > > > Then if an interconnect path (clock) gets disabled, either when we
> > > > > > reach sync_state (with no bandwidth requests) or when we explicitly
> > > > > > call icc_set_bw() with zero bandwidth values, I would expect that
> > > > > > these PCIe and NSS devices would not function anymore (it might
> > > > > > save some power etc.), and if this is unexpected we should see a
> > > > > > crash or hang...
> > > > > >
> > > > > > Can you confirm this?
> > > > >
> > > > > With ICC enabled, icc_set_bw (with non-zero values) is called by
> > > > > the PCIe and NSS drivers. I haven't checked icc_set_bw with zero
> > > > > values.
> > > > >
> > > > > PCIe:   qcom_pcie_probe -> qcom_pcie_icc_init -> icc_set_bw
> > > > > NSS:    ppe_icc_init -> icc_set_bw
> > > > >
> > > > > I believe sync_state is not getting called since there is a
> > > > > non-zero bandwidth request outstanding, which seems to be aligned
> > > > > with your explanation.
> > > >
> > > > This doesn't look correct. sync_state is called once all consumers
> > > > have probed. It doesn't matter whether those consumers have non-zero
> > > > bandwidth requests or not.
> > >
> > > /sys/kernel/debug/devices_deferred may have some useful info, too
> >
> > /sys/kernel/debug/devices_deferred seems to be empty
> >
> > 	# mount | grep -w debugfs
> > 	none on /sys/kernel/debug type debugfs (rw,relatime)
> >
> > 	# cat /sys/kernel/debug/devices_deferred  | wc -l
> > 	0
> >
> > Added the following print to icc_sync_state,
> >
> > 	@@ -1096,6 +1096,7 @@ void icc_sync_state(struct device *dev)
> > 		struct icc_node *n;
> > 		static int count;
> >
> > 	+	printk("--> %s: %d %d\n", __func__, providers_count, count);
> > 		count++;
> >
> > 		if (count < providers_count)
> > 			return;
> >
> > icc_sync_state seems to be called once,
> >
> > 	# dmesg | grep icc_sync_state
> > 	[   12.260544] --> icc_sync_state: 2 0
> >
> > Since 'providers_count' is greater than 'count', icc_sync_state
> > returns before doing anything.
>
> Is there also another interconnect provider on this platform, other
> than the gcc? Check for DT nodes that have the #interconnect-cells
> property.

Yes, there are two interconnect providers:

	# find /proc/device-tree/ -name '#interconnect-cells'
	/proc/device-tree/soc@0/clock-controller@1800000/#interconnect-cells
	/proc/device-tree/soc@0/clock-controller@39b00000/#interconnect-cells

	Note:	gcc => clock-controller@1800000
		nsscc => clock-controller@39b00000

> Are all providers probing successfully?

Yes. I printed the return value of their probe functions...

	# dmesg | grep probe:
	[    0.037815] --> gcc_ipq9574_probe: return 0
	[    2.078215] --> nss_cc_ipq9574_probe: return 0


> All providers must probe, as there might be paths that cross multiple
> providers and we can't get into sync-state with a topology that is
> only partially initialized.

It does look like both providers' probes have completed. And there
aren't any paths that cross providers:

	interconnects = <&gcc MASTER_ANOC_PCIE1 &gcc SLAVE_ANOC_PCIE1>,
			<&gcc MASTER_SNOC_PCIE1 &gcc SLAVE_SNOC_PCIE1>;

	interconnects = <&gcc MASTER_ANOC_PCIE3 &gcc SLAVE_ANOC_PCIE3>,
			<&gcc MASTER_SNOC_PCIE3 &gcc SLAVE_SNOC_PCIE3>;

	interconnects = <&gcc MASTER_ANOC_PCIE2 &gcc SLAVE_ANOC_PCIE2>,
			<&gcc MASTER_SNOC_PCIE2 &gcc SLAVE_SNOC_PCIE2>;

	interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
			<&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;

	interconnects = <&nsscc MASTER_NSSNOC_PPE &nsscc SLAVE_NSSNOC_PPE>,
			<&nsscc MASTER_NSSNOC_PPE_CFG &nsscc SLAVE_NSSNOC_PPE_CFG>,
			<&gcc MASTER_NSSNOC_QOSGEN_REF &gcc SLAVE_NSSNOC_QOSGEN_REF>,
			<&gcc MASTER_NSSNOC_TIMEOUT_REF &gcc SLAVE_NSSNOC_TIMEOUT_REF>,
			<&gcc MASTER_MEM_NOC_NSSNOC &gcc SLAVE_MEM_NOC_NSSNOC>,
			<&gcc MASTER_NSSNOC_MEMNOC &gcc SLAVE_NSSNOC_MEMNOC>,
			<&gcc MASTER_NSSNOC_MEM_NOC_1 &gcc SLAVE_NSSNOC_MEM_NOC_1>;

Thanks
Varada



