On Tue, Dec 20, 2022 at 09:32:36PM +0530, Sumit Gupta wrote:
> Add OPP table and interconnects property required to scale DDR
> frequency for better performance. The OPP table has CPU frequency
> to per MC channel bandwidth mapping in each operating point entry.
> One table is added for each cluster even though the table data is
> same because the bandwidth request is per cluster. OPP framework
> is creating a single icc path if the table is marked 'opp-shared'
> and shared among all clusters. For us the OPP table is same but
> the MC client ID argument to interconnects property is different
> for each cluster which makes different icc path for all.
>
> Signed-off-by: Sumit Gupta <sumitg@xxxxxxxxxx>
> ---
>  arch/arm64/boot/dts/nvidia/tegra234.dtsi | 276 +++++++++++++++++++++++
>  1 file changed, 276 insertions(+)
>
> diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
> index eaf05ee9acd1..ed7d0f7da431 100644
> --- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi
> +++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
> @@ -2840,6 +2840,9 @@
>
>  			enable-method = "psci";
>
> +			operating-points-v2 = <&cl0_opp_tbl>;
> +			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER0 &emc>;

I dislike how this muddies the water between hardware and software
description. We don't have a hardware client ID for the CPU clusters,
so there's no good way to describe this in a hardware-centric way. We
used to have MPCORE read and write clients for this, but as far as I
know they were for the entire CCPLEX rather than per-cluster.

It'd be interesting to know what the BPMP does underneath; perhaps
that could give some indication as to what would be a better hardware
value to use for this.

Failing that, I wonder if a combination of icc_node_create() and
icc_get() can be used for this type of "virtual node" special case.

Thierry
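For the "virtual node" idea, something along these lines might work. This
is an untested sketch, not a proposed implementation: the node ID, node
name, and helper functions below are hypothetical placeholders, though
icc_node_create(), icc_node_add(), icc_link_create() and icc_get() are
the existing interconnect core APIs. The provider would register one
virtual node per cluster and link it to the EMC node, and the consumer
would then look the path up by ID instead of via an "interconnects" DT
property:

```c
#include <linux/interconnect.h>
#include <linux/interconnect-provider.h>

/* Hypothetical software-only node ID, not a hardware MC client ID */
#define TEGRA_ICC_CPU_CLUSTER0	1000

/* Hypothetical helper: register a virtual node for CPU cluster 0 in
 * the MC's interconnect provider and link it to the EMC node.
 */
static int tegra_icc_register_cluster0(struct icc_provider *provider,
				       int emc_node_id)
{
	struct icc_node *node;
	int err;

	node = icc_node_create(TEGRA_ICC_CPU_CLUSTER0);
	if (IS_ERR(node))
		return PTR_ERR(node);

	node->name = "cpu-cluster0";
	icc_node_add(node, provider);

	err = icc_link_create(node, emc_node_id);
	if (err)
		icc_node_destroy(node->id);

	return err;
}

/* On the consumer side (e.g. the cpufreq driver), the path can then
 * be obtained by node IDs rather than from DT:
 */
static struct icc_path *tegra_cpufreq_get_path(struct device *dev,
					       int emc_node_id)
{
	return icc_get(dev, TEGRA_ICC_CPU_CLUSTER0, emc_node_id);
}
```

That would keep the DT free of the pseudo client IDs, at the cost of
hard-coding the cluster-to-node mapping in the driver.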