Re: [PATCH 15/18] ARM: dts: qcom: apq8064: provide voltage scaling tables

On Mon, Jun 12, 2023 at 04:33:09PM +0300, Dmitry Baryshkov wrote:
> On 12/06/2023 12:01, Stephan Gerhold wrote:
> > On Mon, Jun 12, 2023 at 08:39:19AM +0300, Dmitry Baryshkov wrote:
> > > APQ8064 has 4 speed bins, each of which has from 4 to 6 PVS categorization
> > > levels. Provide the tables necessary to handle voltage scaling on this SoC.
> > > 
> > > Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@xxxxxxxxxx>
> > > ---
> > >   arch/arm/boot/dts/qcom-apq8064.dtsi | 1017 +++++++++++++++++++++++++++
> > >   1 file changed, 1017 insertions(+)
> > > 
> > > diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi
> > > index 4ef13f3d702b..f35853b59544 100644
> > > --- a/arch/arm/boot/dts/qcom-apq8064.dtsi
> > > +++ b/arch/arm/boot/dts/qcom-apq8064.dtsi
> > > @@ -49,6 +49,9 @@ CPU0: cpu@0 {
> > >   			clocks = <&kraitcc KRAIT_CPU_0>;
> > >   			clock-names = "cpu";
> > >   			clock-latency = <100000>;
> > > +			vdd-mem-supply = <&pm8921_l24>;
> > > +			vdd-dig-supply = <&pm8921_s3>;
> > > +			vdd-core-supply = <&saw0_vreg>;
> > >   			interconnects = <&kraitcc MASTER_KRAIT_L2 &kraitcc SLAVE_KRAIT_L2>;
> > >   			operating-points-v2 = <&cpu_opp_table>;
> > >   			#cooling-cells = <2>;
> > > @@ -66,6 +69,9 @@ CPU1: cpu@1 {
> > >   			clocks = <&kraitcc KRAIT_CPU_1>;
> > >   			clock-names = "cpu";
> > >   			clock-latency = <100000>;
> > > +			vdd-mem-supply = <&pm8921_l24>;
> > > +			vdd-dig-supply = <&pm8921_s3>;
> > > +			vdd-core-supply = <&saw1_vreg>;
> > >   			interconnects = <&kraitcc MASTER_KRAIT_L2 &kraitcc SLAVE_KRAIT_L2>;
> > >   			operating-points-v2 = <&cpu_opp_table>;
> > >   			#cooling-cells = <2>;
> > > @@ -83,6 +89,9 @@ CPU2: cpu@2 {
> > >   			clocks = <&kraitcc KRAIT_CPU_2>;
> > >   			clock-names = "cpu";
> > >   			clock-latency = <100000>;
> > > +			vdd-mem-supply = <&pm8921_l24>;
> > > +			vdd-dig-supply = <&pm8921_s3>;
> > > +			vdd-core-supply = <&saw2_vreg>;
> > >   			interconnects = <&kraitcc MASTER_KRAIT_L2 &kraitcc SLAVE_KRAIT_L2>;
> > >   			operating-points-v2 = <&cpu_opp_table>;
> > >   			#cooling-cells = <2>;
> > > @@ -100,6 +109,9 @@ CPU3: cpu@3 {
> > >   			clocks = <&kraitcc KRAIT_CPU_3>;
> > >   			clock-names = "cpu";
> > >   			clock-latency = <100000>;
> > > +			vdd-mem-supply = <&pm8921_l24>;
> > > +			vdd-dig-supply = <&pm8921_s3>;
> > > +			vdd-core-supply = <&saw3_vreg>;
> > >   			interconnects = <&kraitcc MASTER_KRAIT_L2 &kraitcc SLAVE_KRAIT_L2>;
> > >   			operating-points-v2 = <&cpu_opp_table>;
> > >   			#cooling-cells = <2>;
> > > @@ -132,6 +144,81 @@ cpu_opp_table: opp-table-cpu {
> > >   		opp-384000000 {
> > >   			opp-hz = /bits/ 64 <384000000>;
> > >   			opp-peak-kBps = <384000>;
> > > +			opp-microvolt-speed0-pvs0 = <1050000 1050000 1150000>,
> > > +						    <950000 950000 1150000>,
> > > +						    <950000 950000 975000>;
> > 
> > I think this won't result in the correct switch order without making
> > some changes to the OPP core. In _set_opp() the OPP core does:
> > 
> > 	/* Scaling up? Configure required OPPs before frequency */
> > 	if (!scaling_down) {
> > 		_set_required_opps();
> > 		_set_opp_bw();
> > 		opp_table->config_regulators();
> > 	}
> > 
> > 	opp_table->config_clks();
> > 
> > 	/* Scaling down? Configure required OPPs after frequency */
> > 	if (scaling_down) {
> > 		opp_table->config_regulators();
> > 		_set_opp_bw();
> > 		_set_required_opps();
> > 	}
> > 
> > Since the "bandwidth" for the L2 cache is set before the regulators,
> > there is a short window where the L2 clock runs at a high frequency
> > with too low a voltage, which could potentially cause instability.
> > On downstream this seems to be done in the proper order [1].
> > 
> > I'm not sure if the order in the OPP core is intentional. If not, you
> > could propose moving config_regulators() first (for scaling up)
> > and last (for scaling down). This would resolve the problem.
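> > 
> > Roughly, a sketch of the reordered sequence (not actual OPP core
> > code):
> > 
> > 	/* Scaling up? Raise the voltage before bandwidth and frequency */
> > 	if (!scaling_down) {
> > 		opp_table->config_regulators();
> > 		_set_required_opps();
> > 		_set_opp_bw();
> > 	}
> > 
> > 	opp_table->config_clks();
> > 
> > 	/* Scaling down? Lower the voltage after bandwidth and frequency */
> > 	if (scaling_down) {
> > 		_set_opp_bw();
> > 		_set_required_opps();
> > 		opp_table->config_regulators();
> > 	}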
> 
> Nice catch, I missed this ordering point.
> 
> > 
> > The alternative that I've already argued for on IRC in #linux-msm a
> > couple of days ago would be to give the L2 cache (here: "interconnect")
> > its own OPP table where it can describe its voltage requirements,
> > independent of the CPU. That way icc_set_bw() would be guaranteed
> > to apply the correct voltage before adjusting the L2 cache clock. It
> > looks like the "l2_level" voltages for vdd_dig and vdd_mem are not
> > speedbin/PVS-specific [2], so this would also significantly reduce the
> > DT size, since you wouldn't need to repeat the same vdd_dig/vdd_mem
> > voltages for all of them.
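> > 
> > For illustration, a hypothetical (untested) table, with the L2
> > frequency and voltages copied from the 384 MHz CPU OPP above just as
> > an example:
> > 
> > 	l2_opp_table: opp-table-l2 {
> > 		compatible = "operating-points-v2";
> > 
> > 		opp-384000000 {
> > 			opp-hz = /bits/ 64 <384000000>;
> > 			/* one <target min max> triplet per supply: vdd-mem, vdd-dig */
> > 			opp-microvolt = <1050000 1050000 1150000>,
> > 					<950000 950000 1150000>;
> > 		};
> > 	};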
> 
> Yes. In fact, our discussion is what triggered me to do this patchset.
> 
> So, another option would be to have something like the following snippet. Do
> you know if we are allowed to squish additional data into the L2 cache DT
> node?
> 

I suspect no one has tried this before, so only the DT maintainers could
answer this. I would say it just follows the existing design of
clocks/-supply/OPPs on the CPU nodes. vdd-mem-supply isn't a property of
the CPU; it's a property of the L2 cache, so it actually fits better there.
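
A rough sketch of what I mean (the clock index is made up and nothing
binds to the plain "cache" compatible today, so take this purely as an
illustration):

	L2: l2-cache {
		compatible = "cache";
		cache-level = <2>;
		clocks = <&kraitcc 4>;	/* hypothetical L2 clock index */
		vdd-mem-supply = <&pm8921_l24>;
		vdd-dig-supply = <&pm8921_s3>;
		operating-points-v2 = <&l2_opp_table>;
	};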

I think the more controversial questions might be:

  - Is an L2 cache really an "interconnect"? I suppose one could argue it
    connects multiple CPU cores to a cluster (similar to how a CCI
    connects multiple clusters to a system).

  - What would bind to the l2-cache node? A separate driver? Does that
    work if it sits below the /cpus node?

Thanks,
Stephan


