Re: [PATCH V2] arm64: dts: qcom: sc7280: Add nodes for eMMC and SD card

On 2021-04-30 02:14, Georgi Djakov wrote:
On 28.04.21 18:13, Doug Anderson wrote:
Hi,

On Wed, Apr 28, 2021 at 3:47 AM <sbhanu@xxxxxxxxxxxxxx> wrote:

On 2021-04-21 01:44, Doug Anderson wrote:
Hi,

On Tue, Apr 20, 2021 at 10:21 AM <sbhanu@xxxxxxxxxxxxxx> wrote:

On 2021-04-15 01:55, Doug Anderson wrote:
Hi,

On Tue, Apr 13, 2021 at 3:59 AM <sbhanu@xxxxxxxxxxxxxx> wrote:

+                                       required-opps = <&rpmhpd_opp_low_svs>;
+                                       opp-peak-kBps = <1200000 76000>;
+                                       opp-avg-kBps = <1200000 50000>;
Why are the kBps numbers so vastly different from the ones on sc7180
for the same OPP point? That implies:

a) sc7180 is wrong.

b) This patch is wrong.

c) The numbers are essentially random and don't really matter.

Can you identify which of a), b), or c) is correct, or propose an
alternate explanation of the difference?


We calculated the bus vote values for both sc7180 and sc7280 with the
ICB tool; the values above are what we got for sc7280.

I don't know what an ICB tool is. Please clarify.

Also: just because a tool spits out numbers doesn't mean they're correct. Presumably the tool could be wrong or incorrectly configured.
We need to understand why these numbers are different.

We checked with the ICB tool team on this. They confirmed that, since
Rennell and Kodiak are different chipsets, we might see a delta in the
ib/ab values due to a delta in the scaling factors.

...but these numbers are in kBps, aren't they? As I understand it,
these aren't supposed to be random numbers spit out by a tool; they're
supposed to express how much bandwidth an IP block (like MMC) needs
from the buses it's connected to. Since the MMC IP block on sc7180 and
sc7280 is roughly the same, there shouldn't be a big difference in the
numbers.

Something smells wrong.

Adding a few people who understand interconnects better than I do,
though.


The ICB team has re-checked the Rennell ICB tool and confirmed that some
configs in it were wrong; they have corrected them. With the updated
Rennell ICB tool, below are the values:


Rennell LC (SC7180):

opp-384000000 {
               opp-hz = /bits/ 64 <384000000>;
               required-opps = <&rpmhpd_opp_nom>;
               opp-peak-kBps = <5400000 490000>;
               opp-avg-kBps = <6600000 300000>;
};


And now these values are close to the Kodiak LC values:

Kodiak LC (SC7280):

opp-384000000 {
             opp-hz = /bits/ 64 <384000000>;
             required-opps = <&rpmhpd_opp_nom>;
             opp-peak-kBps = <5400000 399000>;
             opp-avg-kBps = <6000000 300000>;
};

This still isn't making sense to me.

* sc7180 and sc7280 are running at the same speed. I'm glad the
numbers are closer now, but I would have thought they'd be exactly the
same.

* Aren't these supposed to be sensible? This is eMMC that does max
transfer rates of 400 megabytes / second to the external device. You
have bandwidths listed here of 5,400,000 kBps = 5,400,000 kilobytes /
second = 5400 megabytes / second. I can imagine there being some
overhead where an internal bus might need to be faster but that seems
excessive. This is 13.5x!


These numbers are not related to SDCC bandwidth; they are the values
needed for the NoCs to run at the nominal voltage corner (internal to
hardware), which in turn lets SDCC run at nominal to get the required
throughput (384 MBps). So the calculation you mention above is not
applicable here.

OK. I guess if everyone else understands this and it's just me that
doesn't then I won't stand in the way. In general, though, the device
tree is supposed to be describing the hardware in a way that makes
sense on its own. It's not a place to just dump in magic numbers.
These numbers must be somehow related to the transfer rate of the SD
card since otherwise they wouldn't scale up with faster card clocks.
Given that these numbers are expressed in "kBps" (since you're storing
them in a property that has "kBps" in the name), I would expect that
these numbers are expressing some type of bandwidth. I still haven't
really understood why you have to scale some bandwidth at over 10x the
card clock speed.

Said another way: you're saying that you need these numbers because
they make a whole bunch of math work out. I'm saying that these aren't
just supposed to be magic numbers. They're supposed to make sense on
their own and you should be able to describe to me how you arrived at
these numbers in a way that I could do the math on my own. Saying "we
plugged this into some program and it spit out these numbers" isn't
good enough.

Agree.


Peak bandwidth is an instantaneous bandwidth used as a floor vote to take care
of latency (in this case for DDR). It is a mechanism to vote for a floor
frequency to counter latency, as opposed to an actual bandwidth requirement.

So a client could say "I need the clock to run at 200 MHz" and simply take the
bus width times the frequency as the required peak bandwidth vote
(peak bandwidth vote = 200 MHz * bus_width) and vote for it.
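
As a made-up illustration (assuming, purely for this example, a 4-byte bus
width, which is not a number taken from this thread):

    peak bandwidth vote = 200 MHz * 4 bytes = 800000 kBps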

So we are passing peak bandwidth votes for DDR and CNoC for the nominal
frequencies from the device tree.

The SDCC clocks run at nominal frequencies and are powered by the CX rail.
The same CX rail also powers the DDR clocks, so the DDR clocks can also scale
up to nominal frequencies without drawing any extra power from the CX rail.
This helps to get optimal performance.

So, doing the math with the DDR nominal frequency (1.3 GHz), and also
considering Georgi's point [if some links between nodes consist of multiple
channels, or there is anything specific to the topology or the hardware
platform (scaling factors, bus width, etc.), this should be handled in the
interconnect provider driver], we get values close to 5400000 kBps.

The same applies to the CNoC config path nominal frequency (403 MHz), where we get values close to 1600000 kBps.

Math used:
peak bandwidth = minimum DDR frequency * effective width; // 4 bytes for DDR on SC7280
average bandwidth = the actual throughput requirement.
Considering the above points, the new bandwidth values look as below.


opp-384000000 {
               opp-hz = /bits/ 64 <384000000>;
               required-opps = <&rpmhpd_opp_nom>;
               opp-peak-kBps = <5400000 1600000>;
               opp-avg-kBps = <390000 0>;
};


Similarly for 100 MHz:


opp-100000000 {
               opp-hz = /bits/ 64 <100000000>;
               required-opps = <&rpmhpd_opp_low_svs>;
               opp-peak-kBps = <1800000 400000>;
               opp-avg-kBps = <100000 0>;
};
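
Running the same formula backwards as a sanity check (the low-SVS DDR and
CNoC frequencies are not stated in this thread, so these are only inferred):
1800000 kBps / 4 bytes = 450 MHz for the DDR floor, and
400000 kBps / 4 bytes = 100 MHz for the CNoC path.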


* I can't see how it can make sense that "average" values are higher
than "peak" values.


Here actual peak = peak number * 2, and actual average = average number.

This multiplication is taken care of by the ICC driver, so technically the
actual peak is still higher than the average.

Sorry, but that is really counter-intuitive. Georgi: is that how this
is normally expected to work?

Average bandwidth being higher than peak does not make sense to me.
The numbers in DT should reflect the real bandwidth that is being
requested. If some links between nodes consist of multiple channels,
or there is anything specific to the topology or the hardware platform
(scaling factors, buswidth, etc), this should be handled in the
interconnect provider driver. The goal is to use bandwidth values and
not magic numbers.
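
For illustration only (this is not the actual sc7280/qcom provider code, and
the factor of 2 is a made-up placeholder): a provider's ->aggregate() callback
is one natural place to fold in a platform-specific factor such as a channel
count or bus-width multiplier, so the consumer's DT can keep real bandwidth
values.

#include <linux/interconnect-provider.h>
#include <linux/minmax.h>

/* Rough sketch only -- not the real qcom implementation. */
static int example_icc_aggregate(struct icc_node *node, u32 tag,
				 u32 avg_bw, u32 peak_bw,
				 u32 *agg_avg, u32 *agg_peak)
{
	/*
	 * Hypothetical platform scaling: whatever multiplier the topology
	 * needs (channels, bus width, etc.) is applied here, not baked
	 * into the consumer's DT numbers.
	 */
	peak_bw *= 2;		/* placeholder scaling factor */

	*agg_avg += avg_bw;
	*agg_peak = max(*agg_peak, peak_bw);

	return 0;
}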

Thanks,
Georgi

Sure, we will update these average bandwidth vote values.


