Re: [PATCH V3 7/8] spi: spi-qcom-qspi: Add interconnect support

Hi Akash,

On 4/10/20 10:31, Akash Asthana wrote:
> Hi Georgi,
> 
> On 4/9/2020 6:47 PM, Georgi Djakov wrote:
>> Hi Akash,
>>
>> On 4/8/20 15:17, Akash Asthana wrote:
>>> Hi Mark, Evan, Georgi,
>>>
>>> On 4/7/2020 4:25 PM, Mark Brown wrote:
>>>> On Tue, Apr 07, 2020 at 03:24:42PM +0530, Akash Asthana wrote:
>>>>> On 3/31/2020 4:53 PM, Mark Brown wrote:
>>>>>>> +    ctrl->avg_bw_cpu = Bps_to_icc(speed_hz);
>>>>>>> +    ctrl->peak_bw_cpu = Bps_to_icc(2 * speed_hz);
>>>>>> I thought you were going to factor this best guess handling of peak
>>>>>> bandwidth out into the core?
>>>>> I can centralize this for SPI, I2C and UART in the common driver (QUP wrapper),
>>>>> but for QSPI I have to keep this piece of code as is, because it is not a
>>>>> child of the QUP wrapper (it doesn't use the common code).
>>>> Why not?
>>>>
>>>>> I am not sure whether I can move this "assume peak_bw as twice the avg_bw if
>>>>> nothing is mentioned explicitly" to the ICC core, because the factor of 2 was
>>>>> chosen randomly by me.
>>>> That's the whole point - if this is just a random number then we may as
>>>> well at least be consistently random.
>>> Can we centralize the below logic of peak_bw selection for all the clients into the ICC core?
>> I don't think this is a good idea for now, because this is very hardware
>> specific. A scaling factor that works for one client might not work for another.
>>
>> My question here is: how did you decide on this "multiply by two"? I can imagine
>> that the traffic can be bursty on some interfaces, but is the factor here really
>> a "random number" or is this based on some data patterns or performance
>> analysis?
> 
> The factor of 2 is a random number.
> 
> We are taking care of the actual throughput requirement in the avg_bw vote, and
> the intention of putting peak at twice the avg is to ensure that if high
> speed peripherals (e.g. USB) remove their votes, we shouldn't see any
> latency issues because of other ICC clients that don't vote for their BW
> requirement, or for their *actual* BW requirement.

Thanks for clarifying, but is this latency a confirmed issue on real hardware?
I guess voting for twice the average will work, but I'm still wondering whether
it wouldn't be more appropriate to handle this in the interconnect platform
driver instead, along the lines of the sketch below. Also, why is the other
client not voting, and can we fix it?
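
For illustration only, something like this in the provider's ->aggregate()
callback (a rough sketch; the function name and the factor of 2 are made up
here, this is not an existing implementation):

#include <linux/interconnect-provider.h>

/*
 * Sketch: apply the "guess" once in the provider instead of in every
 * client. If a consumer leaves peak_bw at 0, derive it from avg_bw
 * with a platform-chosen factor (the 2 here is purely illustrative).
 */
static int example_icc_aggregate(struct icc_node *node, u32 tag,
                                 u32 avg_bw, u32 peak_bw,
                                 u32 *agg_avg, u32 *agg_peak)
{
        if (!peak_bw)
                peak_bw = 2 * avg_bw;

        *agg_avg += avg_bw;
        *agg_peak = max(*agg_peak, peak_bw);

        return 0;
}

Then the clients would only vote for the bandwidth they actually know about.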

Thanks,
Georgi


