Re: interconnects on Tegra

Hi Dmitry & all,

On 10/29/2018 11:56 PM, Dmitry Osipenko wrote:
> On 10/26/18 4:48 PM, Jon Hunter wrote:
>> Hi Georgi,
>> 
>> On 22/10/2018 17:36, Georgi Djakov wrote:
>>> Hello Jon and Dmitry,
>>> 
>>> I am working on an API [1] that allows consumer drivers to express
>>> their bandwidth needs between various SoC components - for
>>> example from the CPU to memory, from video decoders, DSPs, etc.
>>> The system can then aggregate the requested bandwidth between the
>>> components and set the on-chip interconnects to the optimal
>>> power/performance profile.
>>> 
>>> I was wondering if there is any DVFS management related to
>>> interconnects on Tegra platforms, as my experience is mostly with
>>> Qualcomm hardware. The reason I am asking is that I want to make
>>> sure that the API design and the DT bindings work, or at least do
>>> not conflict, with how DVFS is done on Tegra platforms. So do you
>>> know if there is any bus clock scaling or dynamic interconnect
>>> configuration done by firmware or software in the downstream
>>> kernels?
>>> 
>>> Thanks, Georgi
>>> 
>>> [1]. 
>> 
>> The downstream kernels do have a bandwidth manager driver for
>> managing the memory controller speed/latency; however, I am not
>> sure about the actual internal interconnect itself.
>> 
>> Adding the linux-tegra mailing list for visibility.
>> 
>> Cheers Jon
>> 
> 
> Hello,
> 
> I don't know much about the newer Tegras, so I will talk mostly
> about the older generations.
> 
> There are knobs for adjusting the clients' performance (priorities
> and such) on the buses (AHB, for example), but the configuration is
> kept static by both the upstream and downstream kernels, and I don't
> know if it can be changed dynamically. Memory clock scaling is done
> by the downstream kernel, but not by upstream yet.
> 
> The downstream firmware doesn't touch any bus/clock rate
> configuration, at least not on the older Tegras.
> 
> There are no specific requirements for DVFS on Tegras. Simply raise
> the regulator voltage before the clock rate goes up and lower it
> after the rate goes down. It should probably be solely up to the
> providers how to implement DVFS.

Yes, it's up to the providers how to implement it.
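
For reference, the ordering you describe usually boils down to
something like the sketch below. This is only a generic illustration
with the common clk/regulator consumer APIs, not Tegra-specific code;
a real provider would also look up the voltage for a given rate, e.g.
from an OPP table:

#include <linux/clk.h>
#include <linux/regulator/consumer.h>

/*
 * Generic DVFS ordering sketch: raise the voltage before increasing
 * the clock rate, and lower it only after the rate has gone down, so
 * the rail always satisfies the currently programmed rate.
 */
static int dvfs_set_rate(struct clk *clk, struct regulator *reg,
			 unsigned long new_rate, int new_uV)
{
	int ret;

	if (new_rate > clk_get_rate(clk)) {
		/* Going up: voltage first, then clock */
		ret = regulator_set_voltage(reg, new_uV, new_uV);
		if (ret)
			return ret;

		return clk_set_rate(clk, new_rate);
	}

	/* Going down: clock first, then voltage */
	ret = clk_set_rate(clk, new_rate);
	if (ret)
		return ret;

	return regulator_set_voltage(reg, new_uV, new_uV);
}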

> I'm interested in seeing what [1] is going to look like for the
> "CPU to memory" case. The newer Tegras have ACTMON hardware
> (drivers/devfreq/tegra-devfreq.c) that tracks the memory clients'
> activity and notifies the kernel when the memory clock rate needs to
> go up or can go down, but it only gives a hint based on past
> activity, and the kernel should bump the memory clock proactively
> when necessary. The older Tegra20 doesn't have ACTMON, so the kernel
> has to govern the "CPU to memory" requirement purely in software;
> pretty much only the CPU load can hint at the worst-case memory
> requirement. Maybe there is room for something like a CPUMemBW
> governor.

Thanks for the pointers! This seems to be a reactive approach, where
the performance decision is based on past activity. Another option
would be to be proactive and change the performance in advance, based
on actual requests from the drivers. The interconnect API allows each
consumer driver to calculate and explicitly state how much average and
peak bandwidth it needs for each path. These requests are then
aggregated and applied by the platform-specific implementation.
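
To make that concrete, a consumer driver would do something roughly
like the below. This is only a sketch based on the current patchset,
so the function names (of_icc_get(), icc_set_bw(), icc_put()), the
"dma-mem" path name and the bandwidth values are illustrative and may
still change:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

/* Hypothetical values, in whatever units the API ends up defining */
#define FOO_AVG_BW	1000
#define FOO_PEAK_BW	2000

static int foo_request_bandwidth(struct device *dev)
{
	struct icc_path *path;
	int ret;

	/* Look up the device-to-memory path described in DT */
	path = of_icc_get(dev, "dma-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* Express the average and peak bandwidth this use-case needs */
	ret = icc_set_bw(path, FOO_AVG_BW, FOO_PEAK_BW);
	if (ret) {
		icc_put(path);
		return ret;
	}

	/*
	 * When the device goes idle, the request can be dropped with
	 * icc_set_bw(path, 0, 0) and the path released with icc_put().
	 */
	return 0;
}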

> There is PM_QOS_MEMORY_BANDWIDTH, which was added to upstream ~4
> years ago and seems not to have gotten any users. Is the
> "interconnect API" going to replace that PM_QOS API? It looks like
> it has the same intention.

I believe that the use-cases are a bit different. The purpose of the
interconnect API is to deal with complex, multi-tiered bus topologies
and to support devices that have multiple links between them.
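
For example, a device with two ports (say one for DMA traffic and one
for register access) could hold a separate request on each link, and
the framework would aggregate them independently. A hypothetical
sketch, with the same assumptions about the API as above and arbitrary
numbers:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

static int foo_request_links(struct device *dev)
{
	struct icc_path *dma_path, *cfg_path;
	int ret;

	dma_path = of_icc_get(dev, "dma-mem");	/* e.g. master port -> DDR */
	if (IS_ERR(dma_path))
		return PTR_ERR(dma_path);

	cfg_path = of_icc_get(dev, "config");	/* e.g. config port -> register bus */
	if (IS_ERR(cfg_path)) {
		icc_put(dma_path);
		return PTR_ERR(cfg_path);
	}

	/* Different bandwidth requirements on each link */
	ret = icc_set_bw(dma_path, 800000, 1600000);
	if (!ret)
		ret = icc_set_bw(cfg_path, 1000, 1000);

	return ret;
}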

Thanks,
Georgi


