Re: interconnects on Tegra

Hi Thierry & all,

On 10/29/2018 12:18 PM, Jon Hunter wrote:
> 
> On 29/10/2018 10:01, Thierry Reding wrote:
>> On Fri, Oct 26, 2018 at 06:04:08PM +0300, Georgi Djakov wrote:
>>> Hi Jon & all,
>>>
>>> On 10/26/2018 04:48 PM, Jon Hunter wrote:
>>>> Hi Georgi,
>>>>
>>>> On 22/10/2018 17:36, Georgi Djakov wrote:
>>>>> Hello Jon and Dmitry,
>>>>>
>>>>> I am working on an API [1] that allows consumer drivers to express
>>>>> their bandwidth needs between various SoC components - for example
>>>>> from the CPU to memory, or from video decoders and DSPs to memory.
>>>>> Then the system can aggregate the needed bandwidth between the
>>>>> components and set the on-chip interconnects to the most optimal
>>>>> power/performance profile.
>>>>>
>>>>> I was wondering if there is any DVFS management related to interconnects
>>>>> on Tegra platforms, as my experience is mostly with Qualcomm hardware.
>>>>> The reason I am asking is that I want to make sure that the API design
>>>>> and the DT bindings work with, or at least do not conflict with, how
>>>>> DVFS is done on Tegra platforms. So do you know if there is any bus clock
>>>>> scaling or dynamic interconnect configuration done by firmware or
>>>>> software in downstream kernels?
>>>>>
>>>>> Thanks,
>>>>> Georgi
>>>>>
>>>>> [1].
>>>>> 	
>>>>
>>>> The downstream kernels do have a bandwidth manager driver for managing
>>>> the memory controller speed/latency; however, I am not sure about the
>>>> actual internal interconnect itself.
>>>>
>>>> Adding the linux-tegra mailing list for visibility.
>>>>
>>>
>>> Thanks! This sounds interesting! I looked at a downstream 4.9 kernel
>>> and found references to some bwmgr functions, which look like they can
>>> do dynamic bandwidth scaling. Is the full implementation available
>>> publicly? Are there any plans to upstream this?
>>
>> Cc'ing Peter who's probably the most familiar with all of this. We've
>> been discussing this on and off for a while now, and the latest
>> consensus was that the existing PM QoS would be a good candidate for an
>> API to use for this purpose, albeit maybe not optimal.

Yes, indeed the PM QoS interface was the closest candidate for extending
when I looked at this initially. The problem with that approach was that
it's not suitable for configuring multi-tiered bus topologies, and
extending it would require many changes that could end up conflicting
with existing users of the API.
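
To illustrate - just a sketch against the PM QoS API of that era, with
hypothetical names like foo_qos - a request attaches a single scalar
value to one global class (or one device), so there is no way to express
"this much bandwidth from endpoint A to endpoint B":

#include <linux/pm_qos.h>

static struct pm_qos_request foo_qos;

static void foo_request_latency(void)
{
	/*
	 * One scalar against one system-wide class - nothing here can
	 * name a source/destination pair or a route through a
	 * multi-tiered bus fabric.
	 */
	pm_qos_add_request(&foo_qos, PM_QOS_CPU_DMA_LATENCY, 20 /* usec */);
}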

>> Generally the way that this works on Tegra is that we have a memory bus
>> clock that can be scaled, so we'd need to aggregate all of the requests
>> for bandwidth and set a memory clock frequency that allows all of those
>> to be met. There are also mechanisms to influence latency for certain
>> requests which can be essential to make sure isochronous clients work
>> properly under memory pressure. I'm not sure we can even get into those
>> situations with the feature set available upstream, but it's certainly
>> something that's important once we do a lot of GPU, display and
>> multimedia in parallel.

Thank you! This sounds very similar to the problem I am trying to solve.
It seems to me that the interconnect API would be a perfect fit for
Tegra too. There is a proposal for a device-tree binding to describe the
paths between SoC components, and I am trying to collect more information
on whether this would be useful for other platforms. If you have any
comments, feel free to respond to the discussion [2].
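
For illustration, a Tegra provider could aggregate all requested
bandwidths and translate the sum into a memory bus clock rate, roughly
along these lines. This is only a sketch - the helper name, the clock
handle and the bus width parameter are all hypothetical:

#include <linux/clk.h>
#include <linux/math64.h>

/* Hypothetical: apply the sum of all average bandwidth requests
 * (in kB/s) by scaling the external memory controller clock. */
static int tegra_emc_apply_bw(struct clk *emc_clk, u64 total_avg_kbps,
			      unsigned int bus_width_bytes)
{
	unsigned long rate;
	long rounded;

	/* kB/s -> bytes/s, then divide by the bus width to get Hz */
	rate = div_u64(total_avg_kbps * 1000ULL, bus_width_bytes);

	/* Pick the closest rate the memory controller supports */
	rounded = clk_round_rate(emc_clk, rate);
	if (rounded < 0)
		return rounded;

	return clk_set_rate(emc_clk, rounded);
}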

The general idea is that you use the "interconnects" property in DT to
describe the paths that are used by devices. The interconnect API follows
the consumer-provider model already used by the clock and regulator
frameworks, and the usage is similar. Developers need to implement
platform-specific provider drivers that know the SoC topology and do the
aggregation and low-level hardware configuration. I am not sure what the
exact implementation for Tegra platforms would be, but I expect that it
involves changing the rate of some clocks or writing to some registers.
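
To make this concrete, here is roughly what it could look like for a
hypothetical Tegra consumer. The node names, specifier values and
bandwidth numbers below are made up for illustration; the calls follow
the consumer API proposed in the series:

	vde@6001a000 {
		...
		interconnects = <&mc TEGRA_ICC_VDE &mc TEGRA_ICC_EMEM>;
		interconnect-names = "dma-mem";
	};

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

/* Hypothetical consumer: request 100 MB/s average and 200 MB/s peak
 * (the API takes values in kB/s) on the "dma-mem" path from DT. */
static int foo_start_streaming(struct device *dev)
{
	struct icc_path *path;
	int ret;

	path = of_icc_get(dev, "dma-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	ret = icc_set(path, 100000, 200000);
	if (ret) {
		icc_put(path);
		return ret;
	}

	return 0;
}

On the provider side, the driver would register the topology nodes and
implement the aggregate/set callbacks, which on Tegra could end up doing
something like the EMC clock sketch above.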

Thanks,
Georgi

[2]. https://lore.kernel.org/lkml/20180925180215.GA12435@bogus/

>> It looks like the link to your implementation has gotten lost; can you
>> or Jon post it again here for reference? It certainly sounds
>> interesting and something that we'd want to keep a closer eye on for our
>> implementation.
> 
> Sorry, it's here ...
> 
> https://lore.kernel.org/lkml/20180831140151.13972-1-georgi.djakov@xxxxxxxxxx/
> 
> Jon
> 


