Re: interconnects on Tegra

On 29/10/2018 10:01, Thierry Reding wrote:
> On Fri, Oct 26, 2018 at 06:04:08PM +0300, Georgi Djakov wrote:
>> Hi Jon & all
>>
>> On 10/26/2018 04:48 PM, Jon Hunter wrote:
>>> Hi Georgi,
>>>
>>> On 22/10/2018 17:36, Georgi Djakov wrote:
>>>> Hello Jon and Dmitry,
>>>>
>>>> I am working on an API [1] which allows consumer drivers to express
>>>> their bandwidth needs between various SoC components - for example
>>>> from the CPU to memory, or from video decoders, DSPs, etc. The system
>>>> can then aggregate the requested bandwidth between the components and
>>>> set the on-chip interconnects to an optimal power/performance profile.
>>>>
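
For reference, a consumer would use the proposed API roughly as in the
sketch below. This is only an illustration: the of_icc_get(),
icc_set_bw() and icc_put() entry points are assumed from the interconnect
proposal, and the "video-mem" path name and the bandwidth values are
made up.

#include <linux/err.h>
#include <linux/interconnect.h>

static int foo_request_bw(struct device *dev)
{
	struct icc_path *path;
	int ret;

	/* Look up the path described in DT, e.g. from a video decoder
	 * to the memory controller ("video-mem" is a made-up name). */
	path = of_icc_get(dev, "video-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* Request average and peak bandwidth (in kBps); the framework
	 * aggregates the requests from all consumers sharing the path
	 * and configures the interconnect accordingly. */
	ret = icc_set_bw(path, 1000000, 2000000);
	if (ret) {
		icc_put(path);
		return ret;
	}

	return 0;
}
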
>>>> I was wondering if there is any DVFS management related to interconnects
>>>> on Tegra platforms, as my experience is mostly with Qualcomm hardware.
>>>> The reason I am asking is that I want to make sure that the API design
>>>> and the DT bindings would work with, or at least not conflict with, how
>>>> DVFS is done on Tegra platforms. So do you know if there is any bus clock
>>>> scaling or dynamic interconnect configuration done by firmware or
>>>> software in the downstream kernels?
>>>>
>>>> Thanks,
>>>> Georgi
>>>>
>>>> [1].
>>>> 	
>>>
>>> The downstream kernels do have a bandwidth manager driver for managing
>>> the memory controller speed/latency; however, I am not sure about the
>>> actual internal interconnect itself.
>>>
>>> Adding the linux-tegra mailing list for visibility.
>>>
>>
>> Thanks! This sounds interesting! I looked at a 4.9 kernel and found
>> references to some bwmgr functions, which look like they can do some
>> dynamic bandwidth scaling. Is the full implementation available
>> publicly? Are there any plans to upstream this?
> 
> Cc'ing Peter, who's probably the most familiar with all of this. We've
> been discussing this on and off for a while now, and the latest
> consensus was that the existing PM QoS would be a good candidate for an
> API to use for this purpose, albeit maybe not an optimal one.
> 
> Generally, the way this works on Tegra is that we have a memory bus
> clock that can be scaled, so we'd need to aggregate all of the requests
> for bandwidth and set a memory clock frequency that allows all of them
> to be met. There are also mechanisms to influence latency for certain
> requests, which can be essential to make sure isochronous clients work
> properly under memory pressure. I'm not sure we can even get into those
> situations with the feature set available upstream, but it's certainly
> something that becomes important once we do a lot of GPU, display and
> multimedia work in parallel.
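
To make that aggregation concrete, a provider-side hook could look
something like the sketch below: sum up the average bandwidth that
reaches the memory controller node and program the memory bus (EMC)
clock high enough to cover it. This is purely illustrative, not actual
Tegra code; the ->set() callback shape, the private data, and the
8-byte-wide DRAM interface are all assumptions.

#include <linux/clk.h>
#include <linux/interconnect-provider.h>
#include <linux/math64.h>

struct tegra_emc_icc {			/* hypothetical provider data */
	struct clk *emc_clk;
};

static int tegra_emc_icc_set(struct icc_node *src, struct icc_node *dst)
{
	struct tegra_emc_icc *emc = dst->data;
	unsigned long rate;

	/* dst->avg_bw is assumed to hold the aggregated average
	 * bandwidth (in kBps) of all requests targeting the memory
	 * controller node. With an 8-byte-wide DRAM interface, one
	 * clock cycle moves 8 bytes, so the required rate is
	 * bytes/sec divided by 8. */
	rate = div_u64((u64)dst->avg_bw * 1000, 8);

	return clk_set_rate(emc->emc_clk, rate);
}

Latency and isochronous handling would of course need more than this,
but the bandwidth part maps fairly naturally onto a sum-and-set model.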
> 
> It looks like the link to your implementation has gotten lost; can you
> or Jon post it again here for reference? It certainly sounds interesting
> and is something that we'd want to keep a closer eye on for our own
> implementation.

Sorry, it's here ...

https://lore.kernel.org/lkml/20180831140151.13972-1-georgi.djakov@xxxxxxxxxx/

Jon

-- 
nvpublic


