Re: interconnects on Tegra

Hi Sanjay,

On 12/4/18 23:17, Sanjay Chandrashekara wrote:
> Hi Georgi,
> 
> Please find my question inline, marked with [Sanjay].
> 
> -----Original Message-----
> From: Georgi Djakov <georgi.djakov@xxxxxxxxxx>
> Sent: Monday, November 19, 2018 7:38 AM
> To: Krishna Sitaraman <ksitaraman@xxxxxxxxxx>; Thierry Reding <thierry.reding@xxxxxxxxx>
> Cc: Jonathan Hunter <jonathanh@xxxxxxxxxx>; Peter De Schrijver <pdeschrijver@xxxxxxxxxx>; Dmitry Osipenko <digetx@xxxxxxxxx>; linux-tegra <linux-tegra@xxxxxxxxxxxxxxx>; Sanjay Chandrashekara <sanjayc@xxxxxxxxxx>; Linux PM list <linux-pm@xxxxxxxxxxxxxxx>
> Subject: Re: interconnects on Tegra
> 
> Hi Thierry and Krishna,
> 
> On 11/17/18 01:16, Krishna Sitaraman wrote:
>> My responses are inline. (I am using Outlook and currently my
>> responses are marked with [Krishna S]. In the near future I will
>> switch to a more Linux-friendly indentation, so please bear with
>> me until then.)
>> 
>> -----Original Message-----
>> From: Thierry Reding <thierry.reding@xxxxxxxxx>
>> Sent: Friday, November 16, 2018 3:54 AM
>> To: Georgi Djakov <georgi.djakov@xxxxxxxxxx>
>> Cc: Krishna Sitaraman <ksitaraman@xxxxxxxxxx>; Jonathan Hunter <jonathanh@xxxxxxxxxx>; Peter De Schrijver <pdeschrijver@xxxxxxxxxx>; Dmitry Osipenko <digetx@xxxxxxxxx>; linux-tegra <linux-tegra@xxxxxxxxxxxxxxx>; Sanjay Chandrashekara <sanjayc@xxxxxxxxxx>; Linux PM list <linux-pm@xxxxxxxxxxxxxxx>
>> Subject: Re: interconnects on Tegra
>> 
>> On Fri, Nov 16, 2018 at 01:06:46PM +0200, Georgi Djakov wrote:
>>> Hi,
>>> 
>>> Adding linux-pm list.
>>> 
>>> On 11/15/18 18:47, Thierry Reding wrote:
>>>> On Thu, Nov 15, 2018 at 01:17:58AM +0000, Krishna Sitaraman 
>>>> wrote:
>>>>> Thierry, thanks for looping us in. Reading this thread, I
>>>>> believe Peter has given a brief overview of how we manage the
>>>>> memory bus clock along with latency and priority information
>>>>> for isochronous clients of the memory subsystem. I want to add
>>>>> to this:
>>>>> - The total amount of isochronous bandwidth achievable is
>>>>> limited, so we have to arbitrate the available iso bandwidth
>>>>> among the clients at runtime. Our Tegra framework therefore
>>>>> provides a mechanism for clients to check and then lock a
>>>>> particular iso bandwidth before attempting to switch to the
>>>>> desired mode that uses it.
>>>>> - The Tegra framework also provides a mechanism for isochronous
>>>>> clients to pass latency and/or priority information to the
>>>>> memory arbitration hardware.
>>>>> 
>>>>> The interconnect framework seems to be a good fit and we
>>>>> might be able to make use of it. However, there is some
>>>>> additional functionality we would like to request or suggest
>>>>> that would help us use the interconnects.
>>> 
>>> Thanks for the comments!
>>> 
>>>>> 
>>>>> Listing them out here:
>>>>> 
>>>>> 1. The isochronous bandwidth manager needs to provide
>>>>> feedback to the clients (consumers) so they know whether a
>>>>> particular iso bandwidth request is possible before making a
>>>>> definite switch. For example, the display wants to know
>>>>> whether a mode is possible before switching to the new
>>>>> configuration. The interconnect framework needs a way for the
>>>>> provider to give feedback to the requesting consumer: a check
>>>>> or is_possible request before the actual set request.
>>>> 
>>>> How would we handle races with other bandwidth requests? Let's
>>>> say the display driver requests a check for a certain
>>>> bandwidth, then gets back a positive reply, after which it
>>>> would go make the actual request, but at that point somebody
>>>> else might have already requested additional bandwidth, in turn
>>>> making the actual request fail.
>>>> 
>>>> Would it perhaps be a good idea to just make the actual request
>>>> fail if there's not enough bandwidth and allocate the requested
>>>> bandwidth if it is available? That way after the consumer gets
>>>> back a positive reply to the request it knows that it can use
>>>> it. On the other hand if the reply was negative it can
>>>> downgrade to some other mode and retry.
>>>> 
>>>> Any solution that involves two separate steps would probably
>>>> require some way of locking the provider until the actual
>>>> request has been satisfied, right?
>>> 
>>> Agree with Thierry.
>>> 
>>> So icc_set() propagates any non-zero return value back to the
>>> caller. The interconnect platform driver could return -EAGAIN or
>>> -EBUSY, or any other value (the remaining bandwidth?), to indicate
>>> that the bandwidth is not available. This could then be handled by
>>> the consumer driver (e.g. the display driver would retry with a
>>> lower resolution). Does this sound like an option for you?
>>> 
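For illustration, here is a rough consumer-side sketch of that fallback
(it assumes the icc_set(path, avg_bw, peak_bw) prototype; the mode table
and function names below are made up for the example):

#include <linux/errno.h>
#include <linux/interconnect.h>
#include <linux/types.h>

/* Hypothetical per-mode bandwidth needs, ordered highest to lowest */
struct display_mode_bw {
        u32 avg_bw;
        u32 peak_bw;
};

static int display_pick_mode(struct icc_path *path,
                             const struct display_mode_bw *modes,
                             int num_modes)
{
        int i, ret;

        for (i = 0; i < num_modes; i++) {
                ret = icc_set(path, modes[i].avg_bw, modes[i].peak_bw);
                if (!ret)
                        return i;       /* bandwidth committed, use mode i */
                if (ret != -EBUSY && ret != -EAGAIN)
                        return ret;     /* unrelated error, give up */
                /* not enough iso bandwidth left, try the next lower mode */
        }

        return -ENOSPC;
}

A successful icc_set() already guarantees the bandwidth to the caller,
so no separate check step is needed.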
>> 
>> [Krishna S] We do handle the race condition. Internally, in Tegra,
>> we call these functions reserve() and realize(). During a reserve()
>> call, the client requests a certain iso bandwidth, and if it is
>> available, the iso bandwidth manager reserves the requested
>> bandwidth for this client and responds with a positive reply. The
>> reserved bandwidth is treated as committed and is not allocated to
>> other clients. When the client makes the realize() call, the actual
>> memory parameters are modified according to the previously reserved
>> value. The critical part of the reserve() function is protected by
>> locks in order to avoid race conditions. If icc_set() can propagate
>> a non-zero return value, that is a good option to consider.
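Just as a reference for how that critical section can be expressed in
generic terms, here is a small illustrative sketch (the budget variables
and the helper name are hypothetical, not taken from the Tegra code or
from the interconnect patches):

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/types.h>

static DEFINE_MUTEX(iso_lock);
static u32 iso_bw_total;        /* total iso budget of the platform */
static u32 iso_bw_reserved;     /* sum of all committed reservations */

/* Atomically replace a client's old reservation with a new one */
static int iso_bw_reserve(u32 new_bw, u32 old_bw)
{
        int ret = 0;

        mutex_lock(&iso_lock);
        if (iso_bw_reserved - old_bw + new_bw > iso_bw_total)
                ret = -EBUSY;   /* caller may retry with a smaller request */
        else
                iso_bw_reserved = iso_bw_reserved - old_bw + new_bw;
        mutex_unlock(&iso_lock);

        return ret;
}

Because the check and the update happen under the same lock, a zero
return really means the bandwidth is committed to the caller.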
> 
> Ok, good. Thanks for confirming!
> 
> [Sanjay]: Yes, using the value returned by icc_set() looks like a
> good solution. However, when icc_set() fails, we should undo what
> aggregate_requests(node) did earlier. Otherwise the icc_node's
> (avg_bw, peak_bw) values and the interconnect summary debugfs node
> would not be accurate. Please let me know your thoughts on this.

This makes sense to me. I will prepare a patch to revert
to the previous state if any link fails to apply the new bandwidth
configuration.
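The rough idea is to remember the previous request on each node of the
path and restore it if applying the new constraints fails, so that the
aggregated values and the debugfs summary stay accurate. A simplified
sketch with stand-in types (not the actual framework internals) looks
like this:

#include <linux/errno.h>
#include <linux/types.h>

/* Stand-in for a per-node bandwidth request on a path */
struct bw_request {
        u32 avg_bw;
        u32 peak_bw;
};

/*
 * Update every request on the path, re-aggregate and apply per node;
 * on failure, restore the old request values and re-apply so that the
 * aggregated bandwidth matches what the hardware is actually set to.
 * Assumes all requests on this path carried the same old values.
 */
static int path_set_bw(struct bw_request *reqs, int num_nodes,
                       u32 avg_bw, u32 peak_bw,
                       int (*aggregate_and_apply)(int node))
{
        u32 old_avg, old_peak;
        int i, ret = 0;

        if (num_nodes < 1)
                return -EINVAL;

        old_avg = reqs[0].avg_bw;
        old_peak = reqs[0].peak_bw;

        for (i = 0; i < num_nodes; i++) {
                reqs[i].avg_bw = avg_bw;
                reqs[i].peak_bw = peak_bw;
                ret = aggregate_and_apply(i);
                if (ret)
                        break;
        }

        if (ret) {
                /* roll back the nodes we touched, including the failed one */
                for (; i >= 0; i--) {
                        reqs[i].avg_bw = old_avg;
                        reqs[i].peak_bw = old_peak;
                        aggregate_and_apply(i);
                }
        }

        return ret;
}

In the real code this would operate on the path's request list and
re-run the aggregation per node, but the save/restore pattern is the
same.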

Thanks,
Georgi


