Re: [PATCH 5/7] dt-bindings: remoteproc: qcom,pas: Add hwlocks

On 24/05/2024 21:23, Bjorn Andersson wrote:
> On Thu, May 23, 2024 at 08:15:54AM +0200, Krzysztof Kozlowski wrote:
>> On 22/05/2024 19:50, Bjorn Andersson wrote:
>>>>>>>
>>>>>>> We did consider tying this to the SMEM instance, but the entity
>>>>>>> relating to the firmware is the remoteproc instance.
>>>>>>
>>>>>> I still do not understand why you have to add a hwlock to remoteproc,
>>>>>> even though it is not directly used. Your driver problem looks like a
>>>>>> lack of proper driver architecture - you want to control the locks not
>>>>>> from the layer that took the lock, but one layer up. Sorry, no, fix
>>>>>> the driver architecture.
>>>>>>
>>>>>
>>>>> No, it is the firmware's reference to the lock that is represented in
>>>>> the remoteproc node, while SMEM deals with Linux's reference to the lock.
>>>>>
>>>>> This reference would be used to release the lock - on behalf of the
>>>>> firmware - in the event that the firmware held it when it
>>>>> stopped/crashed.
>>>>
>>>> I understood, but the remoteproc driver did not acquire the hardware
>>>> lock. It was taken by smem, if I got it correctly, so you should poke
>>>> smem to bust the spinlock.
>>>>
>>>
>>> The remoteproc instance is the closest representation of the entity that
>>> took the lock (i.e. the firmware). SMEM here is just another consumer of
>>> the same lock.
>>>
>>>> The hwlock is not a property of remote proc, because remote proc does
>>>> not care, right? Other device cares... and now for every smem user you
>>>> will add new binding property?
>>>>
>>>
>>> Right, the issue seen relates to SMEM, because the remote processor (not
>>> the remoteproc driver) took the lock.
>>>
>>>> No, you are adding a binding based on your driver solution.
>>>
>>> Similarly to how hwspinlocks are used on other platforms (e.g. TI), the
>>> firmware could take multiple locks, e.g. to synchronize access to other
>>> shared memory mechanisms (i.e. not SMEM). While I am not aware of such a
>>> use case today, my expectation was that in such a case we would just list
>>> all the hwlocks related to the firmware and bust those from the
>>> remoteproc instance.
>>>
>>> Having to export an API from each such driver and make the remoteproc
>>> identify the relevant instances and call those APIs does indeed seem
>>> inconvenient.
>>> SMEM is special here because it is a singleton, but this would not
>>> necessarily be true for other cases.
>>
>> I don't think that exporting such an API is unreasonable, but quite the
>> opposite - expected. The remote processor crashed, so the remoteproc
>> driver is supposed to call some sort of smem_cleanup() or
>> smem_cleanup_on_crash(), and that call would bust/release the lock. That
>> way, lock handling is encapsulated entirely in the one driver which
>> already takes and releases the lock.
>>
> 
> I don't agree.
> 
> SMEM does indeed acquire and release the same, shared, lock. But the
> SMEM driver instance on our side is not involved in taking the lock for
> the firmware.
> 
> There exists an equivalent SMEM driver instance in the firmware that
> crashed, and it is that instance's hold on the lock that needs to be
> released.

Then please include relevant explanation in the commit msg.

> 
> 
> We're also not tearing down, or cleaning up, anything in our SMEM
> instance. It is simply "when remoteproc id N dies, check whether N is
> holding the lock and, if so, force a release of the lock - so that others
> can grab it".
> 
>> Just like freeing any memory: the remoteproc driver does not free another
>> driver's memory only because the processor crashed.
>>
> 
> That's a good comparison: when the firmware running on the remote
> processor crashes, it is indeed the job of the remoteproc driver to clean
> up the memory allocated to run the remote processor.


Best regards,
Krzysztof




