Re: [PATCH v2 1/2] mailbox: add async request mechanism to empower controllers w/ hw queues

On Thu, Oct 24, 2024 at 5:45 AM Tudor Ambarus <tudor.ambarus@xxxxxxxxxx> wrote:
>
>
>
> On 10/24/24 2:27 AM, Jassi Brar wrote:
> > Hi Tudor,
> >
>
> Hi, Jassi!
>
> I appreciate that you respond quickly to my emails, thanks!
>
> > On Tue, Oct 22, 2024 at 8:27 AM Tudor Ambarus <tudor.ambarus@xxxxxxxxxx> wrote:
> >>
> >> Hi, Jassi,
> >>
> >> On 10/21/24 5:32 PM, Jassi Brar wrote:
> >>>> On 10/18/24 8:49 AM, Tudor Ambarus wrote:
> >>>
> >>>> The active request is considered completed when TX completes. But it seems
> >>>> that TX is not in direct relation with RX,
> >>>
> >>> TX and RX are assumed independent operations (which they are).
> >>> TX is sending a message/request to the remote successfully. 'Success'
> >>> can be indicated by any of the three methods: TXDONE_BY_{POLL, IRQ,
> >>> ACK}.
> >>> You seem to assume it should always be an ACK where we receive an
> >>> acknowledgment/response packet, which is not the case.
> >>
> >> My controller driver indeed ties TX to RX and considers the request
> >> completed when the RX completes.
> >>
> > Does your controller require the RX, or does the protocol? I suspect the latter.
>
> The response from remote always happens for the acpm protocol. Same for
> the plain (no acpm or SRAM involved) mailbox controller that I see in
> downstream.
>
> While the response from remote always happens, the RX data is optional.
> Clients can choose whether they want the data from the RX ring or not
> (see `struct exynos_acpm_rx_data`).
>
> For each message that is sent on the TX ring, it is expected that the
> remote consumes it. The remote gets the message from the TX queue,
> advances the rear index of the TX ring, processes the request, writes
> the response message (if any) on the linux RX ring and advances the
> front index of the linux RX ring.
>
> > Anyway, the former is also supported by TXDONE_BY_ACK already.
>
> If we want to focus on when TX is done and really be strict about it,
> then for my case TX can be considered done when the remote consumes it.
> I need to poll and check when the linux TX ring rear index is moved by
> the remote. Another option is to poll the IRQ status register of the
> remote to see when the request was consumed. So maybe TXDONE_BY_POLL is
> more accurate?
> TX can also be considered done once what we write to the TX ring hits
> the endpoint-SRAM, in which case neither of these flags is needed.
>
> As a side note on IRQs, the acpm protocol allows channels to work
> either in polling or in IRQ mode. I expect that in the future we'll
> need to specify the done method per channel and not per controller.
> IRQs are not used downstream, thus I didn't touch them in this
> proposal.
>
> >
> >>>
> >>>> Is there a specific driver that I can look at in order to understand the
> >>>> case where RX is not tied to TX?
> >>>
> >>>  Many...
> >>>  * The remote end owns sensors and can asynchronously send, say
> >>> thermal, notifications to Linux.
> >>>  * On some platforms the RX may be asynchronous, that is, sent later
> >>> with some identifying tag of the past TX.
> >>>  * Just reverse the roles of local and remote - remote sends us a
> >>> request (RX for us) and we are supposed to send a response (TX).
> >>
> >> I was hoping for the name of a driver, but I guess I can find some
> >> eventually.
> >>
> > Do these usecases seem impossible to you? Many users are not upstream
>
> They seem fine, I just wanted to see the implementation and decide
> whether the request approach can be applied to them or not. I think it can.
>
> > that we care about as long as we are not making some corrective
> > change.
> >
> >>>
> >>>> Also, if you think there's a better way to enable controllers to manage
> >>>> their hardware queues, please say.
> >>>>
> >>> Tying RX to TX has nothing to do with hardware queues. There can be a
> >>
> >> Right, I agree.
> >>
> >>> hardware queue and the protocol can still be
> >>> "local-to-remote-broadcast".
> >>>
> >>> While I don't think we need the "Rx is in relation to some past Tx"
> >>> api, I am open to ideas to better utilize h/w queues.
> >>>
> >>> The h/w TX queue/fifo may hold, say, 8 messages while the controller
> >>> transmits them to the remote. Currently it is implemented by
> >>> .last_tx_done() returning true as long as fifo is not full.
> >>> The other option, to have more than one TX in-flight, only complicates
> >>> the mailbox api without bringing any real benefit, because the
> >>> protocol would still need to handle cases like Tx was successful but
> >>> the remote rejected the request or the Rx failed on the h/w fifo
> >>> (there could be an rx-fifo too, right)? That is where I am at right now.
> >>>
> >> No worries, I'm confident we'll reach a conclusion.
> >>
> >> It's true I implemented just my use case, but that doesn't mean that the
> >> "request" approach can't be extended for current users.
> >>
> > Sorry, I am not sure we should make the api dance around your usecase.
>
> No worries, it's fine to disagree.
>
> > If your usecase is not currently handled, please let me know. We can
> > discuss that.
>
> It's not handled. I have a list of requirements I have to fulfill which
> are not covered by the mailbox core:
>
> 1/ allow multiple TX in-flight. We need to let the controller handle
> its hardware queue, otherwise the hardware queue makes no sense at all.
>
As I said, this one is currently handled by assuming TX is done upon
depositing the message in the h/w queue/fifo. You will get the same
perf as with your attempt to have "multiple in-flight", while neither
of these approaches handles in-flight failures. We can discuss this.
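To be concrete, a minimal sketch of the deposit-equals-done scheme
(struct my_mbox and the my_fifo_*() helpers are hypothetical stand-ins,
not a real driver):

#include <linux/mailbox_controller.h>

/* Hypothetical controller state wrapping, say, an 8-deep h/w TX fifo. */
struct my_mbox;
bool my_fifo_full(struct my_mbox *mb);
void my_fifo_put(struct my_mbox *mb, void *data);

static int my_send_data(struct mbox_chan *chan, void *data)
{
	struct my_mbox *mb = chan->con_priv;

	my_fifo_put(mb, data);		/* deposit into the h/w fifo */
	return 0;
}

static bool my_last_tx_done(struct mbox_chan *chan)
{
	struct my_mbox *mb = chan->con_priv;

	/* "done" == room for another message, so the core keeps feeding us */
	return !my_fifo_full(mb);
}

static const struct mbox_chan_ops my_ops = {
	.send_data	= my_send_data,
	.last_tx_done	= my_last_tx_done,
};

Register the controller with txdone_poll = true and the core calls
last_tx_done() to decide when the next message can go out.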

> 2/ allow tying a TX to an RX. I need to know which TX the response
> corresponds to.
> 3/ allow multiple clients on the same channel. ACPM allows this. Support
> would have come as an extra patch.
>
These are nothing new. A few other platforms also have shared channels,
and that is implemented above the mailbox api.
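A minimal, hypothetical shim (every name here is made up) that
serialises several users over one channel and hands each caller its
response:

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/mailbox_client.h>
#include <linux/mutex.h>

/*
 * Hypothetical layer sharing one mailbox channel among several users.
 * Setup (not shown): mutex_init(), init_completion(),
 * cl.rx_callback = shared_rx_callback, then mbox_request_channel().
 */
struct shared_chan {
	struct mbox_client cl;
	struct mbox_chan *chan;
	struct mutex lock;		/* serialises senders */
	struct completion rx_done;
	void *rx_msg;
};

static void shared_rx_callback(struct mbox_client *cl, void *msg)
{
	struct shared_chan *sc = container_of(cl, struct shared_chan, cl);

	sc->rx_msg = msg;
	complete(&sc->rx_done);
}

/* All users funnel through here, one request at a time. */
int shared_send_sync(struct shared_chan *sc, void *msg, void **reply)
{
	int ret;

	mutex_lock(&sc->lock);
	reinit_completion(&sc->rx_done);

	ret = mbox_send_message(sc->chan, msg);
	if (ret < 0)
		goto out;

	ret = 0;
	if (!wait_for_completion_timeout(&sc->rx_done, HZ))
		ret = -ETIMEDOUT;
	else if (reply)
		*reply = sc->rx_msg;
out:
	mutex_unlock(&sc->lock);
	return ret;
}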

> 4/ allow polling and IRQ channels for the same mailbox controller (not
> urgent).
>
It is very easy to populate them as separate controllers.
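For instance, roughly (a sketch only; the channel/ops setup is elided
and my_probe() is hypothetical, but txdone_poll, txpoll_period and
txdone_irq are the real mailbox_controller knobs):

#include <linux/device.h>
#include <linux/mailbox_controller.h>
#include <linux/platform_device.h>

static int my_probe(struct platform_device *pdev)
{
	struct mbox_controller *poll_mbox, *irq_mbox;
	int ret;

	poll_mbox = devm_kzalloc(&pdev->dev, sizeof(*poll_mbox), GFP_KERNEL);
	irq_mbox = devm_kzalloc(&pdev->dev, sizeof(*irq_mbox), GFP_KERNEL);
	if (!poll_mbox || !irq_mbox)
		return -ENOMEM;

	/* ... set dev, ops, chans and num_chans on both controllers ... */

	poll_mbox->txdone_poll = true;		/* core polls last_tx_done() */
	poll_mbox->txpoll_period = 1;		/* ms */

	irq_mbox->txdone_irq = true;		/* ISR calls mbox_chan_txdone() */

	ret = devm_mbox_controller_register(&pdev->dev, poll_mbox);
	if (ret)
		return ret;

	return devm_mbox_controller_register(&pdev->dev, irq_mbox);
}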

> I guess that we agree that in order to allow multiple TX in-flight we
> need a completion struct per message/request and not per channel. But we
> disagree on the ability to tie a TX to an RX.
>
> How can I move forward with this?
>
As I already explained, tying an RX to a TX is very protocol-specific
and should be done in the layer above the mailbox api.
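For instance, the protocol could stamp each request with a tag and
match it in rx_callback. All names below are hypothetical; only
struct mbox_client and its rx_callback are the real mailbox api:

#include <linux/completion.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mailbox_client.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Hypothetical protocol context sitting above the mailbox api. */
struct proto_ctx {
	struct mbox_client cl;
	spinlock_t lock;
	struct list_head pending;	/* in-flight requests */
};

struct proto_req {
	struct list_head node;
	u32 seq;			/* tag echoed back by the remote */
	struct completion done;
	void *resp;
};

u32 proto_msg_seq(void *msg);		/* hypothetical: pull the tag out */

static void proto_rx_callback(struct mbox_client *cl, void *msg)
{
	struct proto_ctx *ctx = container_of(cl, struct proto_ctx, cl);
	u32 seq = proto_msg_seq(msg);
	struct proto_req *req;
	unsigned long flags;

	spin_lock_irqsave(&ctx->lock, flags);
	list_for_each_entry(req, &ctx->pending, node) {
		if (req->seq == seq) {
			list_del(&req->node);
			req->resp = msg;
			complete(&req->done);
			break;
		}
	}
	spin_unlock_irqrestore(&ctx->lock, flags);
}

The sender queues a proto_req on 'pending', sends via
mbox_send_message() and waits on 'done'; the mailbox core neither
knows nor cares about the tag.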

Thanks.




