Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Fri, May 17, 2024 at 01:44:05AM +0000, Thinh Nguyen wrote:
On Mon, May 13, 2024, Michael Grzeschik wrote:
On Wed, Apr 24, 2024 at 02:28:10AM +0000, Thinh Nguyen wrote:
> On Tue, Apr 23, 2024, Avichal Rakesh wrote:
> >
> >
> > On 4/23/24 07:25, Michael Grzeschik wrote:
> > > Ccing:
> > >
> > > Michael Riesch <michael.riesch@xxxxxxxxxxxxxx>
> > > Thinh Nguyen <Thinh.Nguyen@xxxxxxxxxxxx>
> > >
> > > On Mon, Apr 22, 2024 at 05:21:09PM -0700, Avichal Rakesh wrote:
> > >> On 4/21/24 16:25, Michael Grzeschik wrote:
> > >>> On Tue, Apr 09, 2024 at 11:24:56PM +0200, Michael Grzeschik wrote:
> > >>>> This patch series improves the size calculation and allocation
> > >>>> of the uvc requests. Using the currently configured frame duration of
> > >>>> the stream it is possible to calculate the number of requests based on
> > >>>> the interval length.
> > >>>
> > >>> The basic concept here is right. But unfortunately we found out that,
> > >>> together with patch [1] and the current zero-length request pump
> > >>> mechanism [2] and [3], this is not working as expected.
> > >>>
> > >>> The conclusion that we cannot queue more than one frame at once into
> > >>> the hw led to [1]. The current implementation of zero-length requests,
> > >>> which are queued while we are waiting for the frame to finish
> > >>> transferring, will enlarge the frame duration, since every zero-length
> > >>> request still takes up at least one frame interval of 125 us.
> > >>
> > >> I haven't taken a super close look at your patches, so please feel free
> > >> to correct me if I am misunderstanding something.
> > >>
> > >> It looks like the goal of the patches is to determine a better number
> > >> and size of usb_requests from the given framerate such that we send exactly
> > >> nreqs requests per frame where nreqs is determined to be the exact number
> > >> of requests that can be sent in one frame interval?
> > >
> > > It does not need to be the exact time; in fact, it should not be exact.
> > > Scattering the data over all requests would not leave any headroom for
> > > any latencies or overhead.
> >
> > IIUC, patch 3/3 sets the number of requests to frameinterval / 125 us,
> > which gives us the number of requests we can send in exactly one frame interval,
> > and then sets the size of the request as max framesize / nreq, which means the
> > frames will be evenly divided up into all available requests (with a little
> > fuzz factor here and there).
> >
> > This effectively means that (assuming no other delays) one frame will take
> > ~one frameinterval to be transmitted?
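
For a rough sense of the numbers (purely illustrative values, not taken from
the patches): at 30 fps the frame interval is 33333 us, so nreqs = 33333 / 125
gives roughly 266 requests per frame, and with a hypothetical 600 KiB maximum
frame size each request would carry about 600 KiB / 266, i.e. roughly 2.3 KiB
of payload.
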
> >
> > >
> > >> As the logic stands, we need some 0-length requests to be circulating to
> > >> ensure that we don't miss ISOC deadlines. The current logic unconditionally
> > >> sends half of all allocated requests to be circulated.
> > >>
> > >> With those two things in mind, this means that video_pump can encode
> > >> at most half a frame in one go, and then has to wait for complete
> > >> callbacks to come in. In such cases, the theoretical worst case for
> > >> encode time is
> > >> 125us * (number of requests needed per frame / 2) + scheduling delays
> > >> as after the first half of the frame has been encoded, the video_pump
> > >> thread will have to wait 125us for each of the zero length requests to
> > >> be returned.
> > >>
> > >> The underlying assumption behind the "queue 0-length requests" approach
> > >> was that video_pump encodes the frames in as few requests as possible
> > >> and that there are spare requests to maintain a pressure on the
> > >> ISOC queue without hindering the video_pump thread, and unfortunately
> > >> it seems like patch 3/3 is breaking both of them?
> > >
> > > Right.
> > >
> > >> Assuming my understanding of your patches is correct, my question
> > >> is: Why do we want to spread the frame uniformly over the requests
> > >> instead of encoding it in as few requests as possible? Spreading
> > >> the frame over more requests artificially increases the encode time
> > >> required by video_pump, and AFAICT there is no real benefit to it?
> > >
> > > Thinh gave me the advice that it is better to keep the isoc stream
> > > constantly filled, rather than streaming big amounts of data at the
> > > beginning of a frameinterval and then having a lot of spare time
> > > where the bandwidth is completely unused.
> > >
> > > In our real-life scenario, streaming big requests had the effect that
> > > the dwc3 core could not keep up with reading the amount of data
> > > from the memory bus, as the bus is already under heavy load. When the
> > > HW was then not able to transfer the requested and actually available
> > > amount of data in the interval, the hw gave us the usual missed
> > > interrupt answer.
> > >
> > > Using smaller requests solved the problem here, as it really was
> > > unnecessary to stress the memory and USB bus at the beginning while
> > > we had enough headroom in the temporal domain.
> >
> > Ah, I see. This was not a consideration, and it makes sense if the USB
> > bus is under contention from a few different streams. So the solution
> > seems to be to spread the frame over as many requests as we can transmit
> > in one frameinterval?
> >
> > As an experiment, while we wait for others to respond, could you try
> > doubling (or 2.5x'ing to be extra safe) the number of requests allocated
> > by patch 3/3 without changing the request's buffer size?
> >
> > It won't help with the error reporting but should help with ensuring
> > that frames are sent out in one frameinterval with little to no
> > 0-length requests between them.
> >
> > The idea is that video_pump will have enough requests available to fully
> > encode the frame in one burst, and another frame's worth of requests will be
> > re-added to the req_free list for video_pump to fill up by the time the next
> > frame comes in.
> >
> > >
> > > Which then led to the conclusion that the number of needed requests
> > > per image frame interval is calculable, since we know the USB
> > > interval length.
> > >
> > > @Thinh: Correct me if I am saying something wrong here.
>
> Right, if you max out the data rate per uframe, there's less opportunity
> for the host to schedule everything for that interval (e.g. when affected
> by other endpoint/device traffic, link commands, etc.). It also
> increases the latency of DMA. In many cases, other vendor hosts
> can't handle 48KB/uframe for SuperSpeed and 96KB/uframe for SuperSpeed
> Plus. So, you'd need to test your platform to find the optimal request
> size so it can work for most hosts.
>
> > >
> > >>> Therefore, to properly make those patches work, we will have to get rid of
>
> Sorry if I may have missed the explanation, but why do we need to get
> rid of this?


The uvc_video gadget is queueing requests with ep_queue whenever they
are prepared. However, for uvc we may not send EOF to the host until
we know whether the frame was transmitted correctly or not.

To ensure this, the gadget waits for the last request to be completed
by dwc3. Until that request has come back, the current workflow is to
enqueue zero-length requests into the dwc3 hw. With that, the final EOF
request for the frame will only be transmitted after the zero-length
requests have passed through the hw. (They carry no data, but they
still take one frameinterval each.) A frame that is spread out with
zero-length requests in between interferes with the precalculation of
the request payload size, which we derive from the expected frame
duration.
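
To illustrate the mechanism, here is a heavily simplified sketch (not the
actual uvc_video.c code; the struct and its fields are made up purely for
illustration, and all locking is omitted):

#include <linux/usb/gadget.h>
#include <linux/workqueue.h>
#include <linux/list.h>

/* Illustrative state only; the real driver keeps much more. */
struct sketch_video {
        bool payload_ready;             /* an encoded request is waiting */
        struct list_head req_free;      /* requests available to the pump */
        struct workqueue_struct *async_wq;
        struct work_struct pump;
};

static void sketch_video_complete(struct usb_ep *ep, struct usb_request *req)
{
        struct sketch_video *video = req->context;

        if (!video->payload_ready) {
                /*
                 * Nothing encoded yet: re-queue the request with length 0
                 * so the controller does not run dry.  Each zero-length
                 * request still occupies one 125 us service interval.
                 */
                req->length = 0;
                usb_ep_queue(ep, req, GFP_ATOMIC);
                return;
        }

        /* Otherwise give the request back so the pump worker can refill it. */
        list_add_tail(&req->list, &video->req_free);
        queue_work(video->async_wq, &video->pump);
}

As long as requests keep completing this keeps the controller fed, but every
zero-length round trip pushes the real EOF request further out.
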

I know this seems very interlocked. It is very complex indeed. Tell
me if you still have questions and I will come up with some more
details about the current uvc_video driver.

> > >>> the zero-length pump mechanism again and make sure that the whole
> > >>> business logic of what is to be queued, and when, will only be done in
> > >>> the pump worker. It is possible to let the dwc3 udc run dry: as we are
> > >>> actively waiting for the frame to finish, the last request in the
> > >>> prepared and started list will stop the current dwc3 stream, and
> > >>> no underruns will occur with the next ep_queue.
> > >>
> > >> One thing to note here: The reason we moved to queuing 0-length requests
> > >> from complete callback was because even with realtime priority, video_pump
> > >> thread doesn't always meet the ISOC queueing cadence. I think stopping and
> > >> starting the stream was briefly discussed in our initial discussion in
> > >> https://lore.kernel.org/all/20230419001143.pdxflhzyecf4kvee@xxxxxxxxxxxx/
> > >> and Thinh mentioned that dwc3 controller does it if it detects an underrun,
> > >> but I am not sure if starting and stopping an ISOC stream is good practice.
>
> There's a workaround specific to UVC in dwc3 to "guess" when an underrun
> happens. It's not foolproof. dwc3 should not need to do that.
>
> Isoc data is periodic and continuous. We should not expect this
> unconventional re-synchronization.

I think we have to discuss what is meant by resynchronization here. If
the trb ring buffer did run dry and the software is aware of this
(element in the started and prepared list), then the interrupt handler
is already issuing the End Transfer command.

The driver is only aware of this when the controller tells it, which may
already be too late.

In our special case there should not be any "too late" anymore, since we
ensure that all requests for one transfer (which will represent one
frame) are enqueued in time, and we do not depend on the complete
handler for anything other than telling the uvc driver that the last
request came back, or whether there was some error in the currently
active frame.

As already stated, we also have to hold back the next frame and are only
allowed to enqueue one frame into the hardware at a time. Otherwise it
is not possible to tell the host whether the frame was broken or not.

I have the following scheme in mind. It is simplified to frames
consisting of only 4 requests. (>80 chars warning)


frameinterval:                |       125 us       |       125 us       |       125 us       |       125 us       |       125 us       |       125 us       |       125 us       |
                              |                    |                    |                    |                    |                    |                    |                    |
pump thread:   queue          |rqA1 rqA2 rqA3 rqA4'|                    |                    |                    |                    |rqB0 rqB1 rqB2 rqB3 |rqB4'               |
irq  thread:   complete       |                    |rqA1                |rqA2                |rqA3                |rqA4'               |                    |rqB0                | rqB1
qbuf thread:   encode         |rqB1 rqB2 rqB3 rqB4'|                    |                    |                    |                    |rqA1 rqA2 rqA3 rqA4'|                    |

dwc3 thread:   Hardware                            < Start Transfer                                                       End Transfer >                    < Start Transfer      ....

legend:
- rq'  : last request of a frame
- rqB0 : first request of the next transfer, with no payload but only the
         header, telling the host whether the last frame was ok or broken

assumption:

- pump thread is never interrupted by a kernel thread but only by some short running irq
- if one request comes back with -EXDEV the rest of the enqueued requests should be flushed

In the no_interrupt case we would also only generate the interrupt for
the last request and give back all four requests in the last interval.
This should still work fine.
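
For reference, marking all but the last request of a frame with no_interrupt
could look roughly like this (a sketch under the assumption that the requests
of one frame are queued together; the function name is made up):

/*
 * Sketch: queue the nreqs requests of one frame and only let the last
 * one raise a completion interrupt; the others are flagged no_interrupt
 * and are given back together with the last one.
 */
static int sketch_queue_frame(struct usb_ep *ep, struct usb_request **reqs,
                              unsigned int nreqs)
{
        unsigned int i;
        int ret;

        for (i = 0; i < nreqs; i++) {
                reqs[i]->no_interrupt = (i != nreqs - 1);

                ret = usb_ep_queue(ep, reqs[i], GFP_ATOMIC);
                if (ret)
                        return ret;
        }

        return 0;
}
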

We also only start streaming when one frame is fully available to be
enqueued in one run. So in case the frames with rqA and rqB both came
back with errors, the next transfer will only begin after the following
frame has been completely encoded.

When the stream is stopped, what implications does this have on the bus?

When the Endpoint is enabled, will the hardware then send zero-length
requests on its own?

For isoc endpoint IN, yes. If the host requests isoc IN data while
no TRB is prepared, then the controller will automatically respond with
a 0-length packet.

Perfect! This will help a lot and will make actively queueing our own
zero-length requests unnecessary.

With the next ep_queue we start another stream, and as long as we keep
up with this stream there are no underruns, right?

I picture this scenario in my mind:

thread 1: uvc->queue_buf is called:
  - we encode the frame buffer data into all available requests
    and put them into the per-uvc_buffer prepared list
    (as we precalculated the number of requests properly from the
     expected frame duration and buffer size, there will be enough
     requests available)
  - wake up the pump thread

thread 2: pump_worker is triggered (a rough code sketch follows below)
  - take all requests from the prepared buffer and enqueue them
    into the hardware
    (the pump worker runs in a while(1) loop as long as it finds requests
     in the per-buffer prepared list) and therefore has a high chance
     of finishing the pumping for one complete frame
  - check for any errors reported from the complete handlers
    - on error
      - stop enqueueing new requests from the current frame
      - wait until the last request of the erroneous frame has returned
  - only start pumping new requests from the next buffer when the last
    request of the active frame has finished
  - at the beginning of the next frame, send one extra request with an
    EOF/ERR tag so the host knows whether the last one was ok or not

thread 3: complete handler (interrupt)
  - give back the requests into the empty_list
  - report EXDEV and errors
  - wake up the pump thread
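
Here is the rough sketch of the pump worker mentioned in thread 2 above. It
reuses the illustrative sketch_video struct from before, extended with a few
more made-up fields (ep, buf_prepared, frame_in_flight, frame_broken); no
locking, and error paths are trimmed:

static void sketch_video_pump(struct work_struct *work)
{
        struct sketch_video *video = container_of(work, struct sketch_video,
                                                   pump);
        struct usb_request *req, *tmp;

        /* Only one frame may be in flight at a time. */
        if (video->frame_in_flight)
                return;

        /* Enqueue the whole prepared frame in one go. */
        list_for_each_entry_safe(req, tmp, &video->buf_prepared, list) {
                list_del(&req->list);

                if (usb_ep_queue(video->ep, req, GFP_KERNEL)) {
                        /* On error, stop enqueueing the rest of this frame. */
                        video->frame_broken = true;
                        break;
                }
        }

        /*
         * The first request of the *next* transfer carries only the header,
         * telling the host whether this frame was ok or broken.
         */
        video->frame_in_flight = true;
}

The complete handler for the last request would then clear frame_in_flight
and wake the worker again for the next buffer.
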

With this method we will continuously drain the hw trb stream of the dwc3
controller per frame and therefore will not run into a window where
the current stream could be missed. With the data spread over the
many requests we also avoid missed requests when the DMA is too
slow.


This sounds good.

As long as we can keep more than X requests enqueued to
accommodate the worst latency, we can avoid an underrun. The driver
should monitor how many requests are enqueued and hopefully can keep
the queue topped up with zero-length requests.

Unless I totally misunderstood something or oversimplified too much,
the above explanation should make this unnecessary.

We are, however, able to track the number of enqueued requests in the udc
driver and compare that with the number of completed requests.

Also, we have the usb_gadget_frame_number callback to the udc controller
to ask which (micro)frame it is operating in at the moment. This way we
would be able to avoid enqueueing requests into a transfer that has not
yet come back completely and would otherwise run into missed transfers.
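
A small sketch of that idea; usb_gadget_frame_number() is the existing gadget
helper, while the expected-frame bookkeeping is something we would have to add
ourselves:

/*
 * Sketch: compare the controller's current (micro)frame number with the
 * one the running transfer is expected to end in, and hold back new
 * requests until then.  Frame numbers wrap, which a real implementation
 * would have to handle; ignored here for brevity.
 */
static bool sketch_transfer_still_running(struct usb_gadget *gadget,
                                          int expected_last_frame)
{
        int now = usb_gadget_frame_number(gadget);

        return now < expected_last_frame;
}
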

Michael

--
Pengutronix e.K.                           |                             |
Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |
