On Fri, Nov 10, 2023, Thinh Nguyen wrote:
> Hi,
>
> On Fri, Nov 10, 2023, Michael Grzeschik wrote:
> > One misconception when queueing requests to the usb gadget controller
> > is that the amount of data one usb_request represents is the same as
> > what the hardware is able to transfer in one interval.
> >
> > This exact idea applies to the uvc_video gadget, which assumes that
> > the request needs to have the size the gadget's bandwidth parameters
> > are configured with (maxpacket, multiplier and burst).
> >
> > Although it is actually no problem for the hardware to queue a big
> > request, in the case of isochronous transfers it is impractical to
> > queue a big amount of data per request, especially if the necessary
> > bandwidth was increased on purpose to sustain high amounts of data.
> >
> > E.g. in the dwc3 controller this has the negative impact that the
> > endpoint FIFO will not be filled fast enough by the internal hardware
> > engine. Since each transfer buffer (TRB) represents one big request,
> > the hardware will need a long time to prefill the hardware FIFO. This
> > can be avoided by queueing more, smaller requests, which lead to
> > smaller TRBs that the hardware engine can keep up with.
>
> Just want to clarify here to avoid any confusion: the hardware TX FIFO
> size is relatively small and can often be smaller than the TRB. It
> should be fine whether the TRB is larger or smaller than the FIFO size.
> The hardware does not "prefill" the FIFO. It just fills whichever TRB
> it's currently processing (I think you may be mixing it up with the TRB
> cache).
>
> The performance impact from this change is to reduce the USB bandwidth
> usage. The current calculation in the uvc function can use 48KB/uframe
> for each request in SuperSpeed, the max size for isoc in SuperSpeed. I
> know many hosts can't handle this kind of transfer rate from their
> hardware (e.g. it gets worse when scheduling transfers for multiple
> endpoints and under multiple tiers of hubs).
>
> The bandwidth can be impacted by multiple factors and not just from the
> gadget side, as noted in the discussion before.
>
> >
> > This patch simply drops maxburst as a multiplier for the size
> > calculation and therefore decreases the overall maximum request size.
> >
> > Signed-off-by: Michael Grzeschik <m.grzeschik@xxxxxxxxxxxxxx>
> > ---
> > This patch was created as a result of the discussion in the following
> > thread:
> >
> > https://lore.kernel.org/linux-usb/20231031231841.dbhtrgqounu2rvgn@xxxxxxxxxxxx/
> >
> >  drivers/usb/gadget/function/uvc_queue.c | 1 -
> >  drivers/usb/gadget/function/uvc_video.c | 1 -
> >  2 files changed, 2 deletions(-)
> >
> > diff --git a/drivers/usb/gadget/function/uvc_queue.c b/drivers/usb/gadget/function/uvc_queue.c
> > index 0aa3d7e1f3cc32..1d3c3c09ff97cb 100644
> > --- a/drivers/usb/gadget/function/uvc_queue.c
> > +++ b/drivers/usb/gadget/function/uvc_queue.c
> > @@ -55,7 +55,6 @@ static int uvc_queue_setup(struct vb2_queue *vq,
> >  	sizes[0] = video->imagesize;
> >
> >  	req_size = video->ep->maxpacket
> > -		 * max_t(unsigned int, video->ep->maxburst, 1)
>
> I think you're reducing a bit too much here? Also, take advantage of
> burst. So, perhaps keep request size to max(16K, MPS * maxburst)?

I meant to potentially cap at 16K; it didn't translate correctly to
code... I meant something like this:

	max_size = mult * maxburst * MPS;
	clamp(max_size, mult * MPS, 16K);
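In terms of the current uvc_queue_setup() code, that would look roughly
like below. This is only an untested sketch to illustrate the cap; the
16K value and the local names are placeholders, not a tested change:

	/*
	 * Untested sketch: keep the mult/burst factors from the endpoint,
	 * but clamp the per-request size so a single isoc request can no
	 * longer claim the full 48K/uframe SuperSpeed budget.
	 * clamp_t() comes from linux/minmax.h, SZ_16K from linux/sizes.h.
	 */
	unsigned int mps = video->ep->maxpacket;
	unsigned int mult = max_t(unsigned int, video->ep->mult, 1);
	unsigned int maxburst = max_t(unsigned int, video->ep->maxburst, 1);
	unsigned int max_size;

	max_size = mult * maxburst * mps;
	max_size = clamp_t(unsigned int, max_size, mult * mps, SZ_16K);

	req_size = max_size;

That way burst is still used on hosts that can take it, but a single
request is bounded instead of growing to the full 48K/uframe.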
>
> Can be more or less depending on how much video data is needed to
> transfer over a video frame.

Can the uvc gadget figure out the max video data over a video frame? Or
is it not available at this layer? I figure different setups require
different amounts of video data payload. If we have this info, then you
can properly cap the request size.

Thanks,
Thinh