From: Alan Stern
> On Fri, 22 Aug 2014, Dan Williams wrote:
>
> > v1.0 hosts require that TD-fragments (portions of a TD that do not
> > end on an MBP (Max Burst Payload) boundary) not cross a TRB segment
> > boundary.  This constraint is in addition to the constraint that a
> > TRB may not specify a transfer that crosses a 64K boundary.  This
> > change permits the driver to accept scatterlists of nearly any
> > geometry.  "Nearly" because there is one unlikely remaining
> > degenerate case of a driver submitting a transfer that consumes all
> > the TRBs in a segment before hitting an MBP boundary.  That case is
> > trapped and the transfer is rejected.
>
> That last part sounds problematic.  The issue will arise at
> unpredictable times, depending on the lengths of the scatterlists
> that have been submitted in the past, so it's not easily
> reproducible.  What is the caller supposed to do when this happens?
>
> As for the likelihood of this occurring...  A ring segment is one
> page, so 4096 bytes.  Each TRB is 16 bytes, so a segment can hold
> 256 TRBs.  With a bulk maxpacket size of 1024 and a (typical?)
> maxburst size of 8, MBP boundaries occur every 8192 bytes.
> Therefore a scatterlist containing more than 256 entries with an
> average length below 32 bytes is likely to trigger this condition
> (depending on the exact alignment with respect to the MBP boundary).
> I don't know exactly how unlikely such a situation is, but it's not
> hard to imagine a network packet composed of lots of little pieces.

usbnet won't generate anything that horrid.
An skb has a maximum of (about) 17 fragments, and skbs won't be
chained to make a single packet.

> Instead of failing the submission, we ought to set up some sort of
> bounce buffers.  I have no good suggestions on how to implement
> that, however.

I'd treat the rx and tx cases separately (rx is much less likely).
For tx you could use the back end of the ring page as a bounce buffer.

	David
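[Editor's note: to make the two per-TRB constraints Dan describes
concrete, here is a minimal sketch.  It is not the xhci driver's
actual queueing code; the helper trb_buff_len() is hypothetical,
though the TRB_MAX_BUFF_SIZE definition mirrors the driver's macro of
the same name.]

#include <stdint.h>

/* A TRB may not describe a transfer that crosses a 64K boundary, so
 * the length programmed into one TRB is at most the distance from
 * its buffer address to the next 64K boundary. */
#define TRB_MAX_BUFF_SIZE	(1u << 16)

static unsigned int trb_buff_len(uint64_t addr, unsigned int remaining)
{
	unsigned int to_boundary = TRB_MAX_BUFF_SIZE -
		(unsigned int)(addr & (TRB_MAX_BUFF_SIZE - 1));

	return remaining < to_boundary ? remaining : to_boundary;
}

A scatterlist of many small entries defeats this clamping: each tiny
entry still consumes a whole TRB, which is what lets a TD fragment eat
the entire segment before an MBP boundary arrives.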
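[Editor's note: Alan's arithmetic works out as below.  The maxburst
value of 8 is his assumed "typical" value, not a spec constant.]

#include <stdio.h>

int main(void)
{
	unsigned seg_bytes = 4096;	/* one page per ring segment */
	unsigned trb_bytes = 16;	/* size of one TRB */
	unsigned maxpacket = 1024;	/* SuperSpeed bulk maxpacket */
	unsigned maxburst  = 8;		/* assumed typical burst */

	unsigned trbs_per_seg = seg_bytes / trb_bytes;	/* 256 */
	unsigned mbp = maxpacket * maxburst;		/* 8192 */

	/* Below this average sg-entry length, 256 entries span less
	 * than one MBP, so a segment can fill up before an MBP
	 * boundary is ever reached. */
	unsigned avg_threshold = mbp / trbs_per_seg;	/* 32 */

	printf("TRBs per segment:          %u\n", trbs_per_seg);
	printf("MBP spacing:               %u bytes\n", mbp);
	printf("average-length threshold:  %u bytes\n", avg_threshold);
	return 0;
}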
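[Editor's note: David's tx suggestion could take roughly the shape
below: copy the scatterlist tail into one contiguous bounce area
(e.g. carved from the unused end of the ring page) so the remaining
TD fragment completes in a single TRB.  Everything here -- the helper
name and the buffer management -- is hypothetical, sketched only to
illustrate the idea.]

#include <linux/errno.h>
#include <linux/scatterlist.h>
#include <linux/string.h>

/* Hypothetical: coalesce the remaining sg entries of a TD fragment
 * into one contiguous bounce buffer.  Assumes the entries have
 * kernel virtual mappings (sg_virt() is only valid then). */
static int coalesce_td_tail(struct scatterlist *sgl, int nents,
			    void *bounce, unsigned int bounce_len)
{
	struct scatterlist *sg;
	unsigned int copied = 0;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		if (copied + sg->length > bounce_len)
			return -ENOMEM;	/* doesn't fit; reject as today */
		memcpy(bounce + copied, sg_virt(sg), sg->length);
		copied += sg->length;
	}
	return copied;	/* bytes now contiguous at 'bounce' */
}

For rx the copy would have to run the other way after completion,
which is presumably part of why David calls that case out separately.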