From: Jérôme Carretero
> Hi Sarah,
>
> I was happily using big (10MB) buffers before, and with recent
> kernels, when using USB3, I had to reduce the size of my buffers a
> lot.
> By the way, I couldn't find any information on a maximum size for the
> bulk transfers using libusb, maybe you know about that also?
>
> So, using v3.13, this is what I get from the kernel when doing a bulk
> read of 4 MiB:
>
> [ 506.856282] xhci_hcd 0000:00:14.0: Too many fragments 256, max 63
> [ 506.856288] usb 4-5: usbfs: usb_submit_urb returned -12
...
> I saw your 3.12-td-fragment-failure branch and tried it; there,
> sometimes the transfers don't work, with:
>
> xhci_hcd 0000:00:14.0: WARN Event TRB for slot 10 ep 4 with no TDs queued?

That shouldn't happen; OTOH the xhci code is complicated enough that it
doesn't actually surprise me. Without some specific traces in the
normal paths it is probably impossible to work out what went wrong.

> python2: page allocation failure: order:10, mode:0x1040d0

That is just a failure to allocate a 4MB block of kernel memory, and
trying to allocate a contiguous block that large is rather doomed.

It looks like the code is allocating a contiguous buffer (virtual and
physical) for the request, and then somewhere it is being split into
separate address:length pairs for each 4k physical page. Given the
names of the fields of 'struct scatterlist' I suspect the original use
required a separate entry for each page. I've not looked at the dma
mapping code (a no-op on x86) to see whether it actually maps multiple
pages into a single sg entry.

In any case the xhci driver doesn't need separate fragments for each
page, so it could usefully detect adjacent fragments and use a single
TD for them. (Which wouldn't help if the ioctl code allocated
fragmented buffers.)

I might try to cook up a patch that will help aligned transfers.

	David
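
To illustrate what I mean by detecting adjacent fragments: something
along the lines of the function below, which just walks a mapped
scatterlist and counts how many DMA fragments are really needed once
physically contiguous entries are merged. It is only a sketch, not
taken from (or tested against) the actual xhci code.

#include <linux/scatterlist.h>

/*
 * Count the DMA fragments needed for a mapped scatterlist if
 * physically adjacent entries are merged into one fragment.
 * Illustrative only - not the xhci driver's code.
 */
static int count_coalesced_frags(struct scatterlist *sgl, int nents)
{
	struct scatterlist *sg;
	dma_addr_t next = 0;
	int i, frags = 0;

	for_each_sg(sgl, sg, nents, i) {
		/* start a new fragment unless this entry follows
		 * straight on from the previous one */
		if (!frags || sg_dma_address(sg) != next)
			frags++;
		next = sg_dma_address(sg) + sg_dma_len(sg);
	}
	return frags;
}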
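
As for the question about a maximum bulk transfer size with libusb:
until the kernel side changes, the practical workaround is to split
large reads into smaller libusb calls. If the 4 MiB request really was
split into 256 fragments, the 63-fragment limit works out to a bit
under 1 MiB per URB, so chunks of 128k (used below) stay well clear of
it. This is only a sketch, assuming libusb-1.0 and a bulk IN endpoint;
the chunk size, header path, endpoint and timeout are just examples.

#include <libusb-1.0/libusb.h>

#define CHUNK_SIZE (128 * 1024)	/* example size, well under the limit */

/*
 * Read up to 'len' bytes from bulk IN endpoint 'ep' in CHUNK_SIZE
 * pieces.  Returns the number of bytes read, or a negative libusb
 * error code if nothing was read.
 */
static int bulk_read_chunked(libusb_device_handle *h, unsigned char ep,
			     unsigned char *buf, int len,
			     unsigned int timeout_ms)
{
	int done = 0;

	while (done < len) {
		int want = len - done;
		int got = 0;
		int rc;

		if (want > CHUNK_SIZE)
			want = CHUNK_SIZE;
		rc = libusb_bulk_transfer(h, ep, buf + done, want,
					  &got, timeout_ms);
		done += got;
		if (rc < 0)
			return done ? done : rc;
		if (got < want)		/* short read - device had less */
			break;
	}
	return done;
}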