On Mon, Nov 02, 2015 at 11:48:36AM -0500, Alan Stern wrote:
> Order 7? Maybe you're trying to put too much data into a single
> transfer and encountering problems with memory fragmentation. Try
> using more frequent, smaller transfers.

Yes, my transfers are rather big; 512 kB or so. (They used to be 2 MB.)
Somehow, when you want to stream a gigabit or so of data, 16 kB transfers
would seem inefficient :-) I've tried in the past to make them smaller and
just have more of them in transit (rough sketch at the end of this mail),
but there seems to be a limit to how many you can have in flight. Note
that this is receive, not send.

> Once the data is in the kernel, the rest of the procedure is basically
> zero-copy. The problem is getting it there from within your program.
> We currently don't have any support for zero-copy data submissions,
> although it has been proposed a few times in the past.

Do you know if there are any plans to revive these proposals? I've seen
them in the archives, too, but they all seem to have died down. It seems
each stream costs me ~15% or so of one core (mostly in
copy_user_enhanced_fast_string and memset_erms, which I suppose are about
copying to user space), and I'm a bit strapped for CPU already.

/* Steinar */
--
Homepage: http://www.sesse.net/
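
For reference, the sketch below is roughly what I mean by "more of them
in transit". It is untested and not my actual code; the libusb-1.0 async
API, the 0x81 bulk IN endpoint, and the 32 x 16 kB numbers are just
placeholders for my real setup.

/* Sketch: keep many small bulk transfers queued instead of one big one,
 * so the kernel only needs 16 kB of contiguous memory per URB instead
 * of 512 kB.  Assumes libusb-1.0 and an already-opened device handle. */
#include <libusb.h>   /* libusb-1.0 */
#include <stdlib.h>

#define NUM_TRANSFERS 32
#define TRANSFER_SIZE (16 * 1024)
#define BULK_IN_EP    0x81   /* placeholder endpoint address */

static void LIBUSB_CALL xfer_cb(struct libusb_transfer *xfer)
{
    if (xfer->status == LIBUSB_TRANSFER_COMPLETED) {
        /* hand xfer->buffer[0 .. xfer->actual_length) to the consumer */
    }
    libusb_submit_transfer(xfer);  /* resubmit to keep the pipeline full */
}

int start_streaming(libusb_device_handle *devh)
{
    for (int i = 0; i < NUM_TRANSFERS; ++i) {
        struct libusb_transfer *xfer = libusb_alloc_transfer(0);
        unsigned char *buf = malloc(TRANSFER_SIZE);
        if (!xfer || !buf)
            return -1;
        libusb_fill_bulk_transfer(xfer, devh, BULK_IN_EP, buf,
                                  TRANSFER_SIZE, xfer_cb, NULL, 0);
        if (libusb_submit_transfer(xfer) < 0)
            return -1;
    }
    return 0;  /* the main loop then runs libusb_handle_events() */
}

With 32 x 16 kB in flight this is the same 512 kB of buffering, just
split across URBs; the cost is a completion every 16 kB instead of every
512 kB, which is where my efficiency worry comes from.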