Hi,

I'm working on a project that talks to an STM32 MCU over a Full-Speed USB CDC ACM link, which binds to the cdc-acm driver. I initially used dd to measure throughput and all was well. Then I switched to libusb and found that throughput dropped by 50%. I traced the root cause to something the cdc-acm driver does that my libusb code didn't, but I don't understand why it makes a difference and was hoping someone on the list would be kind enough to educate me.

In my test the traffic was one-way, Host->Device, so I did not bother to initiate any IN transfers. It turns out that even with zero IN traffic, the absence of any outstanding IN transfers halves the OUT throughput.

Wireshark showed me that sending data to the device with dd does result in some IN transfers at the very start of the trace, though they never complete. I think (could be wrong) I traced these IN transfers to their origin in cdc-acm.c, where acm_port_activate() calls acm_submit_read_urbs():

"""
retval = acm_submit_read_urbs(acm, GFP_KERNEL);
if (retval)
        goto error_submit_read_urbs;

usb_autopm_put_interface(acm->control);
"""

which I read as initiating some read URBs as soon as comms start -- exactly what I saw. git blame traces this call back to 2011, when it appeared after an unrelated refactor in 088c64f81284; before that, the call to acm_submit_read_urbs() lived in acm_tty_open().

I've looked in Jan Axelson's "USB Complete", on the libusb mailing list and on Google, but could not find an explanation of why this behaviour is needed: why do pending IN transfers have such an effect on OUT throughput when there is zero IN traffic from the device?

Your wisdom much appreciated,
Jerri

CC @jhovold (author of 088c64f81284 -- apologies if that's not ok)