Hi,

Michal Nazarewicz <mina86@xxxxxxxxxx> writes:
>>>> Here's how f_fs.c works today:
>>>>
>>>> write(ep2, buf, length);
>>>>         ffs_epfile_write_iter()
>>>>                 ffs_epfile_io()
>>>>                         usb_ep_queue()
>>>>                         wait_for_completion_interruptible()
>>>>
>>>> That wait_for_completion_interruptible() is what's killing
>>>> performance. Each and every read/write waits for the USB side to
>>>> complete. It would've been much better to have something like:
>>>>
>>>> if (!(flags & O_NONBLOCK))
>>>>         wait_for_completion_interruptible()
>
>> Michal Nazarewicz <mina86@xxxxxxxxxx> writes:
>>> We cannot return to user space before the transfer is completed
>>> though.
>
> On Fri, Apr 21 2017, Felipe Balbi wrote:
>> why not? We already copy_from_user() to our own kernel buffer.
>
> Ah, right. This would work for write, yes.
>
> (On entry, write would have to return -EAGAIN if O_NONBLOCK is set
> and there's already an active IN request on the endpoint, but that's
> just an implementation detail.)

Why would it have to do that? We would allocate a new struct
usb_request, allocate a new buffer, copy_from_user(), and
usb_ep_queue(). Why would we return -EAGAIN?

>>> The advantage of async IO is that user space has more control over
>>> what reads and writes happen. f_fs doesn't know the underlying
>>> protocol and I can imagine that in some cases that would matter.
>
>> USB is always first-come-first-served, right? When would this
>> "control over what happens" be useful?
>
> What if user space doesn't want to read? If kernel space keeps an
> active OUT request queued, user space has no control over whether or
> not it responds to an OUT transfer.

I'm not sure this is something we should care about. Note how every
gadget driver, apart from f_fs, pre-queues several IN and OUT requests.

> Maybe this is theoretical, but I wouldn't be surprised if there was a
> USB protocol of some kind where this matters.

I doubt there would be any protocol relying on NAKs.
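
FWIW, here's roughly what I mean for the write path. This is a rough,
completely untested sketch (the ffs_nb_* names are made up for
illustration, this is not actual f_fs code): allocate a request and a
buffer per write(), copy_from_user(), queue it, and clean everything up
from the completion callback instead of sleeping in ffs_epfile_io().

#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/usb/gadget.h>

/* Completion: the UDC is done with the buffer, so clean up here
 * instead of making the writer sleep until this point. */
static void ffs_nb_write_complete(struct usb_ep *ep, struct usb_request *req)
{
        kfree(req->buf);
        usb_ep_free_request(ep, req);
}

static ssize_t ffs_nb_write(struct usb_ep *ep, const char __user *ubuf,
                size_t len)
{
        struct usb_request *req;
        int ret;

        req = usb_ep_alloc_request(ep, GFP_KERNEL);
        if (!req)
                return -ENOMEM;

        req->buf = kmalloc(len, GFP_KERNEL);
        if (!req->buf) {
                ret = -ENOMEM;
                goto free_req;
        }

        if (copy_from_user(req->buf, ubuf, len)) {
                ret = -EFAULT;
                goto free_buf;
        }

        req->length = len;
        req->complete = ffs_nb_write_complete;

        ret = usb_ep_queue(ep, req, GFP_KERNEL);
        if (ret)
                goto free_buf;

        /* Data now lives in a kernel buffer; nothing left to wait for. */
        return len;

free_buf:
        kfree(req->buf);
free_req:
        usb_ep_free_request(ep, req);
        return ret;
}

Once usb_ep_queue() succeeds, the data is owned by the kernel, which is
why returning -EAGAIN just because another IN request is in flight
isn't necessary.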
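
And to make the pre-queueing pattern I'm referring to explicit, another
rough, untested sketch (MY_QUEUE_DEPTH, MY_BUF_LEN, my_consume_data()
and the my_* names are all made up): keep several OUT requests queued
at all times and recycle each one from its completion handler.

#include <linux/slab.h>
#include <linux/usb/gadget.h>

#define MY_QUEUE_DEPTH  4
#define MY_BUF_LEN      512

/* Provided elsewhere by the function driver; made up for this sketch. */
static void my_consume_data(void *buf, unsigned int len);

/* Recycle the request as soon as the host has filled it, so there is
 * always a buffer ready when the next OUT transfer arrives. */
static void my_out_complete(struct usb_ep *ep, struct usb_request *req)
{
        if (!req->status)
                my_consume_data(req->buf, req->actual);

        if (usb_ep_queue(ep, req, GFP_ATOMIC))
                pr_err("my_out_complete: requeue failed\n");
}

static int my_prequeue_out(struct usb_ep *ep)
{
        unsigned int i;

        for (i = 0; i < MY_QUEUE_DEPTH; i++) {
                struct usb_request *req;
                int ret;

                req = usb_ep_alloc_request(ep, GFP_KERNEL);
                if (!req)
                        return -ENOMEM;

                req->buf = kmalloc(MY_BUF_LEN, GFP_KERNEL);
                if (!req->buf) {
                        usb_ep_free_request(ep, req);
                        return -ENOMEM;
                }

                req->length = MY_BUF_LEN;
                req->complete = my_out_complete;

                ret = usb_ep_queue(ep, req, GFP_KERNEL);
                if (ret)
                        return ret;
        }

        return 0;
}

With a queue depth greater than one the UDC can start the next transfer
while the previous completion is still being processed, which is
exactly what the wait_for_completion_interruptible() in ffs_epfile_io()
prevents today.

-- 
balbi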