On Sat, Sep 28, 2013 at 4:39 AM, Ming Lei <tom.leiming@xxxxxxxxx> wrote:
> On Sat, Sep 28, 2013 at 10:29 AM, Alan Stern <stern@xxxxxxxxxxxxxxxxxxx> wrote:
>> On Sat, 28 Sep 2013, Ming Lei wrote:
>>
>>> On Wed, Sep 25, 2013 at 3:12 AM, Markus Rechberger
>>> <mrechberger@xxxxxxxxx> wrote:
>>> > This patch adds memory-mapping support to USBFS for isochronous and bulk
>>> > data transfers; it allows USB transfer buffers to be pre-allocated.
>>> >
>>> > CPU usage decreases by 1-2% on my 1.3 GHz U7300 notebook when
>>> > transferring 20 MB/s; the numbers should be more interesting on
>>> > embedded systems, where copying data is more expensive.
>>>
>>> Given that USB 3 is becoming popular and throughput has increased a lot,
>>> zero copy should be appealing.
>>>
>>> Another approach is direct I/O (SG DMA to pages allocated in user space
>>> directly), which should be more flexible; users wouldn't need mmap/munmap,
>>> so it should be easier to use.
>>>
>>> At least in USB mass-storage tests, both CPU utilization and throughput
>>> can be improved with direct I/O.
>>
>> For zero-copy to work, on many systems the pages have to be allocated
>> in the first 4 GB of physical memory. How can the userspace program
>
> That depends on whether the device can DMA to/from physical memory above 4 GB.
>
>> make sure this will happen?
>
> It can't be guaranteed, but we can handle it with bounce pages, just as the
> block layer does.
>
> Actually, I observed that both throughput and CPU utilization can be improved
> under the 4 GB DMA limit on either a 32-bit or a 64-bit arch, with direct I/O
> over a USB mass-storage block device.
>

I haven't looked into the sg API in detail yet, but isn't it still doing a copy_to_user?

The current patch only takes care of the transfer_buffer; another one would focus on
pre-allocating and re-using the URBs. This is similar to how it works on Mac OS X.

Markus

>
> Thanks,
> --
> Ming Lei
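
For illustration, here is a rough userspace sketch of how such an mmap-based usbfs
transfer might look. It assumes (this is not confirmed by the patch text quoted above)
that mmap() on the usbfs file descriptor returns a kernel pre-allocated, DMA-capable
buffer, and that the existing USBDEVFS_SUBMITURB/USBDEVFS_REAPURB ioctls accept a
buffer pointer inside that mapping. The device node path, endpoint number, and buffer
size are placeholders.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <linux/usbdevice_fs.h>

#define BUF_LEN  (64 * 1024)   /* size of the pre-allocated transfer buffer */
#define BULK_EP  0x81          /* example bulk IN endpoint, device-specific */

int main(int argc, char **argv)
{
	/* placeholder device node; pass the real one on the command line */
	const char *node = argc > 1 ? argv[1] : "/dev/bus/usb/001/002";
	int fd = open(node, O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/*
	 * Assumed behaviour of the patch: mmap() on the usbfs fd hands back a
	 * kernel-allocated, DMA-capable buffer.  With zero copy the same pages
	 * go to the host-controller driver, so no per-URB copy_to_user() is
	 * needed on completion.
	 */
	void *buf = mmap(NULL, BUF_LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	struct usbdevfs_urb urb;
	memset(&urb, 0, sizeof(urb));
	urb.type          = USBDEVFS_URB_TYPE_BULK;
	urb.endpoint      = BULK_EP;
	urb.buffer        = buf;      /* points into the mmap'ed region */
	urb.buffer_length = BUF_LEN;

	if (ioctl(fd, USBDEVFS_SUBMITURB, &urb) < 0) {
		perror("USBDEVFS_SUBMITURB");
	} else {
		/* REAPURB blocks until a submitted URB completes */
		struct usbdevfs_urb *done = NULL;
		if (ioctl(fd, USBDEVFS_REAPURB, &done) == 0 && done)
			printf("reaped %d bytes, status %d\n",
			       done->actual_length, done->status);
	}

	munmap(buf, BUF_LEN);
	close(fd);
	return 0;
}

In a real client the buffer would be mapped once and re-used across many URBs, which
is the point of pre-allocation; the follow-up patch Markus mentions would extend the
same idea to the URB structures themselves.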