On Fri, 2010-01-29 at 16:41 +0000, Oliver Neukum wrote:
> On Friday, 29 January 2010 17:34:03, Catalin Marinas wrote:
> > I was thinking about checking dev->bus->controller->dma_mask; the
> > code (though not the storage one) seems to imply that if the
> > dma_mask is 0, the HCD driver is only capable of PIO.
>
> That a HCD is capable of DMA need not imply that DMA is used for
> every transfer.

Actually, the DMA drivers are safe in this respect only if the transfer
happens directly into the page cache page that may be (later) mapped
into user space; an intermediate CPU copy would dirty the D-cache
again. I'm not familiar enough with the USB drivers to fully understand
the data flow, so any help would be appreciated.

> > That would be a more general solution, rather than going through
> > each HCD driver, since my understanding is that flush_dcache_page()
> > is only needed together with the mass storage support.
>
> What about ub, nfs or nbd over a USB<->ethernet converter?
> This, I am afraid, is best solved at the HCD or glue layer.

NFS handles the cache flushing itself, so in this case there is no need
to duplicate it at the HCD level. AFAICT, the HCD driver may be used in
several scenarios and it is only the storage case (via either ub, mass
storage etc.) that requires cache flushing. Is there a way to
differentiate between these at the HCD driver level?

Regarding nbd, is there any copying happening between the HCD driver
receiving the network packet from the USB-ethernet converter and the
nbd bio_vec buffers (most likely during the TCP/IP stack processing)?
If so, it would be up to the nbd driver to flush the D-cache (it
doesn't seem to do this now), and flushing at the HCD level would not
be necessary, since the HCD would not be writing directly into the page
cache page.

The ub case is similar to the USB mass storage one, so they could both
benefit from flushing at the HCD driver level (a rough sketch of what I
have in mind is at the end of this mail). But is this possible without
duplicating the flushing in the nfs case?

Regards,

--
Catalin
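
P.S. To make the dma_mask check above more concrete, here is a rough,
untested sketch of a helper that a HCD (or the glue layer) could call
on completion of an IN transfer. This is only an illustration, not
existing mainline code; the name hcd_flush_dcache_urb is made up, and
the sketch assumes that a PIO-only HCD leaves controller->dma_mask
unset (NULL):

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/highmem.h>	/* flush_dcache_page() */
#include <linux/usb.h>

/*
 * Hypothetical helper, not mainline code: flush the D-cache for each
 * page touched by a completed IN transfer, but only when the HCD did
 * the transfer by PIO. dev->dma_mask is a pointer; PIO-only HCDs leave
 * it NULL, which is the check discussed above.
 */
static void hcd_flush_dcache_urb(struct urb *urb)
{
	struct device *controller = urb->dev->bus->controller;
	void *buf = urb->transfer_buffer;
	size_t len = urb->actual_length;

	/* only device-to-host data can end up in a page cache page */
	if (!usb_pipein(urb->pipe))
		return;

	/* a DMA transfer does not dirty the D-cache with CPU writes */
	if (controller->dma_mask)
		return;

	while (len) {
		size_t chunk = min_t(size_t, len,
				     PAGE_SIZE - offset_in_page(buf));

		/*
		 * Lowmem kernel buffers only; a vmalloc'ed buffer
		 * would need vmalloc_to_page() instead of
		 * virt_to_page().
		 */
		if (!is_vmalloc_addr(buf))
			flush_dcache_page(virt_to_page(buf));

		buf += chunk;
		len -= chunk;
	}
}

Note that this still flushes unconditionally for any PIO IN transfer,
so it does not answer the question above of how to avoid duplicating
the flush for NFS, which does its own cache handling.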