On 09/28/2011 04:02 PM, Benjamin Herrenschmidt wrote:
> On Wed, 2011-09-28 at 12:27 -0500, Scott Wood wrote:
>
>> Why would it need to be synchronous?  Even if it's asynchronous emulated
>> DMA, we don't want it sitting around only in a data cache that
>> instruction fetches won't snoop.
>
> Except that this is exactly what happens on real HW :-)

DMA does not normally go straight to the data cache, at least not on hardware I'm familiar with.

> The guest will do the necessary invalidations. DMA doesn't keep the
> icache coherent on HW, why should it on kvm/qemu ?

Sure, if there might be stale stuff in the icache, the guest will need to invalidate it.  But when running on real hardware, an OS does not need to flush data written by DMA out of the data cache after the transaction[1].  So technically we want just a flush_dcache_range() for DMA.  It's moot unless we can distinguish DMA writes from breakpoint writes, though.

-Scott

[1] Most OSes may do this anyway, to avoid needing to special-case when the
dirtying is done entirely by DMA (or to avoid making assumptions that could
be broken by weird hardware), but that doesn't mean QEMU/KVM should assume
they do -- maybe unless there's enough performance to be gained by looking
like the aforementioned "weird hardware" in certain configurations.

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
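To illustrate the distinction being argued over, here is a minimal sketch of a host-side guest-memory write helper. The function name and `may_be_code` flag are invented for illustration and are not QEMU's actual API; the point is only that a write emulating DMA could skip instruction-cache maintenance (the guest invalidates its own icache, as it would on real hardware), while a write that patches code, such as inserting a breakpoint, must make the range coherent for instruction fetch itself.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch (not QEMU's real interface): write into guest
 * memory, doing icache maintenance only when the bytes may later be
 * fetched as instructions without the guest knowing to invalidate. */
static void guest_memory_write(void *dest, const void *src, size_t len,
                               int may_be_code)
{
    memcpy(dest, src, len);

    if (may_be_code) {
        /* Breakpoint-style write: the guest has no idea this happened,
         * so the host must sync caches.  __builtin___clear_cache is a
         * GCC/Clang builtin that makes the range coherent for
         * instruction fetch (a no-op on x86, real cache ops on
         * PPC/ARM). */
        __builtin___clear_cache((char *)dest, (char *)dest + len);
    }
    /* Emulated-DMA write: at most a data-cache flush would be needed
     * (flush_dcache_range() on the host side); icache invalidation is
     * the guest's job, just as with real DMA hardware. */
}
```

As the thread notes, this split only helps if the emulator can actually tell the two kinds of write apart at the call site.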