On Wed, 2009-12-23 at 15:22 -0600, James Bottomley wrote:
> #define flush_kernel_dcache_range(start,size) \
> 	flush_kernel_dcache_range_asm((start), (start)+(size));
> +/* vmap range flushes and invalidates.  Architecturally, we don't need
> + * the invalidate, because the CPU should refuse to speculate once an
> + * area has been flushed, so invalidate is left empty */
> +static inline void flush_kernel_vmap_range(void *vaddr, int size)
> +{
> +	unsigned long start = (unsigned long)vaddr;
> +
> +	flush_kernel_dcache_range_asm(start, start + size);
> +}
> +static inline void invalidate_kernel_vmap_range(void *vaddr, int size)
> +{
> +}

Do I understand correctly that for an inbound DMA you will first call
flush before starting the DMA, then invalidate at the end of the
transfer?

See my other message on that subject, but I believe this is a sub-optimal
semantic. I'd rather expose dma_vmap_sync_outbound,
dma_vmap_sync_inbound_before and dma_vmap_sync_inbound_after separately.

On quite a few archs, an invalidate is a lot faster than a flush (since it
doesn't require a writeback of potentially useless crap to memory), and for
an inbound transfer that doesn't cross cache line boundaries, invalidate is
all that's needed both before and after. On 44x I additionally don't need
the "after" step, since the core is too dumb to prefetch (or rather,
prefetching is disabled due to errata).

Cheers,
Ben.

--
To unsubscribe from this list: send the line "unsubscribe linux-parisc" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html