From: Arnd Bergmann <arnd@xxxxxxxx>

LEON has a very minimalistic cache that has no range operations and
requires being flushed entirely to deal with noncoherent DMA. Most
in-order architectures do their cache management in the
dma_sync_*for_device() operations rather than dma_sync_*for_cpu().
Since the cache is write-through only, both should have the same
effect, so change it for consistency with the other architectures.

Signed-off-by: Arnd Bergmann <arnd@xxxxxxxx>
---
 arch/sparc/Kconfig         | 2 +-
 arch/sparc/kernel/ioport.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 84437a4c6545..637da50e236c 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -51,7 +51,7 @@ config SPARC
 config SPARC32
 	def_bool !64BIT
 	select ARCH_32BIT_OFF_T
-	select ARCH_HAS_SYNC_DMA_FOR_CPU
+	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select CLZ_TAB
 	select DMA_DIRECT_REMAP
 	select GENERIC_ATOMIC64
diff --git a/arch/sparc/kernel/ioport.c b/arch/sparc/kernel/ioport.c
index 4e4f3d3263e4..4f3d26066ec2 100644
--- a/arch/sparc/kernel/ioport.c
+++ b/arch/sparc/kernel/ioport.c
@@ -306,7 +306,7 @@ arch_initcall(sparc_register_ioport);
  * On LEON systems without cache snooping, the entire D-CACHE must be flushed to
  * make DMA to cacheable memory coherent.
  */
-void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
+void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 		enum dma_data_direction dir)
 {
 	if (dir != DMA_TO_DEVICE &&
-- 
2.39.2