On Mon, Mar 27, 2023 at 5:14 AM Arnd Bergmann <arnd@xxxxxxxxxx> wrote:
>
> From: Arnd Bergmann <arnd@xxxxxxxx>
>
> xtensa is one of the platforms that has both write-back and write-through
> caches, and needs to account for both in its DMA mapping operations.
> It does this through a set of operations that is different from any
> other architecture. This is not a problem by itself, but it makes it
> rather hard to figure out whether this is correct or not, and to unify
> this implementation with the others.
>
> Change the semantics to the usual ones for non-speculating CPUs:
>
>  - On DMA_TO_DEVICE, call __flush_dcache_range() to perform the
>    writeback even on writethrough caches, where this is a nop.
>
>  - On DMA_FROM_DEVICE, invalidate the mapping before the DMA rather
>    than afterwards.
>
>  - On DMA_BIDIRECTIONAL, combine the pre-writeback with the
>    post-invalidate into a call to __flush_invalidate_dcache_range()
>    that turns into a simple invalidate on writethrough caches.
>
> Signed-off-by: Arnd Bergmann <arnd@xxxxxxxx>
> ---
>  arch/xtensa/Kconfig                  |  1 -
>  arch/xtensa/include/asm/cacheflush.h |  6 +++---
>  arch/xtensa/kernel/pci-dma.c         | 29 +++++------------------------
>  3 files changed, 8 insertions(+), 28 deletions(-)
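For readers following the discussion without the patch body quoted here, the
semantics described above map onto an arch_sync_dma_for_device() roughly like
the sketch below. This is only an illustration of the described behaviour, not
the actual hunk from the patch; do_cache_op() is assumed to be the existing
helper in arch/xtensa/kernel/pci-dma.c that applies a cache routine to a
physical address range (including highmem pages).

#include <linux/dma-map-ops.h>
#include <asm/cacheflush.h>

void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
                              enum dma_data_direction dir)
{
        switch (dir) {
        case DMA_TO_DEVICE:
                /* writeback before the DMA; a nop on writethrough caches */
                do_cache_op(paddr, size, __flush_dcache_range);
                break;
        case DMA_FROM_DEVICE:
                /* invalidate before the DMA rather than afterwards */
                do_cache_op(paddr, size, __invalidate_dcache_range);
                break;
        case DMA_BIDIRECTIONAL:
                /* pre-writeback and post-invalidate combined into one
                 * flush+invalidate before the DMA */
                do_cache_op(paddr, size, __flush_invalidate_dcache_range);
                break;
        default:
                break;
        }
}

With all maintenance moved before the transfer, there is presumably nothing
left for the arch_sync_dma_for_cpu() side to do on these non-speculating
cores, which would account for most of the lines removed from pci-dma.c in
the diffstat.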
Reviewed-by: Max Filippov <jcmvbkbc@xxxxxxxxx>

--
Thanks.
-- Max