Recent CPUs from Imagination Technologies such as the I6400 or P6600 are
able to speculatively fetch data from memory into caches. This means that
if used in a system with non-coherent DMA they require that caches be
invalidated after a device performs DMA, and before the CPU reads the
DMA'd data, in order to ensure that stale values weren't speculatively
prefetched.

Such CPUs also introduced Memory Accessibility Attribute Registers
(MAARs) in order to control the regions in which they are allowed to
speculate. Thus we can use the presence of MAARs as a good indication
that the CPU requires the above cache maintenance. Use the presence of
MAARs to determine the result of cpu_needs_post_dma_flush() in the
default case, in order to handle these recent CPUs correctly.

Signed-off-by: Paul Burton <paul.burton@xxxxxxxxxx>
Cc: Ralf Baechle <ralf@xxxxxxxxxxxxxx>
Cc: linux-mips@xxxxxxxxxxxxxx
---
 arch/mips/mm/dma-default.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/mips/mm/dma-default.c b/arch/mips/mm/dma-default.c
index fe8df14b6169..48f0354b1b09 100644
--- a/arch/mips/mm/dma-default.c
+++ b/arch/mips/mm/dma-default.c
@@ -70,10 +70,18 @@ static inline struct page *dma_addr_to_page(struct device *dev,
  */
 static inline int cpu_needs_post_dma_flush(struct device *dev)
 {
-	return !plat_device_is_coherent(dev) &&
-	       (boot_cpu_type() == CPU_R10000 ||
-		boot_cpu_type() == CPU_R12000 ||
-		boot_cpu_type() == CPU_BMIPS5000);
+	if (plat_device_is_coherent(dev))
+		return 0;
+
+	switch (boot_cpu_type()) {
+	case CPU_R10000:
+	case CPU_R12000:
+	case CPU_BMIPS5000:
+		return 1;
+
+	default:
+		return cpu_has_maar;
+	}
 }
 
 static gfp_t massage_gfp_flags(const struct device *dev, gfp_t gfp)
-- 
2.13.1
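
[Editorial context, not part of the patch: a minimal sketch of where the
helper's result takes effect, i.e. the sync-for-cpu path in dma-default.c
that only invalidates caches when cpu_needs_post_dma_flush() returns
nonzero. dma_addr_to_page() is visible in the hunk context above, but
__dma_sync() and plat_post_dma_flush() are reproduced from memory of the
surrounding file and should be treated as assumptions.]

/*
 * Illustrative sketch only: how a sync_single_for_cpu implementation
 * would consume cpu_needs_post_dma_flush(). Exact signatures of
 * __dma_sync() and plat_post_dma_flush() are assumed, not taken from
 * this diff.
 */
static void mips_dma_sync_single_for_cpu(struct device *dev,
	dma_addr_t dma_handle, size_t size, enum dma_data_direction direction)
{
	/*
	 * Invalidate any lines the CPU may have speculatively prefetched
	 * while the device was writing the buffer behind the cache's back,
	 * so the CPU reads the DMA'd data rather than stale values.
	 */
	if (cpu_needs_post_dma_flush(dev))
		__dma_sync(dma_addr_to_page(dev, dma_handle),
			   dma_handle & ~PAGE_MASK, size, direction);
	plat_post_dma_flush(dev);
}

[cpu_has_maar itself would typically be provided by asm/cpu-features.h as
a test of a CPU option bit (MIPS_CPU_MAAR) set during CPU probing; that
definition sits outside this diff.]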