On Fri, Jun 10, 2022 at 02:43:07AM +0200, Heiko Stuebner wrote:
> +config RISCV_ISA_ZICBOM
> +	bool "Zicbom extension support for non-coherent dma operation"
> +	select ARCH_HAS_DMA_PREP_COHERENT
> +	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> +	select ARCH_HAS_SYNC_DMA_FOR_CPU
> +	select ARCH_HAS_SETUP_DMA_OPS
> +	select DMA_DIRECT_REMAP
> +	select RISCV_ALTERNATIVE
> +	default y
> +	help
> +	   Adds support to dynamically detect the presence of the ZICBOM extension

Overly long line here.

> +	   (Cache Block Management Operations) and enable its usage.
> +
> +	   If you don't know what to do here, say Y.

But more importantly, I think the whole text here is not very helpful.
What users care about is non-coherent DMA support.  What extension is
used for that is rather secondary.  Also please capitalize DMA.

> +void arch_sync_dma_for_device(phys_addr_t paddr, size_t size, enum dma_data_direction dir)
> +{
> +	switch (dir) {
> +	case DMA_TO_DEVICE:
> +		ALT_CMO_OP(CLEAN, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> +		break;
> +	case DMA_FROM_DEVICE:
> +		ALT_CMO_OP(INVAL, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> +		break;
> +	case DMA_BIDIRECTIONAL:
> +		ALT_CMO_OP(FLUSH, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> +		break;
> +	default:
> +		break;
> +	}

Please avoid all these crazy long lines, and use a local variable for
the virtual address.  And why do you pass that virtual address as an
unsigned long to ALT_CMO_OP?  You're going to make your life much
easier if you simply always pass a pointer.

Last but not least, does clean in RISC-V mean writeback, and flush mean
writeback plus invalidate?  If so the code is correct, but the choice of
names in the RISC-V spec is extremely unfortunate.
> +void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size, enum dma_data_direction dir)
> +{
> +	switch (dir) {
> +	case DMA_TO_DEVICE:
> +		break;
> +	case DMA_FROM_DEVICE:
> +	case DMA_BIDIRECTIONAL:
> +		ALT_CMO_OP(INVAL, (unsigned long)phys_to_virt(paddr), size, riscv_cbom_block_size);
> +		break;
> +	default:
> +		break;
> +	}
> +}

Same comment here and in a few other places.

> +
> +void arch_dma_prep_coherent(struct page *page, size_t size)
> +{
> +	void *flush_addr = page_address(page);
> +
> +	memset(flush_addr, 0, size);
> +	ALT_CMO_OP(FLUSH, (unsigned long)flush_addr, size, riscv_cbom_block_size);
> +}

arch_dma_prep_coherent should never zero the memory, that is left for
the upper layers.

> +void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
> +			const struct iommu_ops *iommu, bool coherent)
> +{
> +	/* If a specific device is dma-coherent, set it here */

This comment isn't all that useful.

> +	dev->dma_coherent = coherent;
> +}

But more importantly, this assumes that once this code is built, all
devices are non-coherent by default.  I.e. with this patch applied and
the config option enabled we'll now suddenly start doing cache
management operations on setups that didn't do them before.