On 08/04/2018 01:00 AM, gregkh@xxxxxxxxxxxxxxxxxxx wrote:
> The patch below does not apply to the 4.17-stable tree.
> If someone wants it applied there, or to any other stable or longterm
> tree, then please email the backport, including the original git commit
> id to <stable@xxxxxxxxxxxxxxx>.
>
> thanks,
>
> greg k-h

Sorry, my bad, but we don't need it after all. The patch this was fixing
was merged in 4.18, not 4.17, so we can drop this.

Thx,
-Vineet

> ------------------ original commit in Linus's tree ------------------
>
> From 4c612add7b18844ddd733ebdcbe754520155999b Mon Sep 17 00:00:00 2001
> From: Eugeniy Paltsev <Eugeniy.Paltsev@xxxxxxxxxxxx>
> Date: Tue, 24 Jul 2018 17:13:02 +0300
> Subject: [PATCH] ARC: dma [non IOC]: fix arc_dma_sync_single_for_(device|cpu)
>
> ARC backend for dma_sync_single_for_(device|cpu) was broken as it was
> not honoring the @dir argument and simply forcing it based on the call:
> - arc_dma_sync_single_for_device(dir) assumed DMA_TO_DEVICE (cache wback)
> - arc_dma_sync_single_for_cpu(dir) assumed DMA_FROM_DEVICE (cache inv)
>
> This is not true given the DMA API programming model and has been
> discussed here [1] in some detail.
>
> Interestingly while the deficiency has been there forever, it only started
> showing up after 4.17 dma common ops rework, commit a8eb92d02dd7
> ("arc: fix arc_dma_{map,unmap}_page") which wired up these calls under the
> more commonly used dma_map_page API triggering the issue.
>
> [1]: https://lkml.org/lkml/2018/5/18/979
>
> Fixes: commit a8eb92d02dd7 ("arc: fix arc_dma_{map,unmap}_page")
> Cc: stable@xxxxxxxxxx # v4.17+
> Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@xxxxxxxxxxxx>
> Signed-off-by: Vineet Gupta <vgupta@xxxxxxxxxxxx>
> [vgupta: reworked changelog]
>
> diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c
> index 8c1071840979..ec47e6079f5d 100644
> --- a/arch/arc/mm/dma.c
> +++ b/arch/arc/mm/dma.c
> @@ -129,14 +129,59 @@ int arch_dma_mmap(struct device *dev, struct vm_area_struct *vma,
>  	return ret;
>  }
>  
> +/*
> + * Cache operations depending on function and direction argument, inspired by
> + * https://lkml.org/lkml/2018/5/18/979
> + * "dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20]
> + * dma-mapping: provide a generic dma-noncoherent implementation)"
> + *
> + *          |  map == for_device            |  unmap == for_cpu
> + *          |----------------------------------------------------------------
> + * TO_DEV   |  writeback      writeback     |  none         none
> + * FROM_DEV |  invalidate     invalidate    |  invalidate*  invalidate*
> + * BIDIR    |  writeback+inv  writeback+inv |  invalidate   invalidate
> + *
> + * [*] needed for CPU speculative prefetches
> + *
> + * NOTE: we don't check the validity of direction argument as it is done in
> + * upper layer functions (in include/linux/dma-mapping.h)
> + */
> +
>  void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
>  		size_t size, enum dma_data_direction dir)
>  {
> -	dma_cache_wback(paddr, size);
> +	switch (dir) {
> +	case DMA_TO_DEVICE:
> +		dma_cache_wback(paddr, size);
> +		break;
> +
> +	case DMA_FROM_DEVICE:
> +		dma_cache_inv(paddr, size);
> +		break;
> +
> +	case DMA_BIDIRECTIONAL:
> +		dma_cache_wback_inv(paddr, size);
> +		break;
> +
> +	default:
> +		break;
> +	}
>  }
>  
>  void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
>  		size_t size, enum dma_data_direction dir)
>  {
> -	dma_cache_inv(paddr, size);
> +	switch (dir) {
> +	case DMA_TO_DEVICE:
> +		break;
> +
> +	/* FROM_DEVICE invalidate needed if speculative CPU prefetch only */
> +	case DMA_FROM_DEVICE:
> +	case DMA_BIDIRECTIONAL:
> +		dma_cache_inv(paddr, size);
> +		break;
> +
> +	default:
> +		break;
> +	}
>  }