On Mon, Aug 05, 2019 at 03:02:40PM +0200, Greg Kroah-Hartman wrote:
> [ Upstream commit 449fa54d6815be8c2c1f68fa9dbbae9384a7c03e ]
>
> dma_map_sg() may use a swiotlb bounce buffer when the kernel command
> line includes "swiotlb=force" or when the DMA address falls outside
> the dev->dma_mask range. After the device-to-memory DMA completes,
> the user calls dma_sync_sg_for_cpu() to sync with the DMA buffer and
> copy the data back to the original virtual buffer. So
> dma_direct_sync_sg_for_cpu() should use the swiotlb physical address,
> not the original physical address from sg_phys(sg).
>
> dma_direct_sync_sg_for_device() has the same issue; correct it as
> well.
>
> Fixes: 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code")
> Signed-off-by: Fugang Duan <fugang.duan@xxxxxxx>
> Reviewed-by: Robin Murphy <robin.murphy@xxxxxxx>
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
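For reference, the gist of the upstream fix is to derive the physical
address from the scatterlist's DMA address rather than from
sg_phys(sg). A rough sketch of the fixed dma_direct_sync_sg_for_cpu()
loop in kernel/dma/direct.c, reconstructed from the commit message
above (exact helper signatures vary across kernel versions):

	for_each_sg(sgl, sg, nents, i) {
		/*
		 * Translate the DMA address back to a physical address.
		 * If swiotlb bounced this segment, this resolves to the
		 * bounce buffer, whereas sg_phys(sg) would return the
		 * original page and the sync would miss the bounced data.
		 */
		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));

		if (!dev_is_dma_coherent(dev))
			arch_sync_dma_for_cpu(dev, paddr, sg->length, dir);

		/* Copy the bounce buffer contents back for the CPU. */
		if (unlikely(is_swiotlb_buffer(paddr)))
			swiotlb_tbl_sync_single(dev, paddr, sg->length,
					dir, SYNC_FOR_CPU);
	}

dma_direct_sync_sg_for_device() gets the symmetric change with
SYNC_FOR_DEVICE.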
I'm going to drop this one. There's a fix for it upstream, but that
fix also seems to want 0036bc73ccbe ("drm/msm: stop abusing
dma_map/unmap for cache"), which we're not taking, so I'm dropping
this one as well. If someone wants it in the stable trees, please send
a tested backport.

--
Thanks,
Sasha