On 1/16/19 8:36 AM, Christoph Hellwig wrote:
> On Wed, Jan 16, 2019 at 07:30:02AM +0100, Gerd Hoffmann wrote:
>> Hi,
>>
>>> +	if (!dma_map_sg(dev->dev, xen_obj->sgt->sgl, xen_obj->sgt->nents,
>>> +			DMA_BIDIRECTIONAL)) {
>>> +		ret = -EFAULT;
>>> +		goto fail_free_sgt;
>>> +	}
>>
>> Hmm, so it seems the arm guys could not come up with a suggestion for how
>> to solve that one in a better way. OK, let's go with this then.
>>
>> But didn't we agree that this deserves a comment explaining that the
>> purpose of the dma_map_sg() call is to flush caches and that there is
>> no actual DMA happening here?
>
> Using a dma mapping call to flush caches is a complete no-go. But the
> real question is why you'd even want to flush caches if you do not
> want a dma mapping?
>
> This whole issue keeps getting more and more confusing.

Well, I don't really do DMA here; instead, the buffers in question are
shared with another Xen domain, so effectively this can be thought of as a
sort of DMA, where the "device" is that remote domain. If the buffers are
not flushed, the remote side sees an inconsistency, which in my case shows
up as artifacts on screen while displaying the buffers.

When the buffers are allocated via the DMA API there are no artifacts, and
when they are allocated with shmem + DMA mapping there are no artifacts
either. The only offending use case is shmem-backed buffers that are not
flushed at all.
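
For reference, here is a minimal self-contained sketch of the pattern being
debated, with the flushing intent spelled out in comments. The helper name
and the caller-supplied sg_table are assumptions for illustration only;
dma_map_sg() and DMA_BIDIRECTIONAL are the real kernel APIs used by the
quoted patch, and this is not the actual driver code.

	/*
	 * Illustrative sketch, not the driver code: shmem-backed pages that
	 * will be granted to another Xen domain are run through dma_map_sg()
	 * purely for its cache-flushing side effect. No device ever performs
	 * DMA on the buffer; the "device" here is the remote domain.
	 */
	#include <linux/device.h>
	#include <linux/dma-mapping.h>
	#include <linux/scatterlist.h>

	static int flush_shared_pages(struct device *dev, struct sg_table *sgt)
	{
		/* dma_map_sg() returns the number of mapped entries, 0 on error. */
		if (!dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL))
			return -EFAULT;

		/*
		 * The patch in this thread keeps the mapping around for the
		 * lifetime of the buffer even though it is never used for
		 * real DMA; only the cache maintenance done by the map
		 * operation matters here, which is exactly the misuse of
		 * the streaming DMA API that Christoph is objecting to.
		 */
		return 0;
	}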