On Thu, Apr 16, 2020 at 12:49:56PM +0300, Dan Carpenter wrote:
> On Tue, Apr 14, 2020 at 04:18:47PM +0200, Ørjan Eide wrote:
> > @@ -238,6 +242,10 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
> >  					struct sg_table *table,
> >  					enum dma_data_direction direction)
> >  {
> > +	struct ion_dma_buf_attachment *a = attachment->priv;
> > +
> > +	a->mapped = false;
>
> Possibly a stupid question but here we're not holding a lock.  Is
> concurrency an issue?

Thanks for taking a look.

Here, and in ion_map_dma_buf() where mapped is set, this should be safe.
The callers must synchronize calls to map/unmap and ensure that they are
called in pairs.

I think there may be a potential race between where mapped is set here
and where it's read in ion_dma_buf_{begin,end}_cpu_access(). Coherency
issues on the mapped bool itself won't blow anything up; at worst the
contents of the buffer may be invalid, because the cache synchronization
may have been skipped. This is still an improvement: before, it would
either crash or repeatedly sync the wrong page.

The mapped state is tied to the dma_map_sg() call, so we would need to
take buffer->lock around both dma_map_sg() and the setting of mapped to
be sure that the buffer will always have been synced. A rough sketch of
what I mean is at the bottom of this mail.

> > +
> >  	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> >  }
> >
> > @@ -297,6 +305,8 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> >
> >  	mutex_lock(&buffer->lock);
> >  	list_for_each_entry(a, &buffer->attachments, list) {
> > +		if (!a->mapped)
> > +			continue;
> >  		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
> > 				    direction);
> >  	}
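To make that concrete: this is untested and the exact shape may be
wrong, but holding buffer->lock across both the mapping and the flag
update could look roughly like the below. I'm assuming the ion_buffer is
reachable via attachment->dmabuf->priv, as elsewhere in the staging
driver.

static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
					enum dma_data_direction direction)
{
	struct ion_dma_buf_attachment *a = attachment->priv;
	struct ion_buffer *buffer = attachment->dmabuf->priv;
	struct sg_table *table = a->table;

	/*
	 * Take buffer->lock so that begin/end_cpu_access can never
	 * observe mapped == true before the sglist has actually been
	 * mapped for the device.
	 */
	mutex_lock(&buffer->lock);
	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
			direction)) {
		mutex_unlock(&buffer->lock);
		return ERR_PTR(-ENOMEM);
	}
	a->mapped = true;
	mutex_unlock(&buffer->lock);

	return table;
}

and correspondingly in ion_unmap_dma_buf():

	mutex_lock(&buffer->lock);
	a->mapped = false;
	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
	mutex_unlock(&buffer->lock);

That would serialize the mapped transitions against the
dma_sync_sg_for_{cpu,device} loops, at the cost of holding the mutex
across dma_map_sg()/dma_unmap_sg().

--
Ørjan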