2013/6/17 Russell King - ARM Linux <linux@xxxxxxxxxxxxxxxx>
> On Mon, Jun 17, 2013 at 10:04:45PM +0900, Inki Dae wrote:
>> It's just to implement a thin sync framework coupling cache operations.
>> This approach is based on dma-buf for a more generic implementation than
>> the Android sync driver or KDS.
>>
>> The described steps may be summarized as:
>>         lock -> cache operation -> CPU or DMA access to a buffer/s -> unlock
>>
>> I think there is no need to get complicated for such an approach, at
>> least for most devices sharing system memory. Simple is best.
> But hang on, doesn't the dmabuf API already provide that?
>
> The dmabuf API already uses dma_map_sg() and dma_unmap_sg() by providers,
> and the rules around the DMA API are that:
>
>         dma_map_sg()
>         /* DMA _ONLY_ has access, CPU should not access */
>         dma_unmap_sg()
>         /* DMA may not access, CPU can access */
>
> It's a little more than that if you include the sync_sg_for_cpu and
> sync_sg_for_device APIs too - but the above is the general idea. What
> this means from the dmabuf API point of view is that once you attach to
> a dma_buf, and call dma_buf_map_attachment() to get the SG list, the CPU
> doesn't have ownership of the buffer and _must_ _not_ access it via any
> other means - including using the other dma_buf methods - until either
> the appropriate dma_sync_sg_for_cpu() call has been made or the DMA
> mapping has been removed via dma_buf_unmap_attachment().
>
> So, the sequence should be:
>
>         dma_buf_map_attachment()
>         /* do DMA */
>         dma_buf_unmap_attachment()
>         /* CPU can now access the buffer */
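For reference, that sequence in importer-side code - a minimal sketch
only, assuming a `dev' and an already-imported `buf' exist, and with
start_dma_and_wait() as a made-up stand-in for a real driver's DMA
kick-off, not an actual API:

        #include <linux/dma-buf.h>
        #include <linux/dma-mapping.h>
        #include <linux/err.h>

        static int import_and_dma(struct device *dev, struct dma_buf *buf)
        {
                struct dma_buf_attachment *attach;
                struct sg_table *sgt;
                int ret;

                attach = dma_buf_attach(buf, dev);
                if (IS_ERR(attach))
                        return PTR_ERR(attach);

                /* From here the device owns the buffer: no CPU access. */
                sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
                if (IS_ERR(sgt)) {
                        ret = PTR_ERR(sgt);
                        goto out_detach;
                }

                ret = start_dma_and_wait(dev, sgt);     /* do DMA */

                /* If the CPU must touch the buffer while it stays mapped,
                 * ownership has to be bounced back explicitly. */
                dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->orig_nents,
                                    DMA_BIDIRECTIONAL);
                /* ... CPU access ... */
                dma_sync_sg_for_device(dev, sgt->sgl, sgt->orig_nents,
                                       DMA_BIDIRECTIONAL);

                /* Hand the buffer back to the CPU for good. */
                dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
        out_detach:
                dma_buf_detach(buf, attach);
                return ret;
        }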
Exactly right. But that is definitely not my point. Could you please look at the simple example below?
(Presume that the CPU and DMA share a buffer, and that the buffer is mapped into user space as cacheable.)
handle1 = drm_gem_fd_to_handle(a dmabuf fd); ----> 1
...
...
va1 = drm_gem_mmap(handle1);
va2 = drm_gem_mmap(handle2);
va3 = malloc(size);
...
...
while (conditions) {
        memcpy(va1, some data, size);
        ...
        drm_xxx_set_src_dma_buffer(handle1, ...);
        ...
        drm_xxx_set_dst_dma_buffer(handle2, ...);
        ...
        /* user needs to request cache clean here. */ ----> 2
        ...
        /* blocked until dma operation is completed. */
        drm_xxx_start_dma(...);
        ...
        /* user needs to request cache invalidate here. */ ----> 3
        memcpy(va3, va2, size);
}
gem_close(handle1); -----> 4
dma_buf_map_attachment() is called at point 1 and dma_buf_unmap_attachment() is called at point 4. But what about points 2 and 3? Those are my point.

As I already mentioned in the document, user processes would need to request cache operations repeatedly, at points 2 and 3, for as long as the condition is true, because in this case the kernel side cannot be aware of when the CPU accesses a shared buffer. Such code could be used by user applications here and there, and we cannot prevent an application from requesting excessive cache operations, even though it is overusing them unnecessarily.

So my approach is just an initial idea to prevent the overuse of cache operations: with this approach, all cache operations are done on the kernel side, and that is why I try to couple cache operations with buffer synchronization between CPU and DMA.
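For illustration, the loop above might look like this under such a
framework - a rough sketch only; lock_dma_buffer()/unlock_dma_buffer()
are hypothetical placeholder names, not an existing interface:

        while (conditions) {
                /* kernel takes CPU ownership and does any needed
                 * cache maintenance internally. */
                lock_dma_buffer(handle1);
                memcpy(va1, some data, size);
                /* unlock cleans the cache once, on the kernel side. */
                unlock_dma_buffer(handle1);

                drm_xxx_set_src_dma_buffer(handle1, ...);
                drm_xxx_set_dst_dma_buffer(handle2, ...);
                /* blocked until dma operation is completed. */
                drm_xxx_start_dma(...);

                /* kernel invalidates only if DMA actually wrote the
                 * buffer, so redundant cache operations are avoided. */
                lock_dma_buffer(handle2);
                memcpy(va3, va2, size);
                unlock_dma_buffer(handle2);
        }

The clean/invalidate decisions move out of the application and into the
lock/unlock path, where the kernel can skip them when they are not needed.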
Thanks,
Inki Dae