On Mon, Feb 02, 2015 at 03:30:21PM -0500, Rob Clark wrote:
> On Mon, Feb 2, 2015 at 11:54 AM, Daniel Vetter <daniel@xxxxxxxx> wrote:
> >> My initial thought is for dma-buf to not try to prevent something that
> >> an exporter can actually do.. I think the scenario you describe could
> >> be handled by two sg-lists, if the exporter was clever enough.
> >
> > That's already needed: each attachment has its own sg-list. After all,
> > there's no array of dma_addr_t in the sg tables, so you can't use one
> > sg list for more than one mapping. And due to different iommus,
> > different devices can easily end up with different addresses.
>
> Well, to be fair it may not be explicitly stated, but currently one
> should assume the dma_addr_t's in the dmabuf sglist are bogus. With
> gpus that implement per-process/context page tables, I'm not really
> sure that there is a sane way to actually do anything else..

Hm, what do per-process/context page tables have to do with this? At
least on i915 we have two levels of page tables:
- the first level for vm/device isolation, used through the dma api;
- the second level for per-gpu-context isolation and context switching,
  handled internally.

Since the dma api currently has no concept of contexts or different page
tables, I don't see how you could use that at all.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel
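
To illustrate the per-attachment sg-list point made in the thread above,
here is a minimal importer-side sketch. It assumes two hypothetical
importing devices (dev_a, dev_b) and uses only the standard dma-buf
attach/map entry points; each attachment yields its own sg_table, so the
dma_addr_t values seen by the two devices are free to differ.

/*
 * Hypothetical importer-side sketch (not from the mail above): every
 * importing device attaches separately and gets its own sg_table back,
 * so the dma_addr_t values can differ per device/iommu.
 */
#include <linux/dma-buf.h>
#include <linux/err.h>

static int map_for_two_devices(struct dma_buf *buf,
                               struct device *dev_a, struct device *dev_b)
{
        struct dma_buf_attachment *att_a, *att_b;
        struct sg_table *sgt_a, *sgt_b;
        int ret;

        /* One attachment per importing device. */
        att_a = dma_buf_attach(buf, dev_a);
        if (IS_ERR(att_a))
                return PTR_ERR(att_a);

        att_b = dma_buf_attach(buf, dev_b);
        if (IS_ERR(att_b)) {
                ret = PTR_ERR(att_b);
                goto err_detach_a;
        }

        /*
         * Each attachment is mapped into its own sg_table; there is no
         * shared array of dma_addr_t, so sgt_a and sgt_b may carry
         * different bus addresses for the same backing pages.
         */
        sgt_a = dma_buf_map_attachment(att_a, DMA_BIDIRECTIONAL);
        if (IS_ERR(sgt_a)) {
                ret = PTR_ERR(sgt_a);
                goto err_detach_b;
        }

        sgt_b = dma_buf_map_attachment(att_b, DMA_BIDIRECTIONAL);
        if (IS_ERR(sgt_b)) {
                ret = PTR_ERR(sgt_b);
                goto err_unmap_a;
        }

        /* ... program both devices from their respective sg_tables ... */
        ret = 0;

        dma_buf_unmap_attachment(att_b, sgt_b, DMA_BIDIRECTIONAL);
err_unmap_a:
        dma_buf_unmap_attachment(att_a, sgt_a, DMA_BIDIRECTIONAL);
err_detach_b:
        dma_buf_detach(buf, att_b);
err_detach_a:
        dma_buf_detach(buf, att_a);
        return ret;
}

Note that nothing in this sketch asks the exporter for a
device-independent dma_addr_t; whether the addresses handed back per
attachment are meaningful is exactly what the thread is debating.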