On Mon, Feb 2, 2015 at 4:46 PM, Russell King - ARM Linux
<linux@xxxxxxxxxxxxxxxx> wrote:
> On Mon, Feb 02, 2015 at 03:30:21PM -0500, Rob Clark wrote:
>> On Mon, Feb 2, 2015 at 11:54 AM, Daniel Vetter <daniel@xxxxxxxx> wrote:
>> >> My initial thought is for dma-buf to not try to prevent something that
>> >> an exporter can actually do.. I think the scenario you describe could
>> >> be handled by two sg-lists, if the exporter was clever enough.
>> >
>> > That's already needed, each attachment has its own sg-list. After all
>> > there's no array of dma_addr_t in the sg tables, so you can't use one sg
>> > for more than one mapping. And due to different IOMMUs, different devices
>> > can easily end up with different addresses.
>>
>> Well, to be fair it may not be explicitly stated, but currently one
>> should assume the dma_addr_t's in the dmabuf sglist are bogus. With
>> GPUs that implement per-process/context page tables, I'm not really
>> sure that there is a sane way to actually do anything else..
>
> That's incorrect - and goes dead against the design of scatterlists.

yeah, a bit of an abuse, although I'm not sure I see a much better way
when a device vaddr depends on user context..

> Not only that, but it is entirely possible that you may get handed
> memory via dmabufs for which there are no struct page's associated
> with that memory - think about display systems which have their own
> video memory which is accessible to the GPU, but it isn't system
> memory.

well, I guess anyways when it comes to sharing buffers, it won't be
the vram placement of the bo that gets shared ;-)

BR,
-R

> In those circumstances, you have to use the dma_addr_t's and not the
> pages.
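
For reference, a minimal importer-side sketch of the pattern under
discussion: each importer attaches and gets its own mapped sg_table, and
consumes only the dma_addr_t's (sg_dma_address/sg_dma_len), never
sg_page(), since the exporter's memory may have no struct page backing.
program_device_pte() is a hypothetical placeholder for whatever the
importer does with each segment, and error handling is trimmed:

    #include <linux/dma-buf.h>
    #include <linux/scatterlist.h>
    #include <linux/err.h>

    /* Hypothetical importer-specific helper, not a real kernel API. */
    static void program_device_pte(struct device *dev, dma_addr_t addr,
                                   unsigned int len);

    static int import_and_map(struct device *dev, struct dma_buf *dmabuf)
    {
            struct dma_buf_attachment *attach;
            struct sg_table *sgt;
            struct scatterlist *sg;
            int i, ret;

            attach = dma_buf_attach(dmabuf, dev);
            if (IS_ERR(attach))
                    return PTR_ERR(attach);

            /* Each attachment gets its own sg_table / mapping. */
            sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
            if (IS_ERR(sgt)) {
                    ret = PTR_ERR(sgt);
                    dma_buf_detach(dmabuf, attach);
                    return ret;
            }

            /*
             * Walk only the DMA addresses; sg_page() may be meaningless
             * here (e.g. a VRAM exporter with no struct page backing).
             */
            for_each_sg(sgt->sgl, sg, sgt->nents, i)
                    program_device_pte(dev, sg_dma_address(sg),
                                       sg_dma_len(sg));

            dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
            dma_buf_detach(dmabuf, attach);
            return 0;
    }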