On 20.04.2018 12:17, Christoph Hellwig wrote:
> On Fri, Apr 20, 2018 at 10:58:50AM +0200, Christian König wrote:
>>> Yes, there's a bit of a layering violation insofar as drivers really
>>> shouldn't each have their own copy of "how do I convert a piece of dma
>>> memory into dma-buf", but that doesn't render the interface a bad idea.
>>
>> Completely agree on that.
>>
>> What we need is an sg_alloc_table_from_resources(dev, resources,
>> num_resources) which does the handling common to all drivers.
>
> A structure that contains
> {page,offset,len} + {dma_addr+dma_len}
> is not a good container for storing
> {virt addr, dma_addr, len}
> no matter what interface you build around it.
Why not? I mean, at least for my use case we actually don't need the
virtual address.

What we need is {dma_addr+dma_len} in a consistent interface which can
come from both {page,offset,len} as well as {resource, len}.

What I actually don't need is separate handling for system memory and
resources, but that is exactly what we would get when we don't use sg_table.
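
To illustrate that point, here is a minimal, purely hypothetical
importer-side sketch (function name made up, not from this thread): no
matter whether the sg_table entries were built from struct pages or from
a BAR resource, the consumer only ever touches the DMA side of each
entry.

/*
 * Hypothetical importer-side sketch: the consumer never looks at
 * struct page or virtual addresses, only at {dma_addr, dma_len},
 * regardless of how the sg_table was originally filled.
 */
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <linux/printk.h>

static void importer_program_dma_engine(struct sg_table *sgt, int mapped_nents)
{
	struct scatterlist *sg;
	int i;

	/* mapped_nents is the count returned by dma_map_sg() */
	for_each_sg(sgt->sgl, sg, mapped_nents, i) {
		dma_addr_t addr = sg_dma_address(sg);
		unsigned int len = sg_dma_len(sg);

		/* hand {addr, len} to the device's DMA engine here */
		pr_debug("segment %d: %pad + %u\n", i, &addr, len);
	}
}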
Christian.
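
For reference, a rough sketch of what the sg_alloc_table_from_resources()
helper suggested above might look like; the exact signature (an sg_table
out-parameter, a fixed DMA direction, no error unwinding) is an assumption
here and not taken from the thread.

/*
 * Sketch only: builds one sg entry per MMIO resource.  There is no
 * struct page behind a resource, so only the DMA side of each entry
 * is filled in.  Unwinding of earlier mappings on error and per-device
 * segment-size limits are omitted for brevity.
 */
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <linux/ioport.h>

int sg_alloc_table_from_resources(struct device *dev, struct sg_table *sgt,
				  struct resource **resources,
				  unsigned int num_resources)
{
	struct scatterlist *sg;
	unsigned int i;
	int ret;

	ret = sg_alloc_table(sgt, num_resources, GFP_KERNEL);
	if (ret)
		return ret;

	sg = sgt->sgl;
	for (i = 0; i < num_resources; i++) {
		resource_size_t size = resource_size(resources[i]);
		dma_addr_t addr;

		addr = dma_map_resource(dev, resources[i]->start, size,
					DMA_BIDIRECTIONAL, 0);
		if (dma_mapping_error(dev, addr))
			return -ENOMEM;

		sg_set_page(sg, NULL, size, 0);
		sg_dma_address(sg) = addr;
		sg_dma_len(sg) = size;
		sg = sg_next(sg);
	}

	return 0;
}

An exporter could then hand the resulting sg_table to an importer exactly
like a system-memory-backed one, which is the consistency argued for above.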
> And that is discounting
> all the problems around mapping coherent allocations for other devices,
> or the iommu merging problem we are having another thread on.
>
> So let's come up with a better high level interface first, and then
> worry about how to implement it in the low-level dma-mapping interface
> second. Especially given that my consolidation of the dma_map_ops
> implementations is in full stream and there shouldn't be all that many
> to bother with.
>
> So first question: Do you actually care about having multiple
> pairs of the above, or do you, instead of multiple chunks, just deal with
> a single one of the above? In that case we really should not need that many
> new interfaces as dma_map_resource will be all you need anyway.
>> Christian.
>>> -Daniel
---end quoted text---
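
On the dma_map_resource() point in the quoted mail: for a single contiguous
chunk the existing API already yields the {dma_addr, len} pair directly.
A minimal usage sketch follows; the function and parameter names are
made up for illustration.

/*
 * Minimal sketch of mapping one contiguous MMIO chunk for another
 * device with the existing dma_map_resource() API.
 */
#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int map_single_chunk(struct device *importer, phys_addr_t bar_phys,
			    size_t len, dma_addr_t *out)
{
	dma_addr_t addr;

	addr = dma_map_resource(importer, bar_phys, len,
				DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(importer, addr))
		return -ENOMEM;

	*out = addr;	/* bus address to program into the importing device */
	return 0;
}

The corresponding teardown would use dma_unmap_resource() with the same
size and direction.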
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel