On 20.11.19 13:19, Daniel Vetter wrote:
On Wed, Nov 20, 2019 at 1:09 PM Daniel Vetter <daniel.vetter@xxxxxxxx> wrote:
On Wed, Nov 20, 2019 at 1:02 PM Christian König
<christian.koenig@xxxxxxx> wrote:
What am I missing?
The assumption is that when you want to create a vmap of a DMA-buf
buffer the buffer needs to be pinned somehow.
E.g. without dynamic dma-buf handling you would need to have an active
attachment. With dynamic handling the requirements could be lowered to
using the pin()/unpin() callbacks.
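For illustration, on the importer side that could look roughly like the
following, assuming a dynamic attachment and the dma_buf_pin()/dma_buf_unpin()
entry points; this is a sketch of the idea only, not code from any driver,
and the exact locking rules are exactly what's being discussed here:

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/err.h>

/*
 * Sketch only: an importer keeping the backing storage resident around a
 * vmap by pinning through its dynamic attachment first.
 */
static void *vmap_pinned(struct dma_buf_attachment *attach)
{
	struct dma_buf *dmabuf = attach->dmabuf;
	void *vaddr;
	int ret;

	dma_resv_lock(dmabuf->resv, NULL);
	ret = dma_buf_pin(attach);	/* exporter must not move the buffer now */
	dma_resv_unlock(dmabuf->resv);
	if (ret)
		return ERR_PTR(ret);

	vaddr = dma_buf_vmap(dmabuf);
	if (!vaddr) {
		dma_resv_lock(dmabuf->resv, NULL);
		dma_buf_unpin(attach);
		dma_resv_unlock(dmabuf->resv);
		return ERR_PTR(-ENOMEM);
	}
	return vaddr;
}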
Yeah right now everyone seems to have an attachment, and that's how we
get away with all this. But the interface isn't supposed to work like
that, dma_buf_vmap/unmap is supposed to be a stand-alone thing (you
can call it directly on the struct dma_buf, no need for an
attachment). Also I don't think non-dynamic drivers should ever call
pin/unpin; that's not their job, holding onto a mapping should be enough
to get things pinned.
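I.e. the intended usage is simply something like this (sketch, with the
current interface that hands back a plain kernel pointer):

/*
 * Stand-alone usage as intended: no attachment, no explicit pin by the
 * importer; holding the mapping is what keeps the buffer in place until
 * vunmap.
 */
void *vaddr = dma_buf_vmap(dmabuf);	/* dmabuf is the struct dma_buf itself */

if (!vaddr)
	return -ENOMEM;

/* ... CPU access through vaddr ... */

dma_buf_vunmap(dmabuf, vaddr);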
You also can't lock/unlock inside your vmap callback because you don't
have any guarantee that the pointer stays valid as soon as you drop
your lock.
Well that's why I asked where the pin/unpin pair is. If you lock&pin,
then you do know that the pointer will stay around. But it looks like the
original patch from Dave for ttm-based drivers played fast&loose here
with what should be done.
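To spell that out, the problematic shape is roughly the following
(schematic only, with bo some TTM buffer object): even with the
reservation held around ttm_bo_kmap(), nothing pins the BO, so the
moment the lock is dropped the BO can be evicted or moved and the
returned pointer goes stale.

struct ttm_bo_kmap_obj map;
bool is_iomem;
void *vaddr;
int ret;

dma_resv_lock(bo->base.resv, NULL);
ret = ttm_bo_kmap(bo, 0, bo->num_pages, &map);
dma_resv_unlock(bo->base.resv);
if (ret)
	return ERR_PTR(ret);

vaddr = ttm_kmap_obj_virtual(&map, &is_iomem);
/* ... nothing stops the backing storage from moving from here on ... */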
BTW: What is vmap() still used for?
udl and a bunch of other things (e.g. a bunch of the tiny drivers). Not
much, but not stuff we can drop.
If we're unlucky we'll actually have a problem with these now. For
some of these the attached device is not dma-capable, so dma_map_sg
will go boom. With the cached mapping logic we now have this might go
sideways for dynamic exporters. Did you test your dynamic dma-buf
support for amdgpu with udl?
Short answer: no, not at all. Long one: What the heck is udl? And how is
it not dma-capable?
Worst case we need to get rid of the fake attachment, fix the vmap
locking/pinning, and sort out maybe some more headaches along the way.
Well, off hand we could require that vmap will also pin a DMA-buf and
start fixing amdgpu/nouveau/radeon/qxl.
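Roughly, that would mean the gem_prime_vmap implementations doing
something along these lines; my_bo_pin()/my_bo_kmap()/my_bo_unpin() are
placeholders for the per-driver pin mechanism and ttm_bo_kmap() plumbing,
so this is only a sketch of the ordering, not an actual implementation:

#include <drm/drm_gem.h>
#include <linux/dma-resv.h>

/*
 * "vmap also pins": take the reservation, pin, map, drop the lock.
 * The pin happens under the reservation lock, so the mapping stays
 * valid after unlock.
 */
static void *my_gem_prime_vmap(struct drm_gem_object *obj)
{
	void *vaddr = NULL;
	int ret;

	ret = dma_resv_lock(obj->resv, NULL);
	if (ret)
		return NULL;

	ret = my_bo_pin(obj);		/* placeholder: driver-specific pin */
	if (!ret) {
		vaddr = my_bo_kmap(obj);	/* placeholder: ttm_bo_kmap() wrapper */
		if (!vaddr)
			my_bo_unpin(obj);
	}

	dma_resv_unlock(obj->resv);
	return vaddr;
}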
Christian.
-Daniel
Regards,
Christian.
On 20.11.19 12:47, Daniel Vetter wrote:
Hi all,
I've been looking at dma_buf_v(un)map, with a goal to standardize
locking for at least dynamic dma-buf exporters/importers, most likely
by requiring dma_resv_lock. And I got questions around how this is
supposed to work, since a big chunk of drivers seem to entirely lack
locking around ttm_bo_kmap. A few big ones:
- ttm_bo_kmap looks at bo->mem to figure out what/where to kmap to get
at that buffer. bo->mem is supposed to be protected with
dma_resv_lock, but at least amdgpu/nouveau/radeon/qxl don't grab that
in their prime vmap function.
- between the vmap and vunmap something needs to make sure the backing
storage doesn't move around. I didn't find that anywhere either;
ttm_bo_kmap simply seems to set up the mapping, leaving locking and
refcounting to callers.
- vram helpers have at least locking, but I'm still missing the
refcounting. vmwgfx doesn't bother with vmap.
What am I missing?
Thanks, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch