Hey Daniel, back from vacation and going over our last long thread. I think you didn't reply to my last question below (or at least I can't find it).

Andrey
On 12/17/20 4:13 PM, Andrey Grodzovsky wrote:
OK, so I assumed that with vmap_local you were trying to solve the problem of quick reinsertion of another device into the same MMIO range that my driver still points to, but are you actually trying to solve the issue of exported dma-bufs outliving the device? For that we have the drm_device refcount in the GEM layer, I think.
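
A minimal sketch of the refcounting pattern in question, modelled on what the DRM prime export/release helpers do; the sketch_* names and bodies are illustrative only, not the actual kernel code:

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem.h>

/* Export path: the new dma-buf takes a reference on both the GEM BO
 * (assumed to be passed in exp_info->priv) and the drm_device, so the
 * exported buffer keeps the software objects alive. */
static struct dma_buf *sketch_gem_export(struct drm_device *dev,
					 struct dma_buf_export_info *exp_info)
{
	struct dma_buf *buf = dma_buf_export(exp_info);

	if (IS_ERR(buf))
		return buf;

	drm_gem_object_get(exp_info->priv);
	drm_dev_get(dev);
	return buf;
}

/* Release op: the references are dropped only when the last importer
 * closes the dma-buf, which can be long after the device is unplugged. */
static void sketch_gem_release(struct dma_buf *buf)
{
	struct drm_gem_object *obj = buf->priv;
	struct drm_device *dev = obj->dev;

	drm_gem_object_put(obj);
	drm_dev_put(dev);
}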
Those are two completely different lifetime problems. Don't mix them up :-)

One problem is the hardware disappearing, and for that we _have_ to guarantee timeliness, or otherwise the PCI subsystem gets pissed (since, like you say, a new device might show up and need its MMIO bars assigned to I/O ranges). The other is the lifetime of the software objects we use as interfaces, both from userspace and from other kernel drivers. There we fundamentally can't enforce timely cleanup and have to resort to refcounting.
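
A minimal sketch of that distinction, using the existing drm_dev_enter()/drm_dev_exit() guards for the hardware side; the register offset and the sketch_* function are made up for illustration:

#include <linux/io.h>
#include <drm/drm_drv.h>

static u32 sketch_read_status(struct drm_device *ddev, void __iomem *mmio)
{
	int idx;
	u32 val = 0;

	/* Hardware side: must be bounded in time. Once drm_dev_unplug()
	 * has run, drm_dev_enter() fails and this section is never
	 * entered again, so the MMIO bar can be released promptly for a
	 * newly inserted device. */
	if (drm_dev_enter(ddev, &idx)) {
		val = readl(mmio + 0x10);	/* 0x10: hypothetical status reg */
		drm_dev_exit(idx);
	}

	/* Software side: the drm_device itself stays around for as long
	 * as userspace or other drivers hold references (drm_dev_get/put),
	 * and that cannot be forced to finish on a deadline. */
	return val;
}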
So regarding the second issue, as I mentioned above, don't we already use drm_dev_get/put for exported BOs? Earlier in this discussion you mentioned that we are OK for dma-bufs, since we already have the refcounting at the GEM layer, and that the real life-cycle problem we have is the dma_fences, for which there is no drm_dev refcounting. It seems to me then that vmap_local is superfluous: for exported dma_bufs we already have the refcounting, and for dma_fences it won't help.
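
A minimal sketch of the dma_fence side of that argument: the fence carries its own refcount, and nothing in this path takes a drm_device reference, which is exactly the gap described above; the sketch_* function is hypothetical:

#include <linux/dma-fence.h>
#include <linux/printk.h>

/* Wait on a fence obtained from another driver. The caller's
 * dma_fence_get() reference keeps the fence (and its ops) alive, but
 * it does not pin the drm_device that produced it, so the fence can
 * outlive an unplugged device unless the driver handles that itself. */
static void sketch_wait_on_exported_fence(struct dma_fence *fence)
{
	long ret;

	ret = dma_fence_wait(fence, false);
	if (ret < 0)
		pr_warn("fence wait interrupted: %ld\n", ret);

	dma_fence_put(fence);	/* drop the reference the caller took */
}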
Andrey
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel