Re: [PATCH 1/6] dma-buf: add dynamic DMA-buf handling v8

On Wed, May 22, 2019 at 1:27 PM Daniel Vetter <daniel@xxxxxxxx> wrote:
>
> On Wed, May 22, 2019 at 10:49 AM Christian König
> <ckoenig.leichtzumerken@xxxxxxxxx> wrote:
> >
> > On 22.05.19 at 10:19, Daniel Vetter wrote:
> > > On Wed, May 15, 2019 at 04:38:26PM +0200, Christian König wrote:
> > > [SNIP]
> > > Just this functional comment, since I think polishing API details is
> > > premature while we're not yet clear on how this works.
> > >
> > >> +    /* When the importer is dynamic but the exporter isn't, we need to
> > >> +     * cache the mapping; otherwise we would run into issues with the
> > >> +     * reservation object lock.
> > >> +     */
> > >> +    if (dma_buf_attachment_is_dynamic(attach) &&
> > >> +        !dma_buf_is_dynamic(dmabuf)) {
> > > Isn't this the wrong way round? Dynamic importers should be perfectly fine
> > > with the reservation locks in their map/unmap paths; it's importers
> > > calling exporters there.
> > >
> > > The real problem is a non-dynamic importer, which hasn't been adjusted to
> > > allow the reservation lock in the paths where it maps/unmaps a buffer,
> > > paired with a dynamic exporter. That's where we need to cache the mapping
> > > to avoid the deadlock (or having to change everyone).
> >
> > Well, it could be that this is also a problem, but I actually don't think so.
> >
> > The case I'm describing here certainly is the more obvious problem
> > because the importer is already holding the lock the exporter wants to take.
>
> Hm, isn't that papered over by such exporters enabling the dma-buf
> caching you've just moved from drm_prime to dma-buf?
>
> > On the other hand we could rather easily change that check to
> > dma_buf_attachment_is_dynamic() != dma_buf_is_dynamic() if that is
> > indeed a problem.
>
> Hm yeah.
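
(For illustration, a rough sketch of the condition under discussion, using
Christian's suggested "!=" variant; this is not the actual patch code, and
the attach->sgt/attach->dir caching fields and the unwind label are
assumptions made here for the example:)

	/* Cache the mapping at attach time whenever importer and exporter
	 * disagree about dynamic handling, so the non-dynamic side never
	 * has to take the reservation lock in its map/unmap path.
	 */
	if (dma_buf_attachment_is_dynamic(attach) !=
	    dma_buf_is_dynamic(dmabuf)) {
		struct sg_table *sgt;

		sgt = dmabuf->ops->map_dma_buf(attach, DMA_BIDIRECTIONAL);
		if (!sgt)
			sgt = ERR_PTR(-ENOMEM);
		if (IS_ERR(sgt))
			goto err_unlock; /* hypothetical error unwind */

		attach->sgt = sgt;
		attach->dir = DMA_BIDIRECTIONAL;
	}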

Forgot to add: IIRC it was buffer sharing between i915 and amdgpu that hit
this. Can't say for sure since intel-gfx isn't cc'ed on this version, so our
CI hasn't picked it up.
-Daniel

>
> > >> +            struct sg_table *sgt;
> > >> +
> > >> +            sgt = dmabuf->ops->map_dma_buf(attach, DMA_BIDIRECTIONAL);
> > > And unfortunately the non-dynamic importer, i.e. the legacy/current code,
> > > is also the one which uses flags other than DMA_BIDIRECTIONAL. At least on
> > > ARM, and that's the only place where this matters, because there the dma
> > > api might do cache flushing.
> >
> > Well the only implementer for now is amdgpu, and amdgpu always requires
> > a coherent bidirectional mapping.
> >
> > So this won't be a problem unless the ARM drivers start to implement
> > dynamic DMA-buf handling themselves or start to talk to amdgpu (which
> > wouldn't have worked before anyway).
>
> Hm, feels a bit evil. I just heard some SoC people tell me that drm
> isn't cool because it likes to ignore SoC at the expense of x86.
>
> Otoh I don't see a way out either, and maybe this will be motivation
> enough to make the dma-api cache management a bit more explicit. Not
> that I expect much change there ...
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
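
To spell out the direction concern above: once the mapping is cached at
attach time with DMA_BIDIRECTIONAL, a later map call can only hand back
that cached table, whatever direction a non-dynamic importer asks for. A
minimal sketch, again assuming the cached table lives in attach->sgt and
attach->dir (illustrative only, not necessarily the patch as posted):

	struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
						enum dma_data_direction direction)
	{
		might_sleep();

		if (WARN_ON(!attach || !attach->dmabuf))
			return ERR_PTR(-EINVAL);

		if (attach->sgt) {
			/*
			 * The cached table was created with DMA_BIDIRECTIONAL,
			 * so a request for a different direction cannot be
			 * honoured here. Harmless for amdgpu on x86, but on ARM
			 * the dma api might have wanted to do direction-specific
			 * cache maintenance.
			 */
			if (attach->dir != direction &&
			    attach->dir != DMA_BIDIRECTIONAL)
				return ERR_PTR(-EBUSY);

			return attach->sgt;
		}

		return attach->dmabuf->ops->map_dma_buf(attach, direction);
	}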



-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel



