Re: [RFC] replacing dma_resv API

On Wed, Aug 21, 2019 at 02:31:37PM +0200, Christian König wrote:
> Hi everyone,
> 
> In previous discussion it surfaced that different drivers use the shared
> and explicit fences in the dma_resv object with different meanings.
> 
> This is problematic when we share buffers between those drivers, and the
> requirements for implicit and explicit synchronization have led to quite a
> number of workarounds for it.
> 
> So I started an effort to get all drivers back to a common understanding
> of what the fences in the dma_resv object mean, and to make the object
> usable for different kinds of workloads independent of the classic DRM
> command submission interface.
> 
> The result is this patch set, which modifies the dma_resv API to get away
> from a single exclusive fence plus multiple shared fences, towards a model
> with explicit categories for writers, readers and others.
> 
> To do this I came up with a new container called dma_resv_fences, which
> can store either a single fence or multiple fences wrapped in a
> dma_fence_array.
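
Just to check I'm parsing this right: does the new scheme boil down to
something like the sketch below? All names are invented by me, not taken
from your patches:

#include <linux/dma-fence.h>
#include <linux/ww_mutex.h>

struct dma_resv_fences {
        /* either a single fence or a dma_fence_array, written under the
         * lock, read under RCU */
        struct dma_fence __rcu *fence;
};

struct dma_resv {
        struct ww_mutex lock;

        /* one bucket per usage category instead of excl + shared */
        struct dma_resv_fences writers;
        struct dma_resv_fences readers;
        struct dma_resv_fences others;
};

I.e. a reader only ever sees whichever fence/array was installed last,
never a half-updated list?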
> 
> This actually turned out to be quite a bit simpler, since we no longer
> need the complicated dance between RCU and sequence-count-protected
> updates.
> 
> Instead we can just grab a reference to the dma_fence_array under RCU
> and so keep the current state of synchronization alive until we are done
> with it.
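
So the read side becomes a plain RCU get, roughly like this (again, the
names are made up by me):

/*
 * No read_seqcount_begin()/read_seqcount_retry() loop any more: just take
 * a reference to whatever fence (or dma_fence_array) is currently
 * installed and use that snapshot until we're done with it.
 */
static struct dma_fence *dma_resv_fences_get(struct dma_resv_fences *fences)
{
        struct dma_fence *fence;

        rcu_read_lock();
        fence = dma_fence_get_rcu_safe(&fences->fence);
        rcu_read_unlock();

        return fence;           /* caller drops it with dma_fence_put() */
}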
> 
> This results in both a small performance improvement, since we don't need
> as many barriers any more, and fewer lines of code in the actual
> implementation.

I think you traded correctness for the lack of barriers/retry loops here,
see my reply later on. But I haven't grokked the full thing in detail, so I
might easily have missed something.

But high level first, and I don't get this at all. Current state:

Ill-defined semantics, no docs. You have to look at the implementations.

New state after your patch series:

Ill-defined semantics (but hey, different!), no docs. You still have to
look at the implementations to understand what's going on.

I think what has actually changed (aside from the entire implementation)
is just these three things:
- we now allow multiple exclusive fences
- exclusive was renamed to writer fences, shared to reader fences (rough
  sketch of what I'd expect at the call sites below)
- there's a new "other" group, for ... otherworldly fences?
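
At the call sites I'd expect that to look roughly like this (the "after"
names are my invention):

/* before: one exclusive slot plus N shared slots */
dma_resv_add_excl_fence(resv, fence);
dma_resv_add_shared_fence(resv, fence);

/* after (invented names): pick a category when adding a fence */
dma_resv_add_writer_fence(resv, fence);
dma_resv_add_reader_fence(resv, fence);
dma_resv_add_other_fence(resv, fence);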

Afaiui we have the following two issues with the current fence semantics:
- amdgpu came up with a totally different notion of implicit sync, using
  the owner to figure out when to sync (rough sketch of my understanding
  after this list). I have no idea at all how that meshes with multiple
  writers, but I guess there's a connection.
- amdkfd does a very fancy eviction/preempt fence. Is that what the other
  bucket is for?
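
For the amdgpu side, my (quite possibly wrong) understanding of the
owner-based scheme is roughly the following; fence_owner() is a made-up
helper, not the actual amdgpu code:

/*
 * Implicit sync is skipped for fences submitted by the same owner (e.g.
 * the same process/context); only foreign fences are waited on.
 */
static bool need_implicit_sync(struct dma_fence *fence, void *owner)
{
        if (fence_owner(fence) == owner)
                return false;   /* same owner, rely on explicit sync */

        return true;            /* foreign fence, wait for it */
}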

I guess I could read the amdgpu/ttm code in very fine detail and figure
this out, but I really don't see how that's moving stuff forward.

Also, I think it'd be really good to decouple semantic changes from
implementation changes, because untangling them if we have to revert one
or the other is going to be nigh impossible. And dma_* is not really an
area where we can proudly claim that reverts don't happen.

Cheers, Daniel

> 
> Please review and/or comment,
> Christian. 
> 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch