Re: [PATCH 22/22] drm/i915: Export our request as a dma-buf fence on the reservation object

On Thu, Jul 28, 2016 at 01:17:39PM +0100, Chris Wilson wrote:
> On Thu, Jul 28, 2016 at 01:59:45PM +0200, Daniel Vetter wrote:
> > On Thu, Jul 28, 2016 at 11:40:29AM +0100, Chris Wilson wrote:
> > > On Thu, Jul 28, 2016 at 12:32:42PM +0200, Daniel Vetter wrote:
> > > > On Wed, Jul 27, 2016 at 12:15:00PM +0100, Chris Wilson wrote:
> > > > > If the GEM objects being rendered with in this request have been
> > > > > exported via dma-buf to a third party, hook ourselves into the dma-buf
> > > > > reservation object so that the third party can serialise with our
> > > > > rendering via the dma-buf fences.
> > > > > 
> > > > > Testcase: igt/prime_busy
> > > > > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > > > 
> > > > Style nit: I prefer ww_mutex_lock(&resv->lock, NULL); over
> > > > mutex_lock(&resv->lock.base). The former makes it clear it's a ww mutex,
> > > > but we don't bother with the multi-lock dance. The latter needles around
> > > > in implementation details, which it really shouldn't. Please change.
> > > 
> > > Passing NULL as the ww_acquire_ctx is illegal.
> > 
> > Hm, where exactly do you see that? kerneldoc for ww_mutex_lock clearly
> > says that it can be NULL, and the static inline has a check for it and
> > calls mutex_lock in the else path. Which means it /should/ boil down to
> > the exact same code after gcc has pondered it enough.
> 
> But then it explodes. Look at the lockdep. Clearly the kerneldoc is wrong.
> Good job I was reading the code :-p

Hm, that'd be a bug in ww_mutex_lock, and we make plenty of use of a NULL
ctx in drm_modeset_lock.c. How exactly does this blow up? Can you attach
the splat please?
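
For reference, here is a minimal sketch of the two forms being discussed
(this assumes "resv" points at the object's struct reservation_object,
whose ww_mutex member is named "lock"; not the actual i915 code):

	/* Reaches into the ww_mutex implementation detail: */
	mutex_lock(&resv->lock.base);
	/* ... add/update fences on resv ... */
	mutex_unlock(&resv->lock.base);

	/* Same single-lock semantics through the ww_mutex API; a NULL
	 * ww_acquire_ctx skips the multi-lock dance and, per the kerneldoc,
	 * should reduce to a plain mutex_lock on lock->base. */
	ww_mutex_lock(&resv->lock, NULL);
	/* ... add/update fences on resv ... */
	ww_mutex_unlock(&resv->lock);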

> > > > The other wonky bit is that changing reservations on multiple objects
> > > > without the full ww mutex dance is deadlock-risky. But only when you both
> > > > add and wait/stall on fences.
> > > 
> > > Note that it is only so when trying to lock multiple objects simultaneously,
> > > which we do not require.
> > 
> > With deadlocks I didn't mean locking deadlocks, but loops in the fences.
> > Sure, we eventually recover when you have a good gpu driver with hang
> > check, but other drivers might be less fortunate. And if you lock objects
> > individually and update their fences individually, it's fairly simple to
> > race command submission (on different drivers) against one another with
> > objects listed in reverse order and end up with an ABBA fence dependency.
> 
> The dependency graph has to be acyclic, or else it is unresolvable. Enforcing
> that here just requires the fences to be collected before the request is
> added to the exposed dma-bufs. You don't need to lock all the incoming
> fences at once just to add them to the list.
> 
> This patch addresses that by exposing our request on the dma-buf only after
> all the fences have been added to the collection.
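
A minimal sketch of that ordering (the iterator and the collect helper below
are hypothetical placeholders, and "request->fence" assumes the request embeds
a struct fence; the real i915 code differs):

	/* 1. Gather all incoming fences from every object first, taking one
	 *    reservation lock at a time, before our request is visible
	 *    anywhere. */
	for_each_request_object(obj, request)			/* hypothetical iterator */
		collect_fence_dependencies(request, obj->resv);	/* hypothetical helper */

	/* 2. Only then publish our own fence on the exported dma-bufs
	 *    (excl fence for writes; shared fences would be used for reads).
	 *    Anything that observes it already has all of our dependencies
	 *    recorded, so the fence graph stays acyclic. */
	for_each_request_object(obj, request) {
		ww_mutex_lock(&obj->resv->lock, NULL);		/* one object at a time */
		reservation_object_add_excl_fence(obj->resv, &request->fence);
		ww_mutex_unlock(&obj->resv->lock);
	}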

Hm right ... now I wonder how I came up with that notion again.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



