On Tue, Feb 24, 2015 at 12:58:19AM +0100, Daniel Vetter wrote:
> On Thu, Feb 19, 2015 at 04:41:12PM +0000, Chris Wilson wrote:
> > On Thu, Feb 19, 2015 at 06:18:55PM +0200, Mika Kuoppala wrote:
> > > There are multiple players interested in the ring->request_list
> > > state. Request submission can happen in kernel or user context, the
> > > idle worker goes through the request list to free items, and then
> > > there is the hangcheck worker, which tries to figure out whether a
> > > particular ring is healthy by peeking at the request list, among
> > > other things. If a ring is judged stuck by hangcheck, the error
> > > state is collected, which in turn needs access to
> > > ring->request_list.
> >
> > We have discussed this before. Hangcheck does not need the lock so
> > long as it is serialised with deletion. List processing by hangcheck
> > during concurrent addition is safe.
> >
> > For example, I expect the request locking to look like
> >
> > http://cgit.freedesktop.org/~ickle/linux-2.6/tree/drivers/gpu/drm/i915/i915_gem_request.c#n691
>
> I think longer-term, with per-engine reset and fun stuff like that, we
> probably want the spinlock, just to avoid too many headaches with lock
> auditing. For the execbuf fastpath it should just be one more spinlock
> per ioctl, so hopefully bearable.

But it is not even the locking bug that breaks capture, so what's the point?
-Chris

--
Chris Wilson, Intel Open Source Technology Centre
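
For readers following along, here is a minimal, purely illustrative userspace
sketch of the pattern Chris describes. It is not the i915 code; the struct and
function names are made up. It shows why a lockless walk is safe against
concurrent *addition* (each node is fully initialised before a release store
publishes it), while *deletion* must still be serialised against the reader:

/*
 * Hypothetical illustration of a lockless request-list walk (not i915 code):
 * a "hangcheck" reader traverses the list without a lock while a submitter
 * appends requests concurrently; frees happen only after the reader is done.
 * Build with: gcc -std=c11 -pthread lockless_walk.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct request {
	unsigned int seqno;
	_Atomic(struct request *) next;
};

static _Atomic(struct request *) head;	/* published with release stores */
static struct request *tail;		/* only touched by the submitter */

/* Submission path: initialise the node fully, then publish it. */
static void add_request(unsigned int seqno)
{
	struct request *rq = malloc(sizeof(*rq));

	rq->seqno = seqno;
	atomic_store_explicit(&rq->next, NULL, memory_order_relaxed);
	if (!tail)
		atomic_store_explicit(&head, rq, memory_order_release);
	else
		atomic_store_explicit(&tail->next, rq, memory_order_release);
	tail = rq;
}

/* "Hangcheck": lockless walk. It may miss a request added after the walk
 * started, which is fine; it never sees a half-initialised node. */
static void *hangcheck(void *arg)
{
	unsigned int last = 0;

	(void)arg;
	for (int pass = 0; pass < 1000; pass++)
		for (struct request *rq = atomic_load_explicit(&head, memory_order_acquire);
		     rq;
		     rq = atomic_load_explicit(&rq->next, memory_order_acquire))
			last = rq->seqno;

	printf("hangcheck: last seqno observed %u\n", last);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, hangcheck, NULL);

	for (unsigned int seqno = 1; seqno <= 100000; seqno++)
		add_request(seqno);	/* concurrent with the reader: safe */

	pthread_join(t, NULL);		/* serialise with the reader ... */

	for (struct request *rq = atomic_load_explicit(&head, memory_order_relaxed); rq; ) {
		struct request *next = atomic_load_explicit(&rq->next, memory_order_relaxed);

		free(rq);		/* ... before any deletion/free */
		rq = next;
	}
	return 0;
}

In the real driver the serialisation against deletion comes from the existing
locking around request retirement rather than from joining a thread, but the
ordering requirement is the same.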