On Wed, Feb 22, 2017 at 01:33:22PM +0000, Tvrtko Ursulin wrote:
>
> On 22/02/2017 11:46, Chris Wilson wrote:
> >+void __i915_gem_request_unsubmit(struct drm_i915_gem_request *request)
> >+{
> >+	struct intel_engine_cs *engine = request->engine;
> >+	struct intel_timeline *timeline;
> >+
> >+	assert_spin_locked(&engine->timeline->lock);
> >+
> >+	/* Only unwind in reverse order, required so that the per-context list
> >+	 * is kept in seqno/ring order.
> >+	 */
> >+	GEM_BUG_ON(request->global_seqno != engine->timeline->seqno);
> >+	engine->timeline->seqno--;
> >+
> >+	/* We may be recursing from the signal callback of another i915 fence */
>
> Copy-paste of the comment, or will there really be preemption
> triggered from the signal callback?

I believe it may be. Say an RCS request was waiting on a BCS request,
and we decide to preempt, and can do so immediately. I think being
prepared for the same recursion here is prudent.

> >  static int __i915_sw_fence_call
> >  submit_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state)
> >  {
> >@@ -1034,9 +1083,11 @@ long i915_wait_request(struct drm_i915_gem_request *req,
> >  	if (flags & I915_WAIT_LOCKED)
> >  		add_wait_queue(errq, &reset);
> >
> >-	intel_wait_init(&wait, i915_gem_request_global_seqno(req));
> >+	wait.tsk = current;
> >
> >+restart:
> >  	reset_wait_queue(&req->execute, &exec);
> >+	wait.seqno = i915_gem_request_global_seqno(req);
>
> Not sure if it is worth dropping intel_wait_init, I presume to avoid
> assigning the task twice? It will still be the same task so just
> moving the intel_wait_init here would be clearer.

I was thinking the opposite: since we are looking at wait.seqno
directly elsewhere, I wanted that to be clear.
And current is in a special register, so why pay the cost to reload it
onto the stack :)

> >@@ -542,13 +549,21 @@ static int intel_breadcrumbs_signaler(void *arg)
> >
> >  		i915_gem_request_put(request);
> >  	} else {
> >+		DEFINE_WAIT(exec);
> >+
> >  		if (kthread_should_stop()) {
> >  			GEM_BUG_ON(request);
> >  			break;
> >  		}
> >
> >+		if (request)
> >+			add_wait_queue(&request->execute, &exec);
> >+
> >  		schedule();
> >
> >+		if (request)
> >+			remove_wait_queue(&request->execute, &exec);
> >+
>
> Not directly related, but it made me think: why are we using
> TASK_INTERRUPTIBLE in the signalers? Shouldn't it be
> TASK_UNINTERRUPTIBLE and io_schedule? Sounds a bit deja vu though;
> maybe we have talked about it before.

It doesn't make any difference to the signalers, as they are kthreads
and shouldn't be interrupted - but it does make a difference to the
reported load, as TASK_UNINTERRUPTIBLE contributes to the system load.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx