Quoting Chris Wilson (2017-07-17 09:42:35)
> @@ -503,6 +500,49 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
> 		struct i915_priolist *p = rb_entry(rb, typeof(*p), node);
> 		struct drm_i915_gem_request *rq, *rn;
> 
> +		if (once) {
> +			if (port_count(&port[0]) > 1)
> +				goto done;
> +
> +			if (p->priority > max(last->priotree.priority, 0)) {
> +				list_for_each_entry_safe_reverse(rq, rn,
> +								 &engine->timeline->requests,
> +								 link) {
> +					struct i915_priolist *p;
> +
> +					if (i915_gem_request_completed(rq))
> +						break;
> +
> +					__i915_gem_request_unsubmit(rq);
> +					unwind_wa_tail(rq);

Fwiw, this is the flaw in this approach. Between deciding to move the
request back to the execution queue and actually doing so, it may
complete. If we then try to reexecute it, its ring->tail will be less
than RING_HEAD, telling the hw to execute everything after it again.

Michal's approach was to use a preemptive switch to a dummy context;
then, once he knew the hw wasn't executing any of the other requests,
he would unsubmit them and recompute the desired order. I've yet to see
another solution for a cheaper barrier between hw/sw, as otherwise we
must deliberately insert a stall to do preemption. :|
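For reference, the shape of that barrier is roughly this (a sketch
only, not Michal's actual patch; inject_preempt_context() and
wait_for_preempt_ack() are illustrative names for the dummy-context
switch and its completion ack):

static void unwind_incomplete_requests(struct intel_engine_cs *engine)
{
	struct drm_i915_gem_request *rq, *rn;

	/*
	 * Ask the hw to switch to a dummy context. Until it acks, any
	 * request on engine->timeline->requests may still complete, and
	 * unsubmitting a completed request leaves ring->tail behind
	 * RING_HEAD, replaying everything after it.
	 */
	inject_preempt_context(engine);
	wait_for_preempt_ack(engine);

	/*
	 * The hw is now executing none of our requests, so completed()
	 * is stable and we can safely take the rest back for reordering.
	 */
	list_for_each_entry_safe_reverse(rq, rn,
					 &engine->timeline->requests,
					 link) {
		if (i915_gem_request_completed(rq))
			break;

		__i915_gem_request_unsubmit(rq);
		unwind_wa_tail(rq);
		/* reinsert into the priority tree for dequeue to reorder */
	}
}

The wait_for_preempt_ack() there is exactly the stall I'd like to
avoid.
-Chris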