Quoting Mika Kuoppala (2017-09-29 07:55:45)
> > @@ -533,7 +555,45 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
> >  	spin_lock_irq(&engine->timeline->lock);
> >  	rb = execlists->first;
> >  	GEM_BUG_ON(rb_first(&execlists->queue) != rb);
> > -	while (rb) {
> > +	if (!rb)
> > +		goto unlock;
> > +
> > +	if (last) {
> > +		/*
> > +		 * Don't resubmit or switch until all outstanding
> > +		 * preemptions (lite-restore) are seen. Then we
> > +		 * know the next preemption status we see corresponds
> > +		 * to this ELSP update.
> > +		 */
> > +		if (port_count(&port[0]) > 1)
> > +			goto unlock;
> > +
> > +		if (can_preempt(engine) &&
> > +		    rb_entry(rb, struct i915_priolist, node)->priority >
> > +		    max(last->priotree.priority, 0)) {
> > +			/*
> > +			 * Switch to our empty preempt context so
> > +			 * the state of the GPU is known (idle).
> > +			 */
> > +			inject_preempt_context(engine);
> > +			execlists->preempt = true;
> > +			goto unlock;
> > +		} else {
> > +			if (port_count(&port[1]))
> > +				goto unlock;
>
> I am assuming that this is a check for hw availability and nothing else?

Technically, this check reflects that we use last = port[0] and only
check port[0] for preemption. In theory, it would be possible to
coalesce new requests onto the second port, but it's complicated. (We
would need to track the possible lite-restore on the second port, i.e.
whilst submitting there was a context switch, but we may never see that
preemption event. Then we have the complication with priorities, where
we might be coalescing a high priority request onto the second port,
losing track of it.)
-Chris
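
For readers less familiar with the ELSP layout, here is a minimal,
self-contained userspace sketch of the decision the hunk above makes.
This is not i915 code: the struct names and the dequeue_decision()
helper are invented for illustration. It only models the reasoning in
the reply, i.e. preemption is evaluated solely against the request in
port[0] ("last"), an outstanding lite-restore on port[0] makes us wait,
and anything already occupying port[1] ends coalescing rather than
being re-examined.

#include <stdio.h>

/* Illustrative stand-ins for the driver's types (hypothetical). */
struct fake_request {
	int priority;
};

struct fake_port {
	struct fake_request *request;
	unsigned int count;	/* submissions outstanding on this port */
};

struct fake_execlists {
	struct fake_port port[2];	/* ELSP has two submission ports */
};

enum action { SUBMIT_MORE, WAIT, PREEMPT };

/*
 * Simplified model of the dequeue decision: 'next_prio' is the
 * priority of the first request waiting in the queue.
 */
static enum action dequeue_decision(struct fake_execlists *el, int next_prio)
{
	struct fake_request *last = el->port[0].request;

	if (!last)
		return SUBMIT_MORE;	/* ports idle, fill them */

	/* Outstanding lite-restore: wait until its event is seen. */
	if (el->port[0].count > 1)
		return WAIT;

	/* Preemption only compares against port[0] ("last"). */
	if (next_prio > (last->priority > 0 ? last->priority : 0))
		return PREEMPT;

	/*
	 * Anything already in port[1] stops further coalescing: we
	 * would otherwise have to track a possible lite-restore there
	 * and juggle priorities across both ports, so just wait.
	 */
	if (el->port[1].count)
		return WAIT;

	return SUBMIT_MORE;
}

int main(void)
{
	struct fake_request r0 = { .priority = 0 };
	struct fake_request r1 = { .priority = 0 };
	struct fake_execlists el = {
		.port = { { &r0, 1 }, { &r1, 1 } },
	};

	/* A same-priority request arrives while both ports are busy. */
	printf("action = %d (1 == WAIT)\n", dequeue_decision(&el, 0));
	return 0;
}

The sketch deliberately keeps the same shape as the patch: the early
returns correspond to the goto unlock paths, and the ternary mirrors
max(last->priotree.priority, 0).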