Quoting Tvrtko Ursulin (2021-01-07 10:16:57)
> 
> On 06/01/2021 16:08, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2021-01-06 15:57:49)
> 
> [snip]
> 
> >>> @@ -1363,16 +1336,16 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
> >>>  		__unwind_incomplete_requests(engine);
> >>>  
> >>>  		last = NULL;
> >>> -	} else if (need_timeslice(engine, last) &&
> >>> -		   timeslice_expired(execlists, last)) {
> >>> +	} else if (timeslice_expired(engine, last)) {
> >>>  		ENGINE_TRACE(engine,
> >>> -			     "expired last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
> >>> -			     last->fence.context,
> >>> -			     last->fence.seqno,
> >>> -			     last->sched.attr.priority,
> >>> +			     "expired:%s last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
> >>> +			     yesno(timer_expired(&execlists->timer)),
> >>> +			     last->fence.context, last->fence.seqno,
> >>> +			     rq_prio(last),
> >>>  			     execlists->queue_priority_hint,
> >>>  			     yesno(timeslice_yield(execlists, last)));
> >>>  
> >>> +		cancel_timer(&execlists->timer);
> >>
> >> What is this cancel for?
> > 
> > This branch is taken upon yielding the timeslice, but we may not submit
> > a new pair of contexts, leaving the timer active (and marked as
> > expired). Since the timer remains expired, we would continuously loop
> > until a context switch or some other preemption event.
> 
> Sorry, I was looking at the cancel_timer() in process_csb() and ended up
> replying at the wrong spot. The change there seems to be removing the
> single timeslice-related call (set_timeslice) and adding a cancel_timer(),
> whose purpose is also not obvious to me.

Yes, there the cancel_timer() is equivalent to the old set_timeslice().
After processing an event, we assume it is a change in context that merits
a new timeslice. To start a new timeslice, rather than continue the old
one, we remove the existing timer and re-add it to fire at the end of the
new timeslice.
-Chris
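
P.S. To make the timer semantics above concrete, here is a minimal
userspace model. It is a sketch, not the kernel code: the names mirror
the helpers discussed in the thread (set_timer_ms(), timer_expired(),
cancel_timer()), but the implementation and struct model_timer are
assumptions made purely for illustration. The point it demonstrates is
that an armed timer whose deadline has passed keeps reporting "expired"
until it is cancelled or re-armed, which is why the yield branch needs
the cancel and why process_csb() can start a fresh timeslice by
cancel + re-arm.

/*
 * Minimal model, assuming these semantics:
 *  - active:  a deadline has been recorded (expires != 0)
 *  - pending: active and the deadline has not yet passed
 *  - expired: active but no longer pending
 * An expired timer stays expired until cancelled or re-armed.
 */
#include <stdbool.h>
#include <stdio.h>

struct model_timer {
	unsigned long expires; /* 0 means inactive */
};

static unsigned long now; /* stand-in for jiffies */

static void set_timer_ms(struct model_timer *t, unsigned long ms)
{
	t->expires = now + ms; /* arm (or re-arm) the timer */
}

static bool timer_active(const struct model_timer *t)
{
	return t->expires != 0;
}

static bool timer_pending(const struct model_timer *t)
{
	return timer_active(t) && now < t->expires;
}

static bool timer_expired(const struct model_timer *t)
{
	return timer_active(t) && !timer_pending(t);
}

static void cancel_timer(struct model_timer *t)
{
	t->expires = 0; /* clears the expired state as well */
}

int main(void)
{
	struct model_timer t = {0};

	set_timer_ms(&t, 10);
	now += 20; /* let the timeslice elapse */

	/* Without a cancel, every check keeps seeing an expired slice. */
	printf("expired before cancel: %d\n", timer_expired(&t)); /* 1 */
	printf("expired again:         %d\n", timer_expired(&t)); /* 1 */

	cancel_timer(&t);
	printf("expired after cancel:  %d\n", timer_expired(&t)); /* 0 */

	/* process_csb()-style: a new timeslice is cancel + re-arm. */
	set_timer_ms(&t, 10);
	printf("pending after re-arm:  %d\n", timer_pending(&t)); /* 1 */
	return 0;
}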