On Thu, Jun 3, 2021 at 4:09 PM Matthew Brost <matthew.brost@xxxxxxxxx> wrote:
>
> Rather than touching schedule state in the generic PM code, reset the
> priolist allocation when empty in the submission code. Add a wrapper
> function to do this and update the backends to call it in the correct
> place.

Seems reasonable, I think.  I'm by no means an expert but

Reviewed-by: Jason Ekstrand <jason@xxxxxxxxxxxxxx>

anyway.

My one suggestion would be to tweak the commit message to talk about
the functional change rather than the helper.  Something like

    drm/i915: Reset sched_engine.no_priolist immediately after dequeue

Typically patches which say "add a helper function" don't come with a
non-trivial functional change.
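If it helps to see that functional change in isolation, here is a rough,
self-contained sketch of the idea.  The toy_* names below are
illustrative stand-ins, not the actual i915 structures or functions; the
point, as I understand it, is just that the flag which lets the submit
path skip per-priority list bookkeeping is now cleared as part of
dequeue, under the engine lock, once the queue goes empty, instead of
waiting for engine park.

/*
 * Illustrative sketch only: simplified stand-ins for the scheduling
 * state, not the real driver code.
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct toy_sched_engine {
	bool no_priolist;	/* single-priority fast path in use? */
	size_t queue_depth;	/* stand-in for the priority rbtree */
};

static bool toy_sched_engine_is_empty(const struct toy_sched_engine *se)
{
	return se->queue_depth == 0;
}

/* Same shape as the new wrapper: only reset once nothing is queued. */
static void toy_sched_engine_reset_on_empty(struct toy_sched_engine *se)
{
	if (toy_sched_engine_is_empty(se))
		se->no_priolist = false;
}

/* A backend's dequeue path calls the reset before dropping its lock. */
static void toy_dequeue(struct toy_sched_engine *se)
{
	while (se->queue_depth)		/* submit everything that's ready */
		se->queue_depth--;

	toy_sched_engine_reset_on_empty(se);
}

int main(void)
{
	struct toy_sched_engine se = { .no_priolist = true, .queue_depth = 3 };

	toy_dequeue(&se);
	assert(se.queue_depth == 0);
	assert(!se.no_priolist);	/* cleared at dequeue time, not at park */
	return 0;
}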
--Jason

> Signed-off-by: Matthew Brost <matthew.brost@xxxxxxxxx>
> ---
>  drivers/gpu/drm/i915/gt/intel_engine_pm.c            | 2 --
>  drivers/gpu/drm/i915/gt/intel_execlists_submission.c | 1 +
>  drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c    | 2 ++
>  drivers/gpu/drm/i915/i915_scheduler.h                | 7 +++++++
>  4 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> index b6a00dd72808..1f07ac4e0672 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> @@ -280,8 +280,6 @@ static int __engine_park(struct intel_wakeref *wf)
>  	if (engine->park)
>  		engine->park(engine);
>
> -	engine->sched_engine->no_priolist = false;
> -
>  	/* While gt calls i915_vma_parked(), we have to break the lock cycle */
>  	intel_gt_pm_put_async(engine->gt);
>  	return 0;
> diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> index 2326a73af6d3..609753b5401a 100644
> --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> @@ -1553,6 +1553,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
>  	 * interrupt for secondary ports).
>  	 */
>  	sched_engine->queue_priority_hint = queue_prio(sched_engine);
> +	i915_sched_engine_reset_on_empty(sched_engine);
>  	spin_unlock(&engine->active.lock);
>
>  	/*
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> index 5d00f2e3c1de..f4a6fbfaf82e 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> @@ -263,6 +263,8 @@ static void guc_submission_tasklet(struct tasklet_struct *t)
>
>  	__guc_dequeue(engine);
>
> +	i915_sched_engine_reset_on_empty(engine->sched_engine);
> +
>  	spin_unlock_irqrestore(&engine->active.lock, flags);
>  }
>
> diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
> index 5bec7b3b8456..713c38c99de9 100644
> --- a/drivers/gpu/drm/i915/i915_scheduler.h
> +++ b/drivers/gpu/drm/i915/i915_scheduler.h
> @@ -72,6 +72,13 @@ i915_sched_engine_is_empty(struct i915_sched_engine *sched_engine)
>  	return RB_EMPTY_ROOT(&sched_engine->queue.rb_root);
>  }
>
> +static inline void
> +i915_sched_engine_reset_on_empty(struct i915_sched_engine *sched_engine)
> +{
> +	if (i915_sched_engine_is_empty(sched_engine))
> +		sched_engine->no_priolist = false;
> +}
> +
>  void i915_request_show_with_schedule(struct drm_printer *m,
> 				      const struct i915_request *rq,
> 				      const char *prefix,
> --
> 2.28.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx