Quoting Tvrtko Ursulin (2019-05-07 13:12:05)
>
> On 03/05/2019 12:52, Chris Wilson wrote:
> > To simplify the next patch, update bump_priority and schedule to accept
> > the internal i915_sched_node directly and not expect a request pointer.
> >
> > add/remove: 0/0 grow/shrink: 2/1 up/down: 8/-15 (-7)
> > Function                                     old     new   delta
> > i915_schedule_bump_priority                  109     113      +4
> > i915_schedule                                 50      54      +4
> > __i915_schedule                              922     907     -15
> >
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/i915_scheduler.c | 33 +++++++++++++++------------
> >  1 file changed, 18 insertions(+), 15 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
> > index 4a95cf2201a7..380cb7343a10 100644
> > --- a/drivers/gpu/drm/i915/i915_scheduler.c
> > +++ b/drivers/gpu/drm/i915/i915_scheduler.c
> > @@ -189,7 +189,7 @@ static void kick_submission(struct intel_engine_cs *engine, int prio)
> >  		tasklet_hi_schedule(&engine->execlists.tasklet);
> >  }
> >
> > -static void __i915_schedule(struct i915_request *rq,
> > +static void __i915_schedule(struct i915_sched_node *rq,
>
> Can you not use rq for sched node, but perhaps node? We use node later on.

I kept rq to keep the patch small and stick to the current semantics. We
could reuse node... That looks like it is semantically clean.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx