In all likelihood, the priority and node are already in the CPU cache, and
by checking them first we can avoid having to chase *request->hwsp for the
current breadcrumb.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_scheduler.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index f32d0ee6d58c..5581c5004ff0 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -200,10 +200,10 @@ static void __i915_schedule(struct i915_sched_node *node,
 	lockdep_assert_held(&schedule_lock);
 	GEM_BUG_ON(prio == I915_PRIORITY_INVALID);
 
-	if (node_signaled(node))
+	if (prio <= READ_ONCE(node->attr.priority))
 		return;
 
-	if (prio <= READ_ONCE(node->attr.priority))
+	if (node_signaled(node))
 		return;
 
 	stack.signaler = node;
-- 
2.20.1
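
As an aside for readers following the reasoning rather than the diff: the
reordering is an instance of doing the cheap early-out first, i.e. comparing
against a field the caller has most likely just touched (node->attr.priority)
before the check that has to dereference another cache line (*request->hwsp
via node_signaled()). Below is a rough standalone sketch of that pattern; the
types and names (fake_request, fake_node, needs_reschedule) are hypothetical
stand-ins for illustration only, not the real i915 definitions.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the i915 structures, not the real layout. */
struct fake_request {
	const unsigned int *hwsp_seqno;	/* lives in a separate cache line */
	unsigned int fence_seqno;
};

struct fake_node {
	int priority;			/* usually hot: the caller just read it */
	struct fake_request *request;
};

/* Pointer-chasing check: has to load *request->hwsp_seqno. */
static bool node_signaled(const struct fake_node *node)
{
	return (int)(*node->request->hwsp_seqno -
		     node->request->fence_seqno) >= 0;
}

/*
 * Early-outs ordered as in the patch: the priority compare only touches
 * memory that is most likely already cached, so do it first and skip the
 * hwsp dereference entirely when the new priority is not a bump.
 */
static bool needs_reschedule(const struct fake_node *node, int prio)
{
	if (prio <= node->priority)	/* cheap, likely cache-hot */
		return false;

	if (node_signaled(node))	/* expensive: chases request->hwsp */
		return false;

	return true;
}

int main(void)
{
	unsigned int hwsp = 10;
	struct fake_request rq = { .hwsp_seqno = &hwsp, .fence_seqno = 20 };
	struct fake_node node = { .priority = 2, .request = &rq };

	/* No bump: returns 0 without ever touching the hwsp line. */
	printf("bump to 1: %d\n", needs_reschedule(&node, 1));
	/* Real bump and not yet signaled: returns 1. */
	printf("bump to 5: %d\n", needs_reschedule(&node, 5));
	return 0;
}

Compiled and run, the sketch prints 0 for the no-bump case (node_signaled()
is never called, so the hwsp line is never loaded) and 1 once the priority
actually rises, which is the behaviour the swapped checks preserve.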