Re: [PATCH 07/12] drm/i915/scheduler: Boost priorities for flips

On Thu, Nov 03, 2016 at 04:29:52PM +0000, Tvrtko Ursulin wrote:
> 
> On 02/11/2016 17:50, Chris Wilson wrote:
> >Boost the priority of any rendering required to show the next pageflip
> >as we want to avoid missing the vblank by being delayed by invisible
> >workload. We prioritise avoiding jank and jitter in the GUI over
> >starving background tasks.
> >
> >Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> >---
> > drivers/gpu/drm/i915/i915_drv.h      |  5 ++++
> > drivers/gpu/drm/i915/i915_gem.c      | 50 ++++++++++++++++++++++++++++++++++++
> > drivers/gpu/drm/i915/intel_display.c |  2 ++
> > 3 files changed, 57 insertions(+)
> >
> >diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> >index 61fee0b0c302..738ec44fe6af 100644
> >--- a/drivers/gpu/drm/i915/i915_drv.h
> >+++ b/drivers/gpu/drm/i915/i915_drv.h
> >@@ -3416,6 +3416,11 @@ int i915_gem_object_wait(struct drm_i915_gem_object *obj,
> > 			 unsigned int flags,
> > 			 long timeout,
> > 			 struct intel_rps_client *rps);
> >+int i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
> >+				  unsigned int flags,
> >+				  int priority);
> >+#define I915_PRIORITY_DISPLAY I915_PRIORITY_MAX
> >+
> > int __must_check
> > i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj,
> > 				  bool write);
> >diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> >index 4697848ecfd9..4287c51fb461 100644
> >--- a/drivers/gpu/drm/i915/i915_gem.c
> >+++ b/drivers/gpu/drm/i915/i915_gem.c
> >@@ -433,6 +433,56 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
> > 	return timeout;
> > }
> >
> >+static void fence_set_priority(struct dma_fence *fence, int prio)
> >+{
> >+	struct drm_i915_gem_request *rq;
> >+	struct intel_engine_cs *engine;
> >+
> >+	if (!dma_fence_is_i915(fence))
> >+		return;
> >+
> >+	rq = to_request(fence);
> >+	engine = rq->engine;
> >+	if (!engine->schedule)
> >+		return;
> >+
> >+	engine->schedule(rq, prio);
> 
> This will be inefficient with reservation objects containing
> multiple i915 fences.

We recursively walk a list of lists; inefficiency is its middle name!

> Instead you could update just a single priority and then rebalance
> the tree at the end.
> 
> Not sure how much work that would be. Perhaps it can be improved
> later on. Or we don't expect this scenario to occur here?

I don't think it will be so bad as to be noticeable. The principle is
that much of the dependency tree of the multiple fences is likely the
same, so we end up completing the walk much earlier. What is more
worrying is that we can get non-i915 fences and not bump the i915
dependencies beneath them. The most likely of these is fence-array.
For fence-array at least we can extend fence_set_priority() (but
nouveau-i915 interdependencies will be left behind).

Anyway, I don't think calling engine->schedule() on each request is as
bad as the ->schedule() implementation itself.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



