Re: [CI 14/25] drm/i915: Manually unwind after a failed request allocation

On Thu, Apr 28, 2016 at 03:43:52PM +0100, Dave Gordon wrote:
> On 28/04/16 09:56, Chris Wilson wrote:
> >In the next patches, we want to move the work out of freeing the request
> >and into its retirement (so that we can free the request without
> >requiring the struct_mutex). This means that we can no longer rely on
> >unreferencing the request to completely tear down the request, and so
> >we need to manually unwind the failed allocation. In doing so, we
> >reorder the allocation to make the unwind simple (and to ensure that we
> >don't try to unwind a partial request that may have modified global
> >state), and so we end up pushing the initial preallocation down into
> >the engine request initialisation functions, where we have the
> >requisite control over the state of the request.
> >
> >Moving the initial preallocation into the engine is less than ideal: it
> >moves logic to handle a specific problem with request handling out of
> >the common code. On the other hand, it does allow those backends
> >significantly more flexibility in performing their allocations.
> >
> >Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> >Cc: Mika Kuoppala <mika.kuoppala@xxxxxxxxxxxxxxx>
> >Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxxxxxxxx>
> >Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
> >Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxxxxxxxx>
> >---
> >  drivers/gpu/drm/i915/i915_gem.c         | 28 +++++++++-------------------
> >  drivers/gpu/drm/i915/intel_lrc.c        | 16 ++++++++++++++--
> >  drivers/gpu/drm/i915/intel_ringbuffer.c |  2 +-
> >  3 files changed, 24 insertions(+), 22 deletions(-)
> >
> >diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> >index dd628eae0592..d7ff5e79182f 100644
> >--- a/drivers/gpu/drm/i915/i915_gem.c
> >+++ b/drivers/gpu/drm/i915/i915_gem.c
> >@@ -2766,15 +2766,6 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
> >  	req->ctx  = ctx;
> >  	i915_gem_context_reference(req->ctx);
> >
> >-	if (i915.enable_execlists)
> >-		ret = intel_logical_ring_alloc_request_extras(req);
> >-	else
> >-		ret = intel_ring_alloc_request_extras(req);
> >-	if (ret) {
> >-		i915_gem_context_unreference(req->ctx);
> >-		goto err;
> >-	}
> >-
> >  	/*
> >  	 * Reserve space in the ring buffer for all the commands required to
> >  	 * eventually emit this request. This is to guarantee that the
> >@@ -2783,20 +2774,19 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
> >  	 * away, e.g. because a GPU scheduler has deferred it.
> >  	 */
> >  	req->reserved_space = MIN_SPACE_FOR_ADD_REQUEST;
> 
> If you move the begin() into the mechanism-specific code, you should
> logically move the above assignment with it; after all, the eventual
> add_request will itself call back into submission-mechanism-dependent
> code, which may (will!) have to emit different amounts of commands into
> the ring depending on how the different mechanisms operate.

That's where we are going with this. We need 336 bytes for legacy and 128
bytes for execlists (today). Moving this from a common value to an
engine-specific one makes sense, especially as we are looking to make it
more dynamic in the future.
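
Roughly, the engine-specific allocation then takes the shape sketched
below (illustrative only: engine_pin_context()/engine_unpin_context() and
min_request_space are placeholder names, not the actual i915 symbols):

static int engine_alloc_request_extras(struct drm_i915_gem_request *req)
{
	int ret;

	/* engine-specific reservation, e.g. 336 (legacy) vs 128 (execlists) */
	req->reserved_space = req->engine->min_request_space;

	ret = engine_pin_context(req);	/* placeholder: pin hw state for this ctx */
	if (ret)
		return ret;		/* nothing touched yet, nothing to unwind */

	ret = intel_ring_begin(req, 0);	/* wait for the reserved ring space */
	if (ret)
		goto err_unpin;		/* manual unwind instead of free-on-unref */

	return 0;

err_unpin:
	engine_unpin_context(req);	/* placeholder: undo the pin above */
	return ret;
}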
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



