Re: [PATCH 3/3] drm/i915: Wait for the previous RCU grace period, not request completion

On 12/09/2018 17:40, Chris Wilson wrote:
Under mempressure, our goal is to allow ourselves sufficient time to
reclaim the RCU-protected slabs without overly penalizing our clients.
Currently, we use a 1 jiffy wait if the client is still active as a
means of throttling the allocations, but we can instead wait for the
end of the RCU grace period of the client's previous allocation.

Why did you opt for three patches changing the same code instead of just squashing them into the last one?

Regards,

Tvrtko

Suggested-by: Daniel Vetter <daniel.vetter@xxxxxxxx>
Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
---
  drivers/gpu/drm/i915/i915_request.c | 14 ++++++--------
  drivers/gpu/drm/i915/i915_request.h |  8 ++++++++
  2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 72bcb4ca0c45..a492385b2089 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -732,17 +732,13 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
  	rq = kmem_cache_alloc(i915->requests,
  			      GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
  	if (unlikely(!rq)) {
+		i915_retire_requests(i915);
+
  		/* Ratelimit ourselves to prevent oom from malicious clients */
  		rq = i915_gem_active_raw(&ce->ring->timeline->last_request,
  					 &i915->drm.struct_mutex);
-		if (rq && i915_request_wait(rq,
-					    I915_WAIT_LOCKED |
-					    I915_WAIT_INTERRUPTIBLE,
-					    1) == -EINTR) {
-			ret = -EINTR;
-			goto err_unreserve;
-		}
-		i915_retire_requests(i915);
+		if (rq)
+			cond_synchronize_rcu(rq->rcustate);
 
 		/*
  		 * We've forced the client to stall and catch up with whatever
@@ -762,6 +758,8 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
  		}
  	}
 
+	rq->rcustate = get_state_synchronize_rcu();
+
  	INIT_LIST_HEAD(&rq->active_list);
  	rq->i915 = i915;
  	rq->engine = engine;
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index 9898301ab7ef..7fa94b024968 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -100,6 +100,14 @@ struct i915_request {
  	struct i915_timeline *timeline;
  	struct intel_signal_node signaling;
 
+	/*
+	 * The rcu epoch of when this request was allocated. Used to judiciously
+	 * apply backpressure on future allocations to ensure that under
+	 * mempressure there are sufficient RCU ticks for us to reclaim our
+	 * RCU protected slabs.
+	 */
+	unsigned long rcustate;
+
  	/*
  	 * Fences for the various phases in the request's lifetime.
  	 *
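
As a side note for readers less familiar with the RCU polling API used
here: get_state_synchronize_rcu() returns a cookie identifying the
current grace period, and a later cond_synchronize_rcu(cookie) returns
immediately if a full grace period has already elapsed since the cookie
was taken, otherwise it blocks like synchronize_rcu(). Below is a
minimal sketch of the same backpressure scheme outside i915; the struct
and function names (my_obj, my_obj_alloc) are made up for illustration
and are not part of the patch.

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct my_obj {
	unsigned long rcustate;	/* cookie from get_state_synchronize_rcu() */
};

/*
 * Illustrative only (not the i915 code): record the RCU grace-period
 * cookie at allocation time; on a later allocation failure, wait just
 * long enough for that grace period to end so RCU-freed slabs can be
 * reclaimed, instead of sleeping for a fixed interval.
 */
static struct my_obj *my_obj_alloc(struct kmem_cache *cache,
				   const struct my_obj *prev)
{
	struct my_obj *obj;

	obj = kmem_cache_alloc(cache, GFP_KERNEL | __GFP_NOWARN);
	if (unlikely(!obj)) {
		/*
		 * Backpressure: block only until the grace period that
		 * was current when 'prev' was allocated has elapsed.
		 */
		if (prev)
			cond_synchronize_rcu(prev->rcustate);

		obj = kmem_cache_alloc(cache, GFP_KERNEL);
		if (!obj)
			return NULL;
	}

	/* Record the current RCU epoch for the next caller to wait on. */
	obj->rcustate = get_state_synchronize_rcu();
	return obj;
}

In the patch itself, rq->rcustate records the cookie when the request is
allocated, and cond_synchronize_rcu() is applied to the previous request
on the ring timeline, so a client is only stalled while the grace period
covering its own earlier allocation is still outstanding.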

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



