[RFC 30/37] drm/i915/preempt: don't allow nonbatch ctx init when the scheduler is busy

From: Dave Gordon <david.s.gordon@xxxxxxxxx>

If the scheduler is busy (e.g. processing a preemption) it will need to
be able to acquire the struct_mutex, so we can't allow untracked
requests to bypass the scheduler and go directly to the hardware (much
confusion would result). Since untracked requests are used only for the
initialisation of logical contexts, we can avoid the problem by forcing
any thread that tries to initialise a context at such an unfortunate
time to drop the mutex and retry later.
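
As a minimal sketch of the contract this creates (hypothetical caller
loop, not part of this patch): because the mutex is dropped and
reacquired before -EAGAIN is returned, the caller must assume anything
it derived under the lock is stale and retry the whole operation:

	int ret;

	do {
		/* May drop and retake dev->struct_mutex internally,
		 * returning -EAGAIN if the scheduler was busy.
		 */
		ret = intel_lr_context_deferred_alloc(ctx, ring);
	} while (ret == -EAGAIN);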

For: VIZ-2021
Signed-off-by: Dave Gordon <david.s.gordon@xxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_scheduler.c | 6 ++++++
 drivers/gpu/drm/i915/i915_scheduler.h | 1 +
 drivers/gpu/drm/i915/intel_lrc.c      | 8 ++++++++
 3 files changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index 81ac88b..a037ba2 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -1778,6 +1778,12 @@ bool i915_scheduler_is_ring_preempting(struct intel_engine_cs *ring)
 	return false;
 }
 
+bool i915_scheduler_is_ring_busy(struct intel_engine_cs *ring)
+{
+	/* Currently only pre-emption ties up the scheduler. */
+	return i915_scheduler_is_ring_preempting(ring);
+}
+
 /*
  * Used by TDR to distinguish hung rings (not moving but with work to do)
  * from idle rings (not moving because there is nothing to do).
diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
index 569215a..d5f4af3 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.h
+++ b/drivers/gpu/drm/i915/i915_scheduler.h
@@ -195,6 +195,7 @@ bool        i915_scheduler_notify_request(struct drm_i915_gem_request *req);
 void        i915_scheduler_wakeup(struct drm_device *dev);
 bool        i915_scheduler_is_ring_flying(struct intel_engine_cs *ring);
 bool        i915_scheduler_is_ring_preempting(struct intel_engine_cs *ring);
+bool        i915_scheduler_is_ring_busy(struct intel_engine_cs *ring);
 void        i915_gem_scheduler_work_handler(struct work_struct *work);
 int         i915_scheduler_flush(struct intel_engine_cs *ring, bool is_locked);
 int         i915_scheduler_flush_stamp(struct intel_engine_cs *ring,
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index b7d9fbd..1ccb50d 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -2582,6 +2582,14 @@ int intel_lr_context_deferred_alloc(struct intel_context *ctx,
 	WARN_ON(ctx->legacy_hw_ctx.rcs_state != NULL);
 	WARN_ON(ctx->engine[ring->id].state);
 
+	/* Don't submit non-scheduler requests while the scheduler is busy */
+	if (i915_scheduler_is_ring_busy(ring)) {
+		mutex_unlock(&dev->struct_mutex);
+		msleep(1);
+		mutex_lock(&dev->struct_mutex);
+		return -EAGAIN;
+	}
+
 	intel_runtime_pm_get(dev->dev_private);
 
 	context_size = round_up(intel_lr_context_size(ring), 4096);
-- 
1.9.1
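
A design note on the msleep(1) poll above: an event-driven alternative
(a sketch only; 'preempt_wq' is an assumed waitqueue that the scheduler
would wake once preemption completes, not a field in the driver today)
could avoid the fixed 1ms sleep:

	mutex_unlock(&dev->struct_mutex);
	/* Sleep until the scheduler signals completion, or time out. */
	wait_event_timeout(scheduler->preempt_wq,
			   !i915_scheduler_is_ring_busy(ring),
			   msecs_to_jiffies(10));
	mutex_lock(&dev->struct_mutex);
	return -EAGAIN;

Either way the caller still sees -EAGAIN and retries; the waitqueue
only changes how long the thread sleeps before retrying.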
