With the previous patch the state transition handling of the reset code
itself is now (hopefully) race-free and solid. But that still leaves out
everyone else - with the various lock-free wait paths we have there's the
possibility that the reset happens between the point where we read the
seqno we should wait on and the actual wait. And if __wait_seqno then
never sees the RESET_IN_PROGRESS state, we'll happily wait for a seqno
which will in all likelihood never signal.

In practice this is not a big problem since the X server gets constantly
interrupted, and can then submit more work (hopefully) to unblock
everyone else: As soon as a new seqno write lands, all waiters will
unblock. But running the i-g-t reset testcase ZZ_hangman can expose this
race, especially on slower hw with fewer cpu cores.

Now, looking forward to ARB_robustness and friends, that's not the best
possible behaviour, hence this patch adds a reset_counter to be able to
detect any reset, even if a given thread never observed the in-progress
state.

The important part is to correctly order things:
- The write side needs to increment the counter after any seqno gets
  reset. Hence we need to do that at the end of the reset work, and
  again wake everyone up. We also need to place a barrier in between any
  possible seqno changes and the counter increment.
- On the read side we need to ensure that no reset can sneak in and
  invalidate the seqno. In many cases the dev->struct_mutex ensures
  consistency and the unlocking provides the required memory barrier. In
  some completely lockless cases we need to add a barrier and be careful
  with the ordering - see the sketch right below.
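Roughly, the pairing the two sides have to uphold looks like this. Note
that this is an illustrative sketch only and not the actual code:
reset_gem_state(), read_current_seqno() and wait_for_seqno_irq() are
made-up placeholders for the real reset and wait logic.

        /* write side: the gpu reset work, after the gem state is reset */
        reset_gem_state();                 /* seqnos start over from here */
        smp_mb__before_atomic_inc();       /* order seqno writes vs. increment */
        atomic_inc(&error->reset_counter);
        wake_up_all(&ring->irq_queue);     /* the second wake-up, see below */

        /* read side: a completely lockless waiter */
        reset_counter = atomic_read(&error->reset_counter);
        smp_rmb();                         /* or the unlock of the seqno's lock */
        seqno = read_current_seqno(ring);  /* placeholder for the seqno read */
        wait_for_seqno_irq(ring, seqno);
        if (reset_counter != atomic_read(&error->reset_counter))
                return -EAGAIN;            /* a reset sneaked in, restart the ioctl */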
The end result of all this is that we need to wake up everyone twice in
a reset operation:
- First, before the reset starts, to get any lockholders off the locks,
  so that the reset can proceed.
- Second, after the reset is completed, to allow waiters to properly and
  reliably detect the reset condition and bail out.

I admit that this entire reset_counter thing smells a bit like overkill,
but I think it's justified since it makes it really explicit what the
bail-out condition is. And we need a reset counter anyway to implement
ARB_robustness, and imo with finer-grained locking on the horizon this
is the most resilient scheme I could think of.

Signed-off-by: Daniel Vetter <daniel.vetter at ffwll.ch>
---
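For the paths below that do hold a lock, the calling convention boils
down to the following sketch. Again this is illustrative only:
read_obj_seqno() is a made-up placeholder for the seqno read done under
the lock, and the real callers' error handling is trimmed.

        /* the unlock is the one-sided barrier that orders the two reads */
        seqno = read_obj_seqno(obj);       /* read while holding struct_mutex */
        reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
        mutex_unlock(&dev->struct_mutex);

        ret = __wait_seqno(ring, seqno, reset_counter, true, NULL);
        if (ret == -EAGAIN)
                return ret;                /* a reset happened, restart the ioctl */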
 drivers/gpu/drm/i915/i915_drv.h | 13 +++++++++++++
 drivers/gpu/drm/i915/i915_gem.c | 42 +++++++++++++++++++++++++++++++++++------
 drivers/gpu/drm/i915/i915_irq.c | 16 ++++++++++++++++
 3 files changed, 65 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 423541b..b5ad2a8 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -739,6 +739,19 @@ struct i915_gpu_error {
 #define I915_WEDGED 2
 
         /**
+         * Reset counter
+         *
+         * This is used by the wait_seqno code to detect, race-free, that a reset
+         * event happened and that it needs to restart the entire ioctl (since
+         * most likely the seqno it waited for won't ever signal anytime soon).
+         *
+         * This is important for lock-free wait paths, where no contended lock
+         * naturally enforces the correct ordering between the bail-out of the
+         * waiter and the gpu reset work code.
+         */
+        atomic_t reset_counter;
+
+        /**
          * Waitqueue to signal when the reset has completed. Used by clients
          * that wait for dev_priv->mm.wedged to settle.
          */
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3a1f72c..73ab475 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -94,7 +94,7 @@ i915_gem_wait_for_error(struct i915_gpu_error *error)
         if (!atomic_read(&error->wedged))
                 return 0;
 
-#define EXIT_COND (atomic_read(&error->reset_queue) != I915_RESET_IN_PROGRESS)
+#define EXIT_COND (atomic_read(&error->reset_counter) != I915_RESET_IN_PROGRESS)
         /*
          * Only wait 10 seconds for the gpu reset to complete to avoid hanging
          * userspace. If it takes that long something really bad is going on and
@@ -975,13 +975,22 @@ i915_gem_check_olr(struct intel_ring_buffer *ring, u32 seqno)
  * __wait_seqno - wait until execution of seqno has finished
  * @ring: the ring expected to report seqno
  * @seqno: duh!
+ * @reset_counter: reset sequence associated with the given seqno
  * @interruptible: do an interruptible wait (normally yes)
  * @timeout: in - how long to wait (NULL forever); out - how much time remaining
  *
+ * Note: It is of utmost importance that the passed in seqno and reset_counter
+ * values have been read by the caller in an smp safe manner. Where read-side
+ * locks are involved, it is sufficient to read the reset_counter before
+ * unlocking the lock that protects the seqno. For lockless tricks, the
+ * reset_counter _must_ be read before, and an appropriate smp_rmb must be
+ * inserted.
+ *
  * Returns 0 if the seqno was found within the alloted time. Else returns the
  * errno with remaining time filled in timeout argument.
  */
 static int __wait_seqno(struct intel_ring_buffer *ring, u32 seqno,
+                        unsigned reset_counter,
                         bool interruptible, struct timespec *timeout)
 {
         drm_i915_private_t *dev_priv = ring->dev->dev_private;
@@ -1011,7 +1020,8 @@ static int __wait_seqno(struct intel_ring_buffer *ring, u32 seqno,
 
 #define EXIT_COND \
         (i915_seqno_passed(ring->get_seqno(ring, false), seqno) || \
-        atomic_read(&dev_priv->gpu_error.wedged))
+        atomic_read(&dev_priv->gpu_error.wedged) || \
+        (reset_counter != atomic_read(&dev_priv->gpu_error.reset_counter)))
         do {
                 if (interruptible)
                         end = wait_event_interruptible_timeout(ring->irq_queue,
@@ -1021,6 +1031,11 @@ static int __wait_seqno(struct intel_ring_buffer *ring, u32 seqno,
                         end = wait_event_timeout(ring->irq_queue, EXIT_COND,
                                                  timeout_jiffies);
 
+                /* We need to check whether any gpu reset happened in between
+                 * the caller grabbing the seqno and now. */
+                if (reset_counter != atomic_read(&dev_priv->gpu_error.reset_counter))
+                        end = -EAGAIN;
+
                 ret = i915_gem_check_wedge(&dev_priv->gpu_error, interruptible);
                 if (ret)
                         end = ret;
@@ -1075,7 +1090,9 @@ i915_wait_seqno(struct intel_ring_buffer *ring, uint32_t seqno)
         if (ret)
                 return ret;
 
-        return __wait_seqno(ring, seqno, interruptible, NULL);
+        return __wait_seqno(ring, seqno,
+                            atomic_read(&dev_priv->gpu_error.reset_counter),
+                            interruptible, NULL);
 }
 
 /**
@@ -1122,6 +1139,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
         struct drm_device *dev = obj->base.dev;
         struct drm_i915_private *dev_priv = dev->dev_private;
         struct intel_ring_buffer *ring = obj->ring;
+        unsigned reset_counter;
         u32 seqno;
         int ret;
 
@@ -1140,8 +1158,9 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
         if (ret)
                 return ret;
 
+        reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
         mutex_unlock(&dev->struct_mutex);
-        ret = __wait_seqno(ring, seqno, true, NULL);
+        ret = __wait_seqno(ring, seqno, reset_counter, true, NULL);
         mutex_lock(&dev->struct_mutex);
 
         i915_gem_retire_requests_ring(ring);
@@ -2273,10 +2292,12 @@ i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
 int
 i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 {
+        drm_i915_private_t *dev_priv = dev->dev_private;
         struct drm_i915_gem_wait *args = data;
         struct drm_i915_gem_object *obj;
         struct intel_ring_buffer *ring = NULL;
         struct timespec timeout_stack, *timeout = NULL;
+        unsigned reset_counter;
         u32 seqno = 0;
         int ret = 0;
 
@@ -2317,9 +2338,10 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
         }
 
         drm_gem_object_unreference(&obj->base);
+        reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
         mutex_unlock(&dev->struct_mutex);
 
-        ret = __wait_seqno(ring, seqno, true, timeout);
+        ret = __wait_seqno(ring, seqno, reset_counter, true, timeout);
         if (timeout) {
                 WARN_ON(!timespec_valid(timeout));
                 args->timeout_ns = timespec_to_ns(timeout);
@@ -3384,6 +3406,7 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
         unsigned long recent_enough = jiffies - msecs_to_jiffies(20);
         struct drm_i915_gem_request *request;
         struct intel_ring_buffer *ring = NULL;
+        unsigned reset_counter;
         u32 seqno = 0;
         int ret;
 
@@ -3395,6 +3418,13 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
         if (ret)
                 return ret;
 
+        reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
+        /*
+         * We must read the reset_counter before reading the seqno - locks are
+         * only one-sided barriers!
+         */
+        smp_rmb();
+
         spin_lock(&file_priv->mm.lock);
         list_for_each_entry(request, &file_priv->mm.request_list, client_list) {
                 if (time_after_eq(request->emitted_jiffies, recent_enough))
@@ -3408,7 +3438,7 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
         if (seqno == 0)
                 return 0;
 
-        ret = __wait_seqno(ring, seqno, true, NULL);
+        ret = __wait_seqno(ring, seqno, reset_counter, true, NULL);
         if (ret == 0)
                 queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work, 0);
 
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 73d46f4..87f5031 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -843,9 +843,11 @@ static void i915_error_work_func(struct work_struct *work)
         drm_i915_private_t *dev_priv = container_of(error, drm_i915_private_t,
                                                     gpu_error);
         struct drm_device *dev = dev_priv->dev;
+        struct intel_ring_buffer *ring;
         char *error_event[] = { "ERROR=1", NULL };
         char *reset_event[] = { "RESET=1", NULL };
         char *reset_done_event[] = { "ERROR=0", NULL };
+        int i;
 
         kobject_uevent_env(&dev->primary->kdev.kobj, KOBJ_CHANGE, error_event);
 
@@ -861,6 +863,20 @@ static void i915_error_work_func(struct work_struct *work)
                                    I915_WEDGED);
                 }
 
+                /*
+                 * After all the gem state is reset, increment the reset counter
+                 * and wake up everyone waiting for the reset to complete.
+                 *
+                 * Since unlock operations are a one-sided barrier only, we need
+                 * to insert a barrier here to order any seqno updates before
+                 * the counter increment.
+                 */
+                smp_mb__before_atomic_inc();
+                atomic_inc(&dev_priv->gpu_error.reset_counter);
+
+                for_each_ring(ring, dev_priv, i)
+                        wake_up_all(&ring->irq_queue);
+
                 wake_up_all(&dev_priv->gpu_error.reset_queue);
         }
 }
-- 
1.7.11.4