On Thu, Jul 24, 2014 at 05:04:39PM +0100, Thomas Daniel wrote:
> Handle all context status events in the context status buffer on every
> context switch interrupt. We only remove work from the execlist queue
> after a context status buffer reports that it has completed, and we only
> attempt to schedule new contexts on interrupt when a previously submitted
> context completes (unless no contexts are queued, which means the GPU is
> free).
>
> We cannot call intel_runtime_pm_get() in an interrupt (or with a spinlock
> grabbed, FWIW), because it might sleep, which is not a nice thing to do.
> Instead, do the runtime_pm get/put together with the create/destroy of the
> request, and handle the forcewake get/put directly.
>
> Signed-off-by: Thomas Daniel <thomas.daniel@xxxxxxxxx>
>
> v2: Unreferencing the context when we are freeing the request might free
> the backing bo, which requires the struct_mutex to be grabbed, so defer
> unreferencing and freeing to a bottom half.
>
> v3:
> - Ack the interrupt immediately, before trying to handle it (fix for
>   missing interrupts by Bob Beckett <robert.beckett@xxxxxxxxx>).
> - Update the Context Status Buffer Read Pointer, just in case (spotted
>   by Damien Lespiau).
>
> v4: New namespace and multiple rebase changes.
>
> v5: Squash with "drm/i915/bdw: Do not call intel_runtime_pm_get() in an
> interrupt", as suggested by Daniel.
>
> Signed-off-by: Oscar Mateo <oscar.mateo@xxxxxxxxx>

One more ...

> +void intel_execlists_handle_ctx_events(struct intel_engine_cs *ring)

Please rename this to intel_execlist_ctx_events_irq_handler or similar, for
consistency with all the other irq handler functions, in a follow-up patch.
That kind of consistency helps a lot when reviewing the locking of irq-save
spinlocks.
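To illustrate the point (purely a sketch, not part of the patch: the first
function name below is just the suggested convention applied, and the bodies
are elided), the suffix makes it obvious at a glance which side of
execlist_lock runs in hard-irq context and therefore gets away with a plain
spin_lock(), versus the process-context path that needs the irq-save variant:

	/*
	 * Called from the interrupt handler, i.e. hard-irq context:
	 * interrupts are already disabled on this CPU, so a plain
	 * spin_lock() on execlist_lock is sufficient.
	 */
	static void intel_execlist_ctx_events_irq_handler(struct intel_engine_cs *ring)
	{
		spin_lock(&ring->execlist_lock);
		/* ... drain the context status buffer, unqueue contexts ... */
		spin_unlock(&ring->execlist_lock);
	}

	/*
	 * Called from process context and can race with the irq handler
	 * above, so it must disable interrupts around the lock.
	 */
	static int execlists_context_queue(struct intel_engine_cs *ring,
					   struct intel_context *to,
					   u32 tail)
	{
		unsigned long flags;

		spin_lock_irqsave(&ring->execlist_lock, flags);
		/* ... add the request to ring->execlist_queue ... */
		spin_unlock_irqrestore(&ring->execlist_lock, flags);

		return 0;
	}

With a consistent *_irq_handler suffix on everything that takes the lock from
irq context, that split is easy to audit.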
-Daniel

> +{
> +	struct drm_i915_private *dev_priv = ring->dev->dev_private;
> +	u32 status_pointer;
> +	u8 read_pointer;
> +	u8 write_pointer;
> +	u32 status;
> +	u32 status_id;
> +	u32 submit_contexts = 0;
> +
> +	status_pointer = I915_READ(RING_CONTEXT_STATUS_PTR(ring));
> +
> +	read_pointer = ring->next_context_status_buffer;
> +	write_pointer = status_pointer & 0x07;
> +	if (read_pointer > write_pointer)
> +		write_pointer += 6;
> +
> +	spin_lock(&ring->execlist_lock);
> +
> +	while (read_pointer < write_pointer) {
> +		read_pointer++;
> +		status = I915_READ(RING_CONTEXT_STATUS_BUF(ring) +
> +				(read_pointer % 6) * 8);
> +		status_id = I915_READ(RING_CONTEXT_STATUS_BUF(ring) +
> +				(read_pointer % 6) * 8 + 4);
> +
> +		if (status & GEN8_CTX_STATUS_COMPLETE) {
> +			if (execlists_check_remove_request(ring, status_id))
> +				submit_contexts++;
> +		}
> +	}
> +
> +	if (submit_contexts != 0)
> +		execlists_context_unqueue(ring);
> +
> +	spin_unlock(&ring->execlist_lock);
> +
> +	WARN(submit_contexts > 2, "More than two context complete events?\n");
> +	ring->next_context_status_buffer = write_pointer % 6;
> +
> +	I915_WRITE(RING_CONTEXT_STATUS_PTR(ring),
> +		   ((u32)ring->next_context_status_buffer & 0x07) << 8);
> +}
> +
> +static void execlists_free_request_task(struct work_struct *work)
> +{
> +	struct intel_ctx_submit_request *req =
> +		container_of(work, struct intel_ctx_submit_request, work);
> +	struct drm_device *dev = req->ring->dev;
> +	struct drm_i915_private *dev_priv = dev->dev_private;
> +
> +	intel_runtime_pm_put(dev_priv);
> +
> +	mutex_lock(&dev->struct_mutex);
> +	i915_gem_context_unreference(req->ctx);
> +	mutex_unlock(&dev->struct_mutex);
> +
> +	kfree(req);
> +}
> +
>  static int execlists_context_queue(struct intel_engine_cs *ring,
>  				   struct intel_context *to,
>  				   u32 tail)
> @@ -261,6 +375,8 @@ static int execlists_context_queue(struct intel_engine_cs *ring,
>  	i915_gem_context_reference(req->ctx);
>  	req->ring = ring;
>  	req->tail = tail;
> +	INIT_WORK(&req->work, execlists_free_request_task);
> +	intel_runtime_pm_get(dev_priv);
>  
>  	spin_lock_irqsave(&ring->execlist_lock, flags);
>  
> @@ -908,6 +1024,7 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin
>  
>  	INIT_LIST_HEAD(&ring->execlist_queue);
>  	spin_lock_init(&ring->execlist_lock);
> +	ring->next_context_status_buffer = 0;
>  
>  	ret = intel_lr_context_deferred_create(dctx, ring);
>  	if (ret)
> diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
> index 14492a9..2e8929f 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.h
> +++ b/drivers/gpu/drm/i915/intel_lrc.h
> @@ -66,6 +66,9 @@ struct intel_ctx_submit_request {
>  	u32 tail;
>  
>  	struct list_head execlist_link;
> +	struct work_struct work;
>  };
>  
> +void intel_execlists_handle_ctx_events(struct intel_engine_cs *ring);
> +
>  #endif /* _INTEL_LRC_H_ */
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index 6358823..905d1ba 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -225,6 +225,7 @@ struct intel_engine_cs {
>  	/* Execlists */
>  	spinlock_t execlist_lock;
>  	struct list_head execlist_queue;
> +	u8 next_context_status_buffer;
>  	u32 irq_keep_mask; /* bitmask for interrupts that should not be masked */
>  	int (*emit_request)(struct intel_ringbuffer *ringbuf);
>  	int (*emit_flush)(struct intel_ringbuffer *ringbuf,
> -- 
> 1.7.9.5
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx