On Monday 17 November 2014 07:53 PM, Daniel Vetter wrote:
On Tue, Nov 18, 2014 at 12:10:51PM +0530, Deepak S wrote:
On Thursday 13 November 2014 03:58 PM, Thomas Daniel wrote:
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 906b985..f7fa0f7 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -139,8 +139,6 @@
 #define GEN8_LR_CONTEXT_RENDER_SIZE (20 * PAGE_SIZE)
 #define GEN8_LR_CONTEXT_OTHER_SIZE (2 * PAGE_SIZE)
-#define GEN8_LR_CONTEXT_ALIGN 4096
-
 #define RING_EXECLIST_QFULL (1 << 0x2)
 #define RING_EXECLIST1_VALID (1 << 0x3)
 #define RING_EXECLIST0_VALID (1 << 0x4)
@@ -801,9 +799,40 @@ void intel_logical_ring_advance_and_submit(struct intel_ringbuffer *ringbuf)
 	execlists_context_queue(ring, ctx, ringbuf->tail);
 }
+static int intel_lr_context_pin(struct intel_engine_cs *ring,
+		struct intel_context *ctx)
+{
+	struct drm_i915_gem_object *ctx_obj = ctx->engine[ring->id].state;
+	int ret = 0;
+
+	WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
With the pin-specific mutex from the previous patch set removed.
Pardon my ignorance, but I'm completely lost on this review comment here.
Deepak, can you please elaborate what kind of lock, and which exact version
of the previous patch, you mean? I didn't find any locking at all in the
preceding patch here ...
Thanks, Daniel
Hi Daniel,
+static int intel_lr_context_pin(struct intel_engine_cs *ring,
+		struct intel_context *ctx)
+{
+	struct drm_i915_gem_object *ctx_obj = ctx->engine[ring->id].state;
+	int ret = 0;
+
+	mutex_lock(&ctx->engine[ring->id].unpin_lock);
+	if (ctx->engine[ring->id].unpin_count++ == 0) {
+		ret = i915_gem_obj_ggtt_pin(ctx_obj,
+				GEN8_LR_CONTEXT_ALIGN, 0);
+		if (ret)
+			ctx->engine[ring->id].unpin_count = 0;
+	}
+	mutex_unlock(&ctx->engine[ring->id].unpin_lock);
+
+	return ret;
+}
In the previous patch set we had a "mutex_lock(&ctx->engine[ring->id].unpin_lock);".
Since intel_lr_context_pin() is already called under struct_mutex, we don't need the unpin_lock. This was the change in the latest patch set :)
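For reference, a minimal sketch of how the simplified pin path would read with the unpin_lock dropped and struct_mutex doing the serialization instead. This just mirrors the quoted earlier version minus the lock; the unpin_count field and GEN8_LR_CONTEXT_ALIGN constant are taken from that earlier code and may differ in the final patch:

static int intel_lr_context_pin(struct intel_engine_cs *ring,
		struct intel_context *ctx)
{
	struct drm_i915_gem_object *ctx_obj = ctx->engine[ring->id].state;
	int ret = 0;

	/* Callers already hold struct_mutex, so no private lock is
	 * needed to serialize the pin count. */
	WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));

	if (ctx->engine[ring->id].unpin_count++ == 0) {
		ret = i915_gem_obj_ggtt_pin(ctx_obj,
				GEN8_LR_CONTEXT_ALIGN, 0);
		if (ret)
			ctx->engine[ring->id].unpin_count = 0;
	}

	return ret;
}

The upside is one less lock class nested under struct_mutex; the cost is that every pin/unpin caller must now hold struct_mutex, which the WARN_ON documents.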
Thanks
Deepak