In the next few patches, we will want to both copy out of the context
image and write a valid image into a new context. To be completely
safe, we should therefore couple in our domain tracking to ensure that
no stale data remains in unwanted cachelines.

Historically, we omitted the .write=true from the call to
set-to-gtt-domain in i915_switch_context() in order to avoid a stall
between every request, as we would otherwise have had to wait for the
previous context write from the GPU. Since then, we have limited the
set-to-gtt-domain call to the point where we first bind the vma, so
once in use we never stall, and we are sure to flush the context
following a load from swap.

Equally, we never applied the lessons learnt from ringbuffer submission
to execlists, so it is time to apply the flush of the lrc after load as
well.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
Acked-by: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
Reviewed-by: Mika Kuoppala <mika.kuoppala@xxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/intel_lrc.c        | 32 ++++++++++++++++++++++++--------
 drivers/gpu/drm/i915/intel_ringbuffer.c |  6 +++---
 2 files changed, 27 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 6840ec8db037..9b4e74151ace 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1060,12 +1060,34 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
 	spin_unlock_irq(&engine->timeline->lock);
 }
 
+static int __context_pin(struct i915_gem_context *ctx, struct i915_vma *vma)
+{
+	unsigned int flags;
+	int err;
+
+	/*
+	 * Clear this page out of any CPU caches for coherent swap-in/out.
+	 * We only want to do this on the first bind so that we do not stall
+	 * on an active context (which by nature is already on the GPU).
+	 */
+	if (!(vma->flags & I915_VMA_GLOBAL_BIND)) {
+		err = i915_gem_object_set_to_gtt_domain(vma->obj, true);
+		if (err)
+			return err;
+	}
+
+	flags = PIN_GLOBAL | PIN_HIGH;
+	if (ctx->ggtt_offset_bias)
+		flags |= PIN_OFFSET_BIAS | ctx->ggtt_offset_bias;
+
+	return i915_vma_pin(vma, 0, GEN8_LR_CONTEXT_ALIGN, flags);
+}
+
 static struct intel_ring *
 execlists_context_pin(struct intel_engine_cs *engine,
 		      struct i915_gem_context *ctx)
 {
 	struct intel_context *ce = &ctx->engine[engine->id];
-	unsigned int flags;
 	void *vaddr;
 	int ret;
 
@@ -1082,11 +1104,7 @@ execlists_context_pin(struct intel_engine_cs *engine,
 	}
 	GEM_BUG_ON(!ce->state);
 
-	flags = PIN_GLOBAL | PIN_HIGH;
-	if (ctx->ggtt_offset_bias)
-		flags |= PIN_OFFSET_BIAS | ctx->ggtt_offset_bias;
-
-	ret = i915_vma_pin(ce->state, 0, GEN8_LR_CONTEXT_ALIGN, flags);
+	ret = __context_pin(ctx, ce->state);
 	if (ret)
 		goto err;
 
@@ -1106,9 +1124,7 @@ execlists_context_pin(struct intel_engine_cs *engine,
 	ce->lrc_reg_state[CTX_RING_BUFFER_START+1] =
 		i915_ggtt_offset(ce->ring->vma);
 
-	ce->state->obj->mm.dirty = true;
 	ce->state->obj->pin_global++;
-
 	i915_gem_context_get(ctx);
 out:
 	return ce->ring;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 47fadf8da84e..7e2a671882fb 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1363,12 +1363,13 @@ static int context_pin(struct i915_gem_context *ctx)
 	struct i915_vma *vma = ctx->engine[RCS].state;
 	int ret;
 
-	/* Clear this page out of any CPU caches for coherent swap-in/out.
+	/*
+	 * Clear this page out of any CPU caches for coherent swap-in/out.
 	 * We only want to do this on the first bind so that we do not stall
 	 * on an active context (which by nature is already on the GPU).
 	 */
 	if (!(vma->flags & I915_VMA_GLOBAL_BIND)) {
-		ret = i915_gem_object_set_to_gtt_domain(vma->obj, false);
+		ret = i915_gem_object_set_to_gtt_domain(vma->obj, true);
 		if (ret)
 			return ret;
 	}
@@ -1445,7 +1446,6 @@ intel_ring_context_pin(struct intel_engine_cs *engine,
 		if (ret)
 			goto err;
 
-		ce->state->obj->mm.dirty = true;
 		ce->state->obj->pin_global++;
 	}
-- 
2.15.0
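
For readers who want the first-bind-only flush pattern in isolation, here
is a minimal, self-contained C sketch of the idea the commit message
describes. The mock_vma type and the set_to_gtt_domain()/context_pin()
functions below are hypothetical stand-ins invented for illustration, not
the real i915 API; in the driver the work is done by
i915_gem_object_set_to_gtt_domain() and i915_vma_pin() as in the diff.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the vma/object state the pattern needs. */
struct mock_vma {
	bool globally_bound;	/* models vma->flags & I915_VMA_GLOBAL_BIND */
	bool dirty;		/* models obj->mm.dirty */
};

/*
 * Models i915_gem_object_set_to_gtt_domain(obj, write): flush the CPU
 * caches for coherent swap-in/out and, with write=true, mark the backing
 * pages dirty so they are written back before being dropped.
 */
static int set_to_gtt_domain(struct mock_vma *vma, bool write)
{
	printf("clflush%s\n", write ? " + mark dirty" : "");
	if (write)
		vma->dirty = true;
	return 0;
}

/*
 * The pattern from the patch: flush only on the first (global) bind, so
 * re-pinning an already-active context never stalls, and pass write=true
 * so the domain tracking subsumes the open-coded "obj->mm.dirty = true"
 * that the patch removes.
 */
static int context_pin(struct mock_vma *vma)
{
	int err;

	if (!vma->globally_bound) {
		err = set_to_gtt_domain(vma, true);
		if (err)
			return err;
	}
	vma->globally_bound = true;	/* i915_vma_pin() does this for real */
	return 0;
}

int main(void)
{
	struct mock_vma vma = { false, false };

	context_pin(&vma);	/* first pin: flushes and marks dirty */
	context_pin(&vma);	/* later pins: no flush, hence no stall */
	return 0;
}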