On Wed, 27 Sep 2017 11:30:34 +0200, Sagar Arun Kamble
<sagar.a.kamble@xxxxxxxxx> wrote:
Currently the GPU is reset at the end of suspend via i915_gem_sanitize.
On resume, GuC will not be reloaded until intel_uc_init_hw runs during
the GEM resume flow, yet the action to exit sleep could still be sent to
GuC based on the stale FW load status. To make sure we don't invoke that
action, mark the GuC FW load status as NONE at the end of a full GPU reset.
v2: Rebase.
v3: Removed intel_guc_sanitize. Marking load status as NONE at the
GPU reset point. (Chris/Michal)
Hmm, I'm not sure that touching a guc private member from outside of the
guc/uc code is a good idea. Maybe we should keep the intel_uc_sanitize()
call, but call it from i915_gem_reset(), as that place looks more
appropriate?
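Roughly something like this (a sketch only; the helper body and the exact
call site are illustrative, not a finished patch):

/* intel_uc.c: keep a sanitize helper so only uc code touches guc internals */
void intel_uc_sanitize(struct drm_i915_private *dev_priv)
{
	struct intel_guc *guc = &dev_priv->guc;

	/*
	 * After a full GPU reset the GuC firmware is no longer running,
	 * so record that and let the resume/init path reload it cleanly.
	 */
	guc->fw.load_status = INTEL_UC_FIRMWARE_NONE;
}

/* i915_gem.c: call it from the reset path instead of from intel_uncore.c */
void i915_gem_reset(struct drm_i915_private *dev_priv)
{
	/* ... existing reset handling ... */

	intel_uc_sanitize(dev_priv);
}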
Michal
Signed-off-by: Sagar Arun Kamble <sagar.a.kamble@xxxxxxxxx>
Cc: Michal Wajdeczko <michal.wajdeczko@xxxxxxxxx>
Cc: Michał Winiarski <michal.winiarski@xxxxxxxxx>
Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
---
drivers/gpu/drm/i915/intel_uncore.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
index b3c3f94..83300f3 100644
--- a/drivers/gpu/drm/i915/intel_uncore.c
+++ b/drivers/gpu/drm/i915/intel_uncore.c
@@ -1763,6 +1763,16 @@ int intel_gpu_reset(struct drm_i915_private *dev_priv, unsigned engine_mask)
 	}
 	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
+	/*
+	 * FIXME: intel_uc_resume currently depends on load_status to resume
+	 * GuC. Since we are resetting Full GPU at the end of suspend, let us
+	 * mark the load status as NONE. Once intel_uc_resume is updated to take
+	 * into consideration GuC load state based on WOPCM, we can skip this
+	 * state change.
+	 */
+	if (engine_mask == ALL_ENGINES)
+		dev_priv->guc.fw.load_status = INTEL_UC_FIRMWARE_NONE;
+
 	return ret;
 }
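For reference, the dependency the FIXME describes has roughly this shape (a
sketch of the guard only, not the actual intel_uc_resume implementation; the
construction of the exit-sleep action is omitted):

int intel_uc_resume(struct drm_i915_private *dev_priv)
{
	struct intel_guc *guc = &dev_priv->guc;

	/*
	 * If the firmware is not marked as successfully loaded, skip the
	 * GuC action entirely; this is why the reset path above clears the
	 * status to NONE.
	 */
	if (guc->fw.load_status != INTEL_UC_FIRMWARE_SUCCESS)
		return 0;

	/* Otherwise, send the exit-sleep action to the GuC (omitted here). */
	return 0;
}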