On 27/09/2022 22:36, Ceraolo Spurio, Daniele wrote:
On 9/27/2022 12:45 AM, Tvrtko Ursulin wrote:
On 27/09/2022 07:49, Andrzej Hajda wrote:
On 27.09.2022 01:34, Ceraolo Spurio, Daniele wrote:
On 9/26/2022 3:44 PM, Andi Shyti wrote:
Hi Andrzej,
On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
Capturing error state is time consuming (up to 350ms on DG2), so it should be avoided if possible. Context reset triggered by context removal is a good example.
With this patch multiple IGT tests will no longer time out and should run faster.
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
Signed-off-by: Andrzej Hajda <andrzej.hajda@xxxxxxxxx>
fine for me:
Reviewed-by: Andi Shyti <andi.shyti@xxxxxxxxxxxxxxx>
Just to be on the safe side, can we also have an ack from any of the GuC folks? Daniele, John?
Andi
---
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 22ba66e48a9b01..cb58029208afe1 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -4425,7 +4425,8 @@ static void guc_handle_context_reset(struct intel_guc *guc,
trace_intel_context_reset(ce);
if (likely(!intel_context_is_banned(ce))) {
- capture_error_state(guc, ce);
+ if (!intel_context_is_exiting(ce))
+ capture_error_state(guc, ce);
I am not sure here - if we have a persistent context which caused a GPU hang I'd expect we'd still want error capture.
What causes the reset in the affected IGTs? Always preemption timeout?
guc_context_replay(ce);
You definitely don't want to replay requests of a context that is going away.
My intention was just to avoid the error capture, but that's even better; only the condition needs to change:
- if (likely(!intel_context_is_banned(ce))) {
+ if (likely(intel_context_is_schedulable(ce))) {
Yes, that helper was intended to be used for contexts which should not be scheduled after exit or ban.
Daniele - you say there are some misses in the GuC backend. Should most, or even all, of the checks in intel_guc_submission.c be converted to use intel_context_is_schedulable? My idea indeed was that "ban" should be a level above the backends: a backend should only distinguish between "should I run this or not", and not the reason.
I think that all of them should be updated, but I'd like Matt B to confirm as he's more familiar with the code than me.
Right, that sounds plausible to me as well.
One thing I forgot to mention - the only place where the backend cares about the difference between "schedulable" and "banned" is when it picks the preempt timeout for non-schedulable contexts. The strict 1ms timeout is applied only to banned (so bad or naughty) contexts, while the ones which are exiting cleanly get the full preempt timeout as otherwise configured. This solves the ugly user-experience quirk where GPU resets/errors were logged upon exit/Ctrl-C of a well-behaved application (using non-persistent contexts). Hopefully GuC can match that behaviour so customers stay happy.
Regards,
Tvrtko