On 21/05/2019 17:36, Chris Wilson wrote:
> Quoting Lionel Landwerlin (2019-05-21 15:08:52)
>> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
>> index f263a8374273..2ad95977f7a8 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
>> @@ -2085,7 +2085,7 @@ static int gen9_emit_bb_start(struct i915_request *rq,
>>  	if (IS_ERR(cs))
>>  		return PTR_ERR(cs);
>> -	*cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
>> +	*cs++ = MI_ARB_ON_OFF | rq->hw_context->arb_enable;
>
> My prediction is that this will result in this context being reset due
> to preemption timeouts and the context under profile being banned.
>
> Note that preemption timeouts will be the primary means for hang
> detection for endless batches.
> -Chris
Thanks,
One question: how is that handled for compute workloads at the moment?
I thought those were still not fully preemptible.
I need to rework this with a more "software" approach to holding off
preemption. Does adding a condition in intel_lrc.c's need_preempt()
look like the right direction?
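
Purely as a sketch of what I mean (the arb_enable field is the one added
by the patch above; the helper name, the exact check and the trailing
priority comparison are illustrative stand-ins for the real intel_lrc.c
logic, not a tested implementation):

/*
 * Illustrative only: a context that needs arbitration kept off
 * (e.g. while under profile) records that in its arb_enable field.
 */
static void intel_context_set_preemption(struct intel_context *ce,
					 bool allow_preemption)
{
	ce->arb_enable = allow_preemption ? MI_ARB_ENABLE : MI_ARB_DISABLE;
}

/*
 * Illustrative only: need_preempt() then refuses to preempt a request
 * running on such a context; the final return stands in for the
 * existing priority/queue checks.
 */
static bool need_preempt(const struct intel_engine_cs *engine,
			 const struct i915_request *rq)
{
	if (rq->hw_context->arb_enable == MI_ARB_DISABLE)
		return false;

	return engine->execlists.queue_priority_hint > rq_prio(rq);
}

That way the scheduler would simply never inject a preemption while such
a request is active, rather than relying only on the MI_ARB_* instructions
emitted into the ring.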
Cheers,
-Lionel