If two or more jobs end up timing out concurrently, only one of them
(the one attached to the scheduler acquiring the lock) is fully handled.
The others remain in a dangling state where they are no longer part of
the scheduling queue, but still block something in the scheduler, thus
leading to repetitive timeouts when new jobs are queued.

Let's make sure all bad jobs are properly handled by the thread
acquiring the lock.

Signed-off-by: Boris Brezillon <boris.brezillon@xxxxxxxxxxxxx>
Fixes: f3ba91228e8e ("drm/panfrost: Add initial panfrost driver")
Cc: <stable@xxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/panfrost/panfrost_job.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 30e7b7196dab..e87edca51d84 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -25,7 +25,7 @@ struct panfrost_queue_state {
 	struct drm_gpu_scheduler sched;
-
+	struct drm_sched_job *bad;
 	u64 fence_context;
 	u64 emit_seqno;
 };
@@ -392,19 +392,29 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
 		job_read(pfdev, JS_TAIL_LO(js)),
 		sched_job);
 
+	/*
+	 * Collect the bad job here so it can be processed by the thread
+	 * acquiring the reset lock.
+	 */
+	pfdev->js->queue[js].bad = sched_job;
+
 	if (!mutex_trylock(&pfdev->reset_lock))
 		return;
 
 	for (i = 0; i < NUM_JOB_SLOTS; i++) {
 		struct drm_gpu_scheduler *sched = &pfdev->js->queue[i].sched;
 
-		drm_sched_stop(sched, sched_job);
 		if (js != i)
 			/* Ensure any timeouts on other slots have finished */
 			cancel_delayed_work_sync(&sched->work_tdr);
-	}
 
-	drm_sched_increase_karma(sched_job);
+		drm_sched_stop(sched, pfdev->js->queue[i].bad);
+
+		if (pfdev->js->queue[i].bad)
+			drm_sched_increase_karma(pfdev->js->queue[i].bad);
+
+		pfdev->js->queue[i].bad = NULL;
+	}
 
 	spin_lock_irqsave(&pfdev->js->job_lock, flags);
 	for (i = 0; i < NUM_JOB_SLOTS; i++) {
-- 
2.26.2
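
For readers who want to play with the synchronization idea outside the
kernel, below is a minimal userspace sketch of the pattern the patch
introduces: every slot that times out records its bad job, and whichever
thread wins the trylock sweeps all recorded bad jobs instead of only its
own. This is not Panfrost code; the names (queue_state, timeout_handler,
handle_bad_job, NUM_SLOTS) are invented for illustration, and it
deliberately omits the cancel_delayed_work_sync() step the real patch
relies on to guarantee the other slots' handlers have recorded their bad
job before the sweep runs.

/*
 * Hypothetical userspace model of the "collect the bad job, let the
 * lock winner handle them all" pattern.  Not Panfrost code: all names
 * below are made up for the example, and the synchronization that the
 * real driver gets from cancel_delayed_work_sync() is omitted.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_SLOTS 3

struct job {
	int id;
};

struct queue_state {
	struct job *bad;	/* set by the slot's timeout handler */
};

static struct queue_state queues[NUM_SLOTS];
static pthread_mutex_t reset_lock = PTHREAD_MUTEX_INITIALIZER;

static void handle_bad_job(int slot, struct job *bad)
{
	printf("slot %d: bad job %d handled by the reset winner\n",
	       slot, bad->id);
	free(bad);
}

/* Called concurrently whenever a job on @slot times out. */
static void timeout_handler(int slot, struct job *bad)
{
	/* Record the bad job so the lock winner can see it. */
	queues[slot].bad = bad;

	/* Only one of the concurrent handlers performs the reset... */
	if (pthread_mutex_trylock(&reset_lock))
		return;

	/* ...but it sweeps every slot's bad job, not just its own. */
	for (int i = 0; i < NUM_SLOTS; i++) {
		if (queues[i].bad) {
			handle_bad_job(i, queues[i].bad);
			queues[i].bad = NULL;
		}
	}

	pthread_mutex_unlock(&reset_lock);
}

static void *slot_thread(void *arg)
{
	long slot = (long)arg;
	struct job *bad = malloc(sizeof(*bad));

	bad->id = 100 + (int)slot;
	timeout_handler((int)slot, bad);
	return NULL;
}

int main(void)
{
	pthread_t threads[NUM_SLOTS];

	for (long i = 0; i < NUM_SLOTS; i++)
		pthread_create(&threads[i], NULL, slot_thread, (void *)i);
	for (int i = 0; i < NUM_SLOTS; i++)
		pthread_join(threads[i], NULL);

	return 0;
}

Built with "cc -pthread", the sketch shows all recorded bad jobs being
handled by the single thread that won the lock, which is the behaviour
the patch restores in the driver.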