Re: [PATCH v4 12/14] drm/scheduler: Job timeout handler returns status

On 19.01.21 at 18:47, Luben Tuikov wrote:
On 2021-01-19 2:53 a.m., Christian König wrote:
On 18.01.21 at 22:01, Andrey Grodzovsky wrote:
From: Luben Tuikov <luben.tuikov@xxxxxxx>

This patch does not change current behaviour.

The driver's job timeout handler now returns
a status indicating back to the DRM layer whether
the task (job) was successfully aborted or whether
more time should be given to the task to complete.

Default behaviour, as of this patch, is preserved,
except in the obvious-by-comment case in the
Panfrost driver, as documented below.

All drivers which make use of the
drm_sched_backend_ops' .timedout_job() callback
have been updated accordingly and now return the
would've-been default value of
DRM_TASK_STATUS_ALIVE to restart the task's
timeout timer--this is the old behaviour, and
is preserved by this patch.

In the case of the Panfrost driver, its timedout
callback correctly first checks whether the job has
completed in due time and, if so, now returns
DRM_TASK_STATUS_COMPLETE to notify the DRM layer
that the task can be moved to the done list, to be
freed later. In the other two subsequent checks,
the value of DRM_TASK_STATUS_ALIVE is returned, as
per the default behaviour.

More involved driver solutions can be had
in subsequent patches.

v2: Use an enum as the status of a driver's job
    timeout callback method.

v4: (By Andrey Grodzovsky)
Replace DRM_TASK_STATUS_COMPLETE with DRM_TASK_STATUS_ENODEV
to enable a hint to the scheduler for when NOT to rearm the
timeout timer.
As Lucas pointed out, returning the job (or task) status doesn't make
much sense.

What we return here is the status of the scheduler.

I would either rename the enum or completely drop it and return a
negative error status.
Yes, that could be had.

Although dropping the enum and returning [-1, 0] might
make the meaning of the return status vague. Using an enum with an
appropriate name makes the intention clear to the next programmer.

Completely agree, but -ENODEV and 0 could work.

On the other hand, using DRM_SCHED_* is perfectly fine with me as well.

Christian.
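
For illustration, a minimal sketch of that errno-style alternative
(hypothetical; the foo_* names are made up and this is not what the
patch implements):

static int foo_job_timedout(struct drm_sched_job *sched_job)
{
	if (foo_gpu_recover(sched_job))	/* hypothetical recovery helper */
		return 0;	/* task is alive; rearm the timeout timer */

	return -ENODEV;		/* device is unusable; do not rearm */
}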


Now, Andrey did rename one of the enumerated values to
DRM_TASK_STATUS_ENODEV; perhaps doing the same, but with:

enum drm_sched_status {
	DRM_SCHED_STAT_NONE, /* Reserve 0 */
	DRM_SCHED_STAT_NOMINAL,
	DRM_SCHED_STAT_ENODEV,
};

and also renaming the enum to the above, would that be acceptable?

Regards,
Luben
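
For reference, a sketch of how the drm_sched_backend_ops hook and a
driver callback would then read (illustrative only, assuming the enum
above; drivers would return DRM_SCHED_STAT_NOMINAL where this patch
returns DRM_TASK_STATUS_ALIVE):

	enum drm_sched_status (*timedout_job)(struct drm_sched_job *sched_job);

static enum drm_sched_status lima_sched_timedout_job(struct drm_sched_job *job)
{
	struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);

	/* ... recovery as in the patch below ... */
	drm_sched_resubmit_jobs(&pipe->base);
	drm_sched_start(&pipe->base, true);

	return DRM_SCHED_STAT_NOMINAL;	/* was DRM_TASK_STATUS_ALIVE */
}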

Apart from that looks fine to me,
Christian.


Cc: Alexander Deucher <Alexander.Deucher@xxxxxxx>
Cc: Andrey Grodzovsky <Andrey.Grodzovsky@xxxxxxx>
Cc: Christian König <christian.koenig@xxxxxxx>
Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
Cc: Lucas Stach <l.stach@xxxxxxxxxxxxxx>
Cc: Russell King <linux+etnaviv@xxxxxxxxxxxxxxx>
Cc: Christian Gmeiner <christian.gmeiner@xxxxxxxxx>
Cc: Qiang Yu <yuq825@xxxxxxxxx>
Cc: Rob Herring <robh@xxxxxxxxxx>
Cc: Tomeu Vizoso <tomeu.vizoso@xxxxxxxxxxxxx>
Cc: Steven Price <steven.price@xxxxxxx>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@xxxxxxxxxxxxx>
Cc: Eric Anholt <eric@xxxxxxxxxx>
Reported-by: kernel test robot <lkp@xxxxxxxxx>
Signed-off-by: Luben Tuikov <luben.tuikov@xxxxxxx>
Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@xxxxxxx>
---
   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c |  6 ++++--
   drivers/gpu/drm/etnaviv/etnaviv_sched.c | 10 +++++++++-
   drivers/gpu/drm/lima/lima_sched.c       |  4 +++-
   drivers/gpu/drm/panfrost/panfrost_job.c |  9 ++++++---
   drivers/gpu/drm/scheduler/sched_main.c  |  4 +---
   drivers/gpu/drm/v3d/v3d_sched.c         | 32 +++++++++++++++++---------------
   include/drm/gpu_scheduler.h             | 17 ++++++++++++++---
   7 files changed, 54 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index ff48101..a111326 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -28,7 +28,7 @@
   #include "amdgpu.h"
   #include "amdgpu_trace.h"
-static void amdgpu_job_timedout(struct drm_sched_job *s_job)
+static enum drm_task_status amdgpu_job_timedout(struct drm_sched_job *s_job)
   {
   	struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
   	struct amdgpu_job *job = to_amdgpu_job(s_job);
@@ -41,7 +41,7 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
   	    amdgpu_ring_soft_recovery(ring, job->vmid, s_job->s_fence->parent)) {
   		DRM_ERROR("ring %s timeout, but soft recovered\n",
   			  s_job->sched->name);
-		return;
+		return DRM_TASK_STATUS_ALIVE;
   	}
amdgpu_vm_get_task_info(ring->adev, job->pasid, &ti);
@@ -53,10 +53,12 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
if (amdgpu_device_should_recover_gpu(ring->adev)) {
   		amdgpu_device_gpu_recover(ring->adev, job);
+		return DRM_TASK_STATUS_ALIVE;
   	} else {
   		drm_sched_suspend_timeout(&ring->sched);
   		if (amdgpu_sriov_vf(adev))
   			adev->virt.tdr_debug = true;
+		return DRM_TASK_STATUS_ALIVE;
   	}
   }
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
index cd46c88..c495169 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
@@ -82,7 +82,8 @@ static struct dma_fence *etnaviv_sched_run_job(struct drm_sched_job *sched_job)
   	return fence;
   }
-static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
+static enum drm_task_status etnaviv_sched_timedout_job(struct drm_sched_job
+						       *sched_job)
   {
   	struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job);
   	struct etnaviv_gpu *gpu = submit->gpu;
@@ -120,9 +121,16 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
	drm_sched_resubmit_jobs(&gpu->sched);
+	/* Tell the DRM scheduler that this task needs
+	 * more time.
+	 */
+	drm_sched_start(&gpu->sched, true);
+	return DRM_TASK_STATUS_ALIVE;
+
   out_no_timeout:
   	/* restart scheduler after GPU is usable again */
   	drm_sched_start(&gpu->sched, true);
+	return DRM_TASK_STATUS_ALIVE;
   }
static void etnaviv_sched_free_job(struct drm_sched_job *sched_job)
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index 63b4c56..66d9236 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -415,7 +415,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
   	mutex_unlock(&dev->error_task_list_lock);
   }
-static void lima_sched_timedout_job(struct drm_sched_job *job)
+static enum drm_task_status lima_sched_timedout_job(struct drm_sched_job *job)
   {
   	struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
   	struct lima_sched_task *task = to_lima_task(job);
@@ -449,6 +449,8 @@ static void lima_sched_timedout_job(struct drm_sched_job *job)
drm_sched_resubmit_jobs(&pipe->base);
   	drm_sched_start(&pipe->base, true);
+
+	return DRM_TASK_STATUS_ALIVE;
   }
static void lima_sched_free_job(struct drm_sched_job *job)
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 04e6f6f..10d41ac 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -432,7 +432,8 @@ static void panfrost_scheduler_start(struct panfrost_queue_state *queue)
   	mutex_unlock(&queue->lock);
   }
-static void panfrost_job_timedout(struct drm_sched_job *sched_job)
+static enum drm_task_status panfrost_job_timedout(struct drm_sched_job
+						  *sched_job)
   {
   	struct panfrost_job *job = to_panfrost_job(sched_job);
   	struct panfrost_device *pfdev = job->pfdev;
@@ -443,7 +444,7 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
   	 * spurious. Bail out.
   	 */
   	if (dma_fence_is_signaled(job->done_fence))
-		return;
+		return DRM_TASK_STATUS_ALIVE;
dev_err(pfdev->dev, "gpu sched timeout, js=%d, config=0x%x, status=0x%x, head=0x%x, tail=0x%x, sched_job=%p",
   		js,
@@ -455,11 +456,13 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
/* Scheduler is already stopped, nothing to do. */
   	if (!panfrost_scheduler_stop(&pfdev->js->queue[js], sched_job))
-		return;
+		return DRM_TASK_STATUS_ALIVE;
/* Schedule a reset if there's no reset in progress. */
   	if (!atomic_xchg(&pfdev->reset.pending, 1))
   		schedule_work(&pfdev->reset.work);
+
+	return DRM_TASK_STATUS_ALIVE;
   }
static const struct drm_sched_backend_ops panfrost_sched_ops = {
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 92637b7..73fccc5 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -527,7 +527,7 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
   EXPORT_SYMBOL(drm_sched_start);
/**
- * drm_sched_resubmit_jobs - helper to relunch job from pending ring list
+ * drm_sched_resubmit_jobs - helper to relaunch jobs from the pending list
    *
    * @sched: scheduler instance
    *
@@ -561,8 +561,6 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
   		} else {
   			s_job->s_fence->parent = fence;
   		}
-
-
   	}
   }
   EXPORT_SYMBOL(drm_sched_resubmit_jobs);
diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
index 452682e..3740665e 100644
--- a/drivers/gpu/drm/v3d/v3d_sched.c
+++ b/drivers/gpu/drm/v3d/v3d_sched.c
@@ -259,7 +259,7 @@ v3d_cache_clean_job_run(struct drm_sched_job *sched_job)
   	return NULL;
   }
-static void
+static enum drm_task_status
   v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
   {
   	enum v3d_queue q;
@@ -285,6 +285,8 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
   	}
mutex_unlock(&v3d->reset_lock);
+
+	return DRM_TASK_STATUS_ALIVE;
   }
/* If the current address or return address have changed, then the GPU
@@ -292,7 +294,7 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
    * could fail if the GPU got in an infinite loop in the CL, but that
    * is pretty unlikely outside of an i-g-t testcase.
    */
-static void
+static enum drm_task_status
   v3d_cl_job_timedout(struct drm_sched_job *sched_job, enum v3d_queue q,
   		    u32 *timedout_ctca, u32 *timedout_ctra)
   {
@@ -304,39 +306,39 @@ v3d_cl_job_timedout(struct drm_sched_job *sched_job, enum v3d_queue q,
   	if (*timedout_ctca != ctca || *timedout_ctra != ctra) {
   		*timedout_ctca = ctca;
   		*timedout_ctra = ctra;
-		return;
+		return DRM_TASK_STATUS_ALIVE;
   	}
-	v3d_gpu_reset_for_timeout(v3d, sched_job);
+	return v3d_gpu_reset_for_timeout(v3d, sched_job);
   }
-static void
+static enum drm_task_status
   v3d_bin_job_timedout(struct drm_sched_job *sched_job)
   {
   	struct v3d_bin_job *job = to_bin_job(sched_job);
-	v3d_cl_job_timedout(sched_job, V3D_BIN,
-			    &job->timedout_ctca, &job->timedout_ctra);
+	return v3d_cl_job_timedout(sched_job, V3D_BIN,
+				   &job->timedout_ctca, &job->timedout_ctra);
   }
-static void
+static enum drm_task_status
   v3d_render_job_timedout(struct drm_sched_job *sched_job)
   {
   	struct v3d_render_job *job = to_render_job(sched_job);
-	v3d_cl_job_timedout(sched_job, V3D_RENDER,
-			    &job->timedout_ctca, &job->timedout_ctra);
+	return v3d_cl_job_timedout(sched_job, V3D_RENDER,
+				   &job->timedout_ctca, &job->timedout_ctra);
   }
-static void
+static enum drm_task_status
   v3d_generic_job_timedout(struct drm_sched_job *sched_job)
   {
   	struct v3d_job *job = to_v3d_job(sched_job);
-	v3d_gpu_reset_for_timeout(job->v3d, sched_job);
+	return v3d_gpu_reset_for_timeout(job->v3d, sched_job);
   }
-static void
+static enum drm_task_status
   v3d_csd_job_timedout(struct drm_sched_job *sched_job)
   {
   	struct v3d_csd_job *job = to_csd_job(sched_job);
@@ -348,10 +350,10 @@ v3d_csd_job_timedout(struct drm_sched_job *sched_job)
   	 */
   	if (job->timedout_batches != batches) {
   		job->timedout_batches = batches;
-		return;
+		return DRM_TASK_STATUS_ALIVE;
   	}
-	v3d_gpu_reset_for_timeout(v3d, sched_job);
+	return v3d_gpu_reset_for_timeout(v3d, sched_job);
   }
static const struct drm_sched_backend_ops v3d_bin_sched_ops = {
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 975e8a6..3ba36bc 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -206,6 +206,11 @@ static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
   	return s_job && atomic_inc_return(&s_job->karma) > threshold;
   }
+enum drm_task_status {
+	DRM_TASK_STATUS_ENODEV,
+	DRM_TASK_STATUS_ALIVE
+};
+
   /**
    * struct drm_sched_backend_ops
    *
@@ -230,10 +235,16 @@ struct drm_sched_backend_ops {
   	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
/**
-         * @timedout_job: Called when a job has taken too long to execute,
-         * to trigger GPU recovery.
+	 * @timedout_job: Called when a job has taken too long to execute,
+	 * to trigger GPU recovery.
+	 *
+	 * Return DRM_TASK_STATUS_ALIVE, if the task (job) is healthy
+	 * and executing in the hardware, i.e. it needs more time.
+	 *
+	 * Return DRM_TASK_STATUS_ENODEV, if the task (job) has
+	 * been aborted.
   	 */
-	void (*timedout_job)(struct drm_sched_job *sched_job);
+	enum drm_task_status (*timedout_job)(struct drm_sched_job *sched_job);
/**
            * @free_job: Called once the job's finished fence has been signaled

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel



