On Thu, 29 Oct 2020 17:00:47 +0000
Steven Price <steven.price@xxxxxxx> wrote:

> The mutex within the panfrost_queue_state should have the lifetime of
> the queue, however it was erroneously initialised/destroyed during
> panfrost_job_{open,close} which is called every time a client
> opens/closes the drm node.
>
> Move the initialisation/destruction to panfrost_job_{init,fini} where it
> belongs.

Queued to drm-misc-next.

Thanks,

Boris

> Fixes: 1a11a88cfd9a ("drm/panfrost: Fix job timeout handling")
> Signed-off-by: Steven Price <steven.price@xxxxxxx>
> Reviewed-by: Boris Brezillon <boris.brezillon@xxxxxxxxxxxxx>
> ---
>  drivers/gpu/drm/panfrost/panfrost_job.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> index cfb431624eea..145ad37eda6a 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -595,6 +595,8 @@ int panfrost_job_init(struct panfrost_device *pfdev)
>  	}
>
>  	for (j = 0; j < NUM_JOB_SLOTS; j++) {
> +		mutex_init(&js->queue[j].lock);
> +
>  		js->queue[j].fence_context = dma_fence_context_alloc(1);
>
>  		ret = drm_sched_init(&js->queue[j].sched,
> @@ -625,8 +627,10 @@ void panfrost_job_fini(struct panfrost_device *pfdev)
>
>  	job_write(pfdev, JOB_INT_MASK, 0);
>
> -	for (j = 0; j < NUM_JOB_SLOTS; j++)
> +	for (j = 0; j < NUM_JOB_SLOTS; j++) {
>  		drm_sched_fini(&js->queue[j].sched);
> +		mutex_destroy(&js->queue[j].lock);
> +	}
>
> }
>
> @@ -638,7 +642,6 @@ int panfrost_job_open(struct panfrost_file_priv *panfrost_priv)
>  	int ret, i;
>
>  	for (i = 0; i < NUM_JOB_SLOTS; i++) {
> -		mutex_init(&js->queue[i].lock);
>  		sched = &js->queue[i].sched;
>  		ret = drm_sched_entity_init(&panfrost_priv->sched_entity[i],
>  					    DRM_SCHED_PRIORITY_NORMAL, &sched,
> @@ -657,7 +660,6 @@ void panfrost_job_close(struct panfrost_file_priv *panfrost_priv)
>
>  	for (i = 0; i < NUM_JOB_SLOTS; i++) {
>  		drm_sched_entity_destroy(&panfrost_priv->sched_entity[i]);
> -		mutex_destroy(&js->queue[i].lock);
>  		/* Ensure any timeouts relating to this client have completed */
>  		flush_delayed_work(&js->queue[i].sched.work_tdr);
>  	}

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel