Re: [PATCH 1/2] drm/sched: add drm_sched_start_timeout helper

On Tue, Oct 9, 2018 at 2:24 AM Christian König
<ckoenig.leichtzumerken@xxxxxxxxx> wrote:
>
> Am 08.10.2018 um 16:40 schrieb Nayan Deshmukh:
> > On Mon, Oct 8, 2018 at 8:36 PM Christian König
> > <ckoenig.leichtzumerken@xxxxxxxxx> wrote:
> >> Cleanup starting the timeout a bit.
> >>
> >> Signed-off-by: Christian König <christian.koenig@xxxxxxx>
> >> ---
> >>   drivers/gpu/drm/scheduler/sched_main.c | 29 +++++++++++++++++------------
> >>   1 file changed, 17 insertions(+), 12 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> >> index 4e8505d51795..bd7d11c47202 100644
> >> --- a/drivers/gpu/drm/scheduler/sched_main.c
> >> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> >> @@ -182,6 +182,20 @@ bool drm_sched_dependency_optimized(struct dma_fence* fence,
> >>   }
> >>   EXPORT_SYMBOL(drm_sched_dependency_optimized);
> >>
> >> +/**
> >> + * drm_sched_start_timeout - start timeout for reset worker
> >> + *
> >> + * @sched: scheduler instance to start the worker for
> >> + *
> >> + * Start the timeout for the given scheduler.
> >> + */
> >> +static void drm_sched_start_timeout(struct drm_gpu_scheduler *sched)
> >> +{
> >> +       if (sched->timeout != MAX_SCHEDULE_TIMEOUT &&
> >> +           !list_empty(&sched->ring_mirror_list))
> >> +               schedule_delayed_work(&sched->work_tdr, sched->timeout);
> >> +}
> >> +
> >>   /* job_finish is called after hw fence signaled
> >>    */
> >>   static void drm_sched_job_finish(struct work_struct *work)
> >> @@ -203,9 +217,7 @@ static void drm_sched_job_finish(struct work_struct *work)
> >>          /* remove job from ring_mirror_list */
> >>          list_del(&s_job->node);
> >>          /* queue TDR for next job */
> >> -       if (sched->timeout != MAX_SCHEDULE_TIMEOUT &&
> >> -           !list_empty(&sched->ring_mirror_list))
> >> -               schedule_delayed_work(&sched->work_tdr, sched->timeout);
> >> +       drm_sched_start_timeout(sched);
> >>          spin_unlock(&sched->job_list_lock);
> >>
> >>          dma_fence_put(&s_job->s_fence->finished);
> >> @@ -229,10 +241,7 @@ static void drm_sched_job_begin(struct drm_sched_job *s_job)
> >>
> >>          spin_lock(&sched->job_list_lock);
> >>          list_add_tail(&s_job->node, &sched->ring_mirror_list);
> >> -       if (sched->timeout != MAX_SCHEDULE_TIMEOUT &&
> >> -           list_first_entry_or_null(&sched->ring_mirror_list,
> >> -                                    struct drm_sched_job, node) == s_job)
> >> -               schedule_delayed_work(&sched->work_tdr, sched->timeout);
> >> +       drm_sched_start_timeout(sched);
> > This is not functionally the same as before. In this case we only want to
> > schedule a work item if this is the only job in the ring_mirror_list.
>
> And exactly that is the cleanup :)
>
> Once a work item is scheduled, its timer isn't modified again by
> schedule_delayed_work().
>
> So it is more common to just schedule a work item again if we
> potentially need it.
I thought that the timeout should count only the time a job spends on the
hardware engine, not the time it spends waiting in the hardware queue. That
way we start the counter for a job only when the job before it finishes
executing.

I have more doubts regarding your second patch, a long mail is on its way :)

Regards,
Nayan
>
> Christian.
>
> >
> > Regards,
> > Nayan
> >>          spin_unlock(&sched->job_list_lock);
> >>   }
> >>
> >> @@ -313,11 +322,6 @@ void drm_sched_job_recovery(struct drm_gpu_scheduler *sched)
> >>          int r;
> >>
> >>          spin_lock(&sched->job_list_lock);
> >> -       s_job = list_first_entry_or_null(&sched->ring_mirror_list,
> >> -                                        struct drm_sched_job, node);
> >> -       if (s_job && sched->timeout != MAX_SCHEDULE_TIMEOUT)
> >> -               schedule_delayed_work(&sched->work_tdr, sched->timeout);
> >> -
> >>          list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
> >>                  struct drm_sched_fence *s_fence = s_job->s_fence;
> >>                  struct dma_fence *fence;
> >> @@ -350,6 +354,7 @@ void drm_sched_job_recovery(struct drm_gpu_scheduler *sched)
> >>                  }
> >>                  spin_lock(&sched->job_list_lock);
> >>          }
> >> +       drm_sched_start_timeout(sched);
> >>          spin_unlock(&sched->job_list_lock);
> >>   }
> >>   EXPORT_SYMBOL(drm_sched_job_recovery);
> >> --
> >> 2.14.1
> >>
> >> _______________________________________________
> >> dri-devel mailing list
> >> dri-devel@xxxxxxxxxxxxxxxxxxxxx
> >> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>