[PATCH 3/4] drm/scheduler: add new function to get least loaded sched v2

Another big question:
I agree the general idea of balancing scheduler load across the same ring family is good.
But when jobs from the same entity run on different schedulers, a later job could complete ahead of an earlier one, right?
That would break the fence design: within a single fence context, a later fence must be signaled after the earlier ones.
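To make the concern concrete, here is a toy userspace model of that invariant (not kernel code, and the names are made up for illustration): fences that share a context carry increasing seqnos, and a later seqno signaling while an earlier one is still pending is exactly what cross-ring scheduling of one entity's jobs could produce.

#include <stdbool.h>
#include <stdint.h>

/* Toy stand-in for a dma_fence: only the fields needed for ordering. */
struct toy_fence {
	uint64_t context;   /* all jobs of one entity share one context */
	uint64_t seqno;     /* increases with each job in that context */
	bool signaled;
};

/*
 * The invariant: within one context, a higher-seqno fence must not be
 * signaled while a lower-seqno fence is still pending.
 */
static bool toy_ordering_ok(const struct toy_fence *earlier,
			    const struct toy_fence *later)
{
	if (earlier->context != later->context)
		return true;    /* no ordering rule across contexts */
	if (later->seqno <= earlier->seqno)
		return true;    /* "later" is not actually later */
	return !(later->signaled && !earlier->signaled);
}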

Anything I missed?

Regards,
David Zhou

From: dri-devel <dri-devel-bounces@xxxxxxxxxxxxxxxxxxxxx> On Behalf Of Nayan Deshmukh
Sent: Thursday, August 02, 2018 12:07 AM
To: Grodzovsky, Andrey <Andrey.Grodzovsky at amd.com>
Cc: amd-gfx at lists.freedesktop.org; Maling list - DRI developers <dri-devel at lists.freedesktop.org>; Koenig, Christian <Christian.Koenig at amd.com>
Subject: Re: [PATCH 3/4] drm/scheduler: add new function to get least loaded sched v2

Yes, that is correct.

Nayan

On Wed, Aug 1, 2018, 9:05 PM Andrey Grodzovsky <Andrey.Grodzovsky at amd.com> wrote:
Clarification question - if the run queues belong to different schedulers, they effectively point to different rings.

That means we allow a drm_sched_entity to be moved (rescheduled) from one ring to another. I assume the idea in the first place is that you have a set of HW rings and you can utilize any of them for your jobs (like compute rings). Correct?
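
Something along these lines is what I have in mind (a rough sketch only, assuming the drm_sched_entity_init() variant from this patch series that takes an rq_list; the function and variable names below are made up, not taken from the patches):

/*
 * Kernel-context sketch: one entity fed by the normal-priority run
 * queue of several compute rings, so its jobs can land on whichever
 * ring the scheduler picks.  Assumes drm_sched_entity_init() copies
 * rq_list (otherwise the array would have to outlive the entity).
 */
static int init_balanced_compute_entity(struct drm_sched_entity *entity,
					struct drm_gpu_scheduler **sched_list,
					unsigned int num_rings)
{
	struct drm_sched_rq *rq_list[8];	/* sketch assumes num_rings <= 8 */
	unsigned int i;

	if (!num_rings || num_rings > 8)
		return -EINVAL;

	for (i = 0; i < num_rings; ++i)
		rq_list[i] = &sched_list[i]->sched_rq[DRM_SCHED_PRIORITY_NORMAL];

	return drm_sched_entity_init(entity, rq_list, num_rings, NULL);
}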

Andrey


On 08/01/2018 04:20 AM, Nayan Deshmukh wrote:
> The function selects the run queue from the rq_list with the
> least load. The load is decided by the number of jobs in a
> scheduler.
>
> v2: avoid using atomic read twice consecutively, instead store
>      it locally
>
> Signed-off-by: Nayan Deshmukh <nayan26deshmukh at gmail.com>
> ---
>   drivers/gpu/drm/scheduler/gpu_scheduler.c | 25 +++++++++++++++++++++++++
>   1 file changed, 25 insertions(+)
>
> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> index 375f6f7f6a93..fb4e542660b0 100644
> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> @@ -255,6 +255,31 @@ static bool drm_sched_entity_is_ready(struct drm_sched_entity *entity)
>       return true;
>   }
>
> +/**
> + * drm_sched_entity_get_free_sched - Get the rq from rq_list with least load
> + *
> + * @entity: scheduler entity
> + *
> + * Return the pointer to the rq with least load.
> + */
> +static struct drm_sched_rq *
> +drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
> +{
> +     struct drm_sched_rq *rq = NULL;
> +     unsigned int min_jobs = UINT_MAX, num_jobs;
> +     int i;
> +
> +     for (i = 0; i < entity->num_rq_list; ++i) {
> +             num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
> +             if (num_jobs < min_jobs) {
> +                     min_jobs = num_jobs;
> +                     rq = entity->rq_list[i];
> +             }
> +     }
> +
> +     return rq;
> +}
> +
>   static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
>                                   struct dma_fence_cb *cb)
>   {
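
For reference, an illustrative (and purely hypothetical) caller of the new helper could look like the sketch below: migrate the entity to the least-loaded run queue only while it has no queued jobs, which is one way to sidestep the fence-ordering concern raised above. This is not the patch's code, and the actual wiring elsewhere in the series may differ.

/*
 * Hypothetical caller sketch, assumed to live in gpu_scheduler.c next
 * to drm_sched_entity_get_free_sched() since that helper is static.
 */
static void example_entity_select_rq(struct drm_sched_entity *entity)
{
	struct drm_sched_rq *rq;

	/* Don't migrate while this entity still has queued jobs. */
	if (spsc_queue_count(&entity->job_queue) || entity->num_rq_list <= 1)
		return;

	rq = drm_sched_entity_get_free_sched(entity);
	if (rq == entity->rq)
		return;

	spin_lock(&entity->rq_lock);
	drm_sched_rq_remove_entity(entity->rq, entity);
	entity->rq = rq;
	spin_unlock(&entity->rq_lock);
}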