On 08/02/2018 01:50 PM, Nayan Deshmukh wrote:
On Thu, Aug 2, 2018 at 10:31 AM Zhang, Jerry (Junwei) <Jerry.Zhang@xxxxxxx <mailto:Jerry.Zhang@xxxxxxx>> wrote:

On 07/12/2018 02:36 PM, Nayan Deshmukh wrote:
> Signed-off-by: Nayan Deshmukh <nayan26deshmukh@xxxxxxxxx <mailto:nayan26deshmukh@xxxxxxxxx>>
> ---
>  drivers/gpu/drm/scheduler/gpu_scheduler.c | 3 +++
>  include/drm/gpu_scheduler.h               | 2 ++
>  2 files changed, 5 insertions(+)
>
> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> index 429b1328653a..3dc1a4f07e3f 100644
> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> @@ -538,6 +538,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
>  	trace_drm_sched_job(sched_job, entity);
>
>  	first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
> +	atomic_inc(&entity->sched->num_jobs);

Shall we use hw_rq_count directly or merge them together?

hw_rq_count is the number of jobs that are currently in the hardware queue, as compared to num_jobs, which is the number of jobs in the software queue. num_jobs gives a better idea of the load on a scheduler; that's why I added that field and used it to decide the scheduler with the least load.
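To make the load-balancing use of num_jobs concrete, here is a minimal sketch of how a least-loaded pick could read the counter. The helper name pick_least_loaded_sched and the sched_list/num_scheds parameters are invented for illustration and are not part of this series; only the num_jobs field comes from the patch above.

#include <linux/atomic.h>
#include <linux/kernel.h>
#include <drm/gpu_scheduler.h>

/* Illustrative only: return the scheduler with the fewest outstanding jobs. */
static struct drm_gpu_scheduler *
pick_least_loaded_sched(struct drm_gpu_scheduler **sched_list,
			unsigned int num_scheds)
{
	struct drm_gpu_scheduler *best = NULL;
	unsigned int least_jobs = UINT_MAX;
	unsigned int i;

	for (i = 0; i < num_scheds; i++) {
		unsigned int jobs = atomic_read(&sched_list[i]->num_jobs);

		/*
		 * num_jobs counts every job pushed to the scheduler that has
		 * not finished yet, so it reflects total load better than
		 * hw_rq_count, which only counts jobs on the hardware ring.
		 */
		if (jobs < least_jobs) {
			least_jobs = jobs;
			best = sched_list[i];
		}
	}

	return best;
}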
Thanks for your explanation.

Then it may be more reasonable to move atomic_dec(&sched->num_jobs) to after drm_sched_fence_scheduled(), or just before atomic_inc(&sched->hw_rq_count). What do you think?

Regards,
Jerry
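For reference, a rough sketch of where the decrement would land under that suggestion, shown here just before the hw_rq_count increment in drm_sched_main(); the surrounding lines are abbreviated and may not match the function in this tree exactly:

	sched_job = drm_sched_entity_pop_job(entity);
	if (!sched_job)
		continue;

	s_fence = sched_job->s_fence;

	/* proposed: the job leaves the software queue once the scheduler picks it up */
	atomic_dec(&sched->num_jobs);
	atomic_inc(&sched->hw_rq_count);
	drm_sched_job_begin(sched_job);

	fence = sched->ops->run_job(sched_job);
	/* the alternative placement would be right after this call */
	drm_sched_fence_scheduled(s_fence);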
Regards,
Nayan

Regards,
Jerry

>
>  	/* first job wakes up scheduler */
>  	if (first) {
> @@ -818,6 +819,7 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
>
>  	dma_fence_get(&s_fence->finished);
>  	atomic_dec(&sched->hw_rq_count);
> +	atomic_dec(&sched->num_jobs);
>  	drm_sched_fence_finished(s_fence);
>
>  	trace_drm_sched_process_job(s_fence);
> @@ -935,6 +937,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
>  	INIT_LIST_HEAD(&sched->ring_mirror_list);
>  	spin_lock_init(&sched->job_list_lock);
>  	atomic_set(&sched->hw_rq_count, 0);
> +	atomic_set(&sched->num_jobs, 0);
>  	atomic64_set(&sched->job_id_count, 0);
>
>  	/* Each scheduler will run on a seperate kernel thread */
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 43e93d6077cf..605bd4ad2397 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -257,6 +257,7 @@ struct drm_sched_backend_ops {
>   * @job_list_lock: lock to protect the ring_mirror_list.
>   * @hang_limit: once the hangs by a job crosses this limit then it is marked
>   *              guilty and it will be considered for scheduling further.
> + * @num_jobs: the number of jobs in queue in the scheduler
>   *
>   * One scheduler is implemented for each hardware ring.
>   */
> @@ -274,6 +275,7 @@ struct drm_gpu_scheduler {
>  	struct list_head		ring_mirror_list;
>  	spinlock_t			job_list_lock;
>  	int				hang_limit;
> +	atomic_t			num_jobs;
>  };
>
>  int drm_sched_init(struct drm_gpu_scheduler *sched,
>
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel