[PATCH 2/4] drm/scheduler: add counter for total jobs in scheduler

Yeah, I've actually added one before pushing it to amd-staging-drm-next.

But thanks for the reminder, wanted to note that to Nayan as well :)

Christian.

Am 01.08.2018 um 15:15 schrieb Huang Rui:
> On Wed, Aug 01, 2018 at 01:50:00PM +0530, Nayan Deshmukh wrote:
>
> This needs a commit message.
>
> Thanks,
> Ray
>
>> Signed-off-by: Nayan Deshmukh <nayan26deshmukh at gmail.com>
>> ---
>>   drivers/gpu/drm/scheduler/gpu_scheduler.c | 3 +++
>>   include/drm/gpu_scheduler.h               | 2 ++
>>   2 files changed, 5 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>> index a3eacc35cf98..375f6f7f6a93 100644
>> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
>> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>> @@ -549,6 +549,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
>>   
>>   	trace_drm_sched_job(sched_job, entity);
>>   
>> +	atomic_inc(&entity->rq->sched->num_jobs);
>>   	first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
>>   
>>   	/* first job wakes up scheduler */
>> @@ -836,6 +837,7 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
>>   
>>   	dma_fence_get(&s_fence->finished);
>>   	atomic_dec(&sched->hw_rq_count);
>> +	atomic_dec(&sched->num_jobs);
>>   	drm_sched_fence_finished(s_fence);
>>   
>>   	trace_drm_sched_process_job(s_fence);
>> @@ -953,6 +955,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
>>   	INIT_LIST_HEAD(&sched->ring_mirror_list);
>>   	spin_lock_init(&sched->job_list_lock);
>>   	atomic_set(&sched->hw_rq_count, 0);
>> +	atomic_set(&sched->num_jobs, 0);
>>   	atomic64_set(&sched->job_id_count, 0);
>>   
>>   	/* Each scheduler will run on a seperate kernel thread */
>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>> index a60896222a3e..89881ce974a5 100644
>> --- a/include/drm/gpu_scheduler.h
>> +++ b/include/drm/gpu_scheduler.h
>> @@ -260,6 +260,7 @@ struct drm_sched_backend_ops {
>>    * @job_list_lock: lock to protect the ring_mirror_list.
>>    * @hang_limit: once the hangs by a job crosses this limit then it is marked
>>    *              guilty and it will be considered for scheduling further.
>> + * @num_jobs: the number of jobs in queue in the scheduler
>>    *
>>    * One scheduler is implemented for each hardware ring.
>>    */
>> @@ -277,6 +278,7 @@ struct drm_gpu_scheduler {
>>   	struct list_head		ring_mirror_list;
>>   	spinlock_t			job_list_lock;
>>   	int				hang_limit;
>> +	atomic_t                        num_jobs;
>>   };
>>   
>>   int drm_sched_init(struct drm_gpu_scheduler *sched,
>> -- 
>> 2.14.3
>>
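For readers wondering what the counter is for: later patches in this series use num_jobs as a load metric when distributing entities across schedulers. Below is a minimal, illustrative sketch of how a caller might pick the least-loaded scheduler from a candidate list by comparing the counter; the helper name and its parameters are hypothetical and not part of this patch.

/*
 * Illustrative sketch only: choose the scheduler with the fewest queued
 * jobs by comparing the num_jobs counter added in this patch. The helper
 * name and its parameters are hypothetical, not taken from the series.
 */
static struct drm_gpu_scheduler *
pick_least_loaded_sched(struct drm_gpu_scheduler **sched_list,
			unsigned int num_scheds)
{
	struct drm_gpu_scheduler *best = NULL;
	int min_jobs = INT_MAX;
	unsigned int i;

	for (i = 0; i < num_scheds; i++) {
		/* num_jobs is bumped in drm_sched_entity_push_job() and
		 * dropped in drm_sched_process_job(), so it approximates
		 * the number of jobs currently queued on this scheduler.
		 */
		int jobs = atomic_read(&sched_list[i]->num_jobs);

		if (jobs < min_jobs) {
			min_jobs = jobs;
			best = sched_list[i];
		}
	}

	return best;
}

Since the counter only serves as a heuristic, a plain atomic_t without additional locking against the job queue should be sufficient: a slightly stale value merely results in a slightly less optimal placement.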


