Re: [PATCH 2/2] drm/sched: Reverse run-queue priority enumeration

On 2023-11-24 03:04, Christian König wrote:
> Am 24.11.23 um 06:27 schrieb Luben Tuikov:
>> Reverse run-queue priority enumeration such that the highest priority is now 0,
>> and for each consecutive integer the priority diminishes.
>>
>> Run-queues correspond to priorities. To an external observer, a scheduler
>> created with a single run-queue and another created with
>> DRM_SCHED_PRIORITY_COUNT run-queues should always schedule
>> sched->sched_rq[0] with the same "priority", since that run-queue index
>> exists in both schedulers, whether a scheduler has one run-queue or many.
>> This patch makes it so.
>>
>> In other words, the "priority" of sched->sched_rq[n], n >= 0, is the same for
>> any scheduler created with any allowable number of run-queues (priorities), 0
>> to DRM_SCHED_PRIORITY_COUNT.
>>
>> Cc: Rob Clark <robdclark@xxxxxxxxx>
>> Cc: Abhinav Kumar <quic_abhinavk@xxxxxxxxxxx>
>> Cc: Dmitry Baryshkov <dmitry.baryshkov@xxxxxxxxxx>
>> Cc: Danilo Krummrich <dakr@xxxxxxxxxx>
>> Cc: Alex Deucher <alexander.deucher@xxxxxxx>
>> Cc: Christian König <christian.koenig@xxxxxxx>
>> Cc: linux-arm-msm@xxxxxxxxxxxxxxx
>> Cc: freedreno@xxxxxxxxxxxxxxxxxxxxx
>> Cc: dri-devel@xxxxxxxxxxxxxxxxxxxxx
>> Signed-off-by: Luben Tuikov <ltuikov89@xxxxxxxxx>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  2 +-
>>   drivers/gpu/drm/msm/msm_gpu.h            |  2 +-
>>   drivers/gpu/drm/scheduler/sched_entity.c |  7 ++++---
>>   drivers/gpu/drm/scheduler/sched_main.c   | 15 +++++++--------
>>   include/drm/gpu_scheduler.h              |  6 +++---
>>   5 files changed, 16 insertions(+), 16 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> index 1a25931607c514..71a5cf37b472d4 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> @@ -325,7 +325,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched)
>>   	int i;
>>   
>>   	/* Signal all jobs not yet scheduled */
>> -	for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) {
>> +	for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) {
>>   		struct drm_sched_rq *rq = sched->sched_rq[i];
>>   		spin_lock(&rq->lock);
>>   		list_for_each_entry(s_entity, &rq->entities, list) {
>> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
>> index eb0c97433e5f8a..2bfcb222e35338 100644
>> --- a/drivers/gpu/drm/msm/msm_gpu.h
>> +++ b/drivers/gpu/drm/msm/msm_gpu.h
>> @@ -347,7 +347,7 @@ struct msm_gpu_perfcntr {
>>    * DRM_SCHED_PRIORITY_KERNEL priority level is treated specially in some
>>    * cases, so we don't use it (no need for kernel generated jobs).
>>    */
>> -#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_LOW)
>> +#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH)
>>   
>>   /**
>>    * struct msm_file_private - per-drm_file context
>> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
>> index cb7445be3cbb4e..6e2b02e45e3a32 100644
>> --- a/drivers/gpu/drm/scheduler/sched_entity.c
>> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
>> @@ -81,14 +81,15 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
>>   		 */
>>   		pr_warn("%s: called with uninitialized scheduler\n", __func__);
>>   	} else if (num_sched_list) {
>> -		/* The "priority" of an entity cannot exceed the number
>> -		 * of run-queues of a scheduler.
>> +		/* The "priority" of an entity cannot exceed the number of
>> +		 * run-queues of a scheduler. Choose the lowest priority
>> +		 * available.
>>   		 */
>>   		if (entity->priority >= sched_list[0]->num_rqs) {
>>   			drm_err(sched_list[0], "entity with out-of-bounds priority:%u num_rqs:%u\n",
>>   				entity->priority, sched_list[0]->num_rqs);
>>   			entity->priority = max_t(s32, (s32) sched_list[0]->num_rqs - 1,
>> -						 (s32) DRM_SCHED_PRIORITY_LOW);
>> +						 (s32) DRM_SCHED_PRIORITY_KERNEL);
> 
> That seems to be a no-op. You basically say max_t(.., num_rqs - 1, 0),
> which will always be num_rqs - 1.

This protects against num_rqs being equal to 0, in which case we select KERNEL (0).
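
Concretely (example values only):

    num_rqs == 4:  max_t(s32, 4 - 1, 0) == 3  /* lowest valid rq index */
    num_rqs == 0:  max_t(s32, 0 - 1, 0) == 0  /* DRM_SCHED_PRIORITY_KERNEL, not -1 */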

This comes from "[PATCH] drm/sched: Fix bounds limiting when given a malformed entity"
which I sent yesterday (Message-ID: <20231123122422.167832-2-ltuikov89@xxxxxxxxx>).

Could you R-B that patch too?

> 
> Apart from that looks good to me.

Okay, could you R-B this patch then?
-- 
Regards,
Luben


