On 8/24/2021 2:07 PM, Christian König wrote:
Am 24.08.21 um 13:57 schrieb Das, Nirmoy:
Hi Christian,
On 8/24/2021 8:10 AM, Christian König wrote:
I haven't followed the previous discussion, but it looks like this
change is based on a misunderstanding.
In the previous discussion I had suggested adding a new DRM priority, as I
didn't see any other way to map the priority provided by userspace to
this new third hw priority.
Do you think we should use other information from userspace, like the
queue ID, to determine the hardware priority?
If I'm not completely mistaken, we have entirely dropped the concept of
exposing multiple queues/instances.
Yes, that is my understanding too.
What we should probably do is to use the (cleaned up) UAPI enum for
init_priority and override_priority instead of the drm scheduler enums.
I went through the drm code, and now I see what you mean. What we are
doing now is mapping AMDGPU_CTX_PRIORITY_* to DRM_SCHED_PRIORITY_* and
then to a hw priority, which is not nice.
We should instead map AMDGPU_CTX_PRIORITY_* to the hw priority directly.
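To illustrate the direct mapping being proposed, here is a minimal sketch in C. The enum names, values, and the helper function are illustrative assumptions modeled loosely on the UAPI context priorities, not the actual kernel code; the point is only that userspace priority goes straight to a hardware priority without passing through the scheduler's DRM_SCHED_PRIORITY_* enum.

```c
#include <assert.h>

/* Mirrors the UAPI context priorities in spirit; the exact names and
 * values here are assumptions for illustration only. */
enum ctx_priority {
	CTX_PRIORITY_VERY_LOW,
	CTX_PRIORITY_LOW,
	CTX_PRIORITY_NORMAL,
	CTX_PRIORITY_HIGH,
	CTX_PRIORITY_VERY_HIGH,
};

/* Hypothetical three-level hardware priority, matching the "new 3rd hw
 * priority" mentioned in the thread. */
enum hw_priority {
	HW_PRIO_LOW,
	HW_PRIO_NORMAL,
	HW_PRIO_HIGH,
};

/* Direct userspace -> hardware mapping: no scheduler enum in between,
 * so adding a hw priority level does not require touching
 * drm_sched_priority at all. */
static enum hw_priority ctx_to_hw_prio(enum ctx_priority p)
{
	switch (p) {
	case CTX_PRIORITY_VERY_LOW:
	case CTX_PRIORITY_LOW:
		return HW_PRIO_LOW;
	case CTX_PRIORITY_HIGH:
	case CTX_PRIORITY_VERY_HIGH:
		return HW_PRIO_HIGH;
	default:
		return HW_PRIO_NORMAL;
	}
}
```

With a mapping like this, both HIGH and VERY_HIGH userspace levels can land on the same hardware level (or different ones) purely as a driver decision, without widening the software scheduler's priority enum.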
Regards,
Nirmoy
Regards,
Christian.
Regards,
Nirmoy
Those are the software priorities used in the scheduler, but what you
are working on are the hardware priorities.
Those are two completely different things which we shouldn't mix up.
Regards,
Christian.
Am 24.08.21 um 07:55 schrieb Satyajit Sahu:
Add a new priority level, DRM_SCHED_PRIORITY_VERY_HIGH.
Signed-off-by: Satyajit Sahu <satyajit.sahu@xxxxxxx>
---
include/drm/gpu_scheduler.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index d18af49fd009..d0e5e234da5f 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -40,6 +40,7 @@ enum drm_sched_priority {
DRM_SCHED_PRIORITY_MIN,
DRM_SCHED_PRIORITY_NORMAL,
DRM_SCHED_PRIORITY_HIGH,
+ DRM_SCHED_PRIORITY_VERY_HIGH,
DRM_SCHED_PRIORITY_KERNEL,
DRM_SCHED_PRIORITY_COUNT,