On 02/05/2024 16:10, Alex Deucher wrote:
On Thu, May 2, 2024 at 1:51 AM Sharma, Shashank <shashank.sharma@xxxxxxx> wrote:
On 01/05/2024 22:44, Alex Deucher wrote:
On Fri, Apr 26, 2024 at 10:27 AM Shashank Sharma <shashank.sharma@xxxxxxx> wrote:
From: Arvind Yadav <arvind.yadav@xxxxxxx>
This patch makes the changes required to enable
compute workload support using the existing
usermode queue infrastructure.
Cc: Alex Deucher <alexander.deucher@xxxxxxx>
Cc: Christian Koenig <christian.koenig@xxxxxxx>
Signed-off-by: Arvind Yadav <arvind.yadav@xxxxxxx>
Signed-off-by: Shashank Sharma <shashank.sharma@xxxxxxx>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_userqueue.c | 3 ++-
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 2 ++
drivers/gpu/drm/amd/amdgpu/mes_v11_0_userqueue.c | 10 +++++++++-
include/uapi/drm/amdgpu_drm.h | 1 +
4 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_userqueue.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_userqueue.c
index e516487e8db9..78d34fa7a0b9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_userqueue.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_userqueue.c
@@ -189,7 +189,8 @@ amdgpu_userqueue_create(struct drm_file *filp, union drm_amdgpu_userq *args)
int qid, r = 0;
/* Usermode queues are only supported for GFX/SDMA engines as of now */
- if (args->in.ip_type != AMDGPU_HW_IP_GFX && args->in.ip_type != AMDGPU_HW_IP_DMA) {
+ if (args->in.ip_type != AMDGPU_HW_IP_GFX && args->in.ip_type != AMDGPU_HW_IP_DMA
+ && args->in.ip_type != AMDGPU_HW_IP_COMPUTE) {
DRM_ERROR("Usermode queue doesn't support IP type %u\n", args->in.ip_type);
return -EINVAL;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index 525bd0f4d3f7..27b86f7fe949 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -1350,6 +1350,7 @@ static int gfx_v11_0_sw_init(void *handle)
adev->gfx.mec.num_pipe_per_mec = 4;
adev->gfx.mec.num_queue_per_pipe = 4;
adev->userq_funcs[AMDGPU_HW_IP_GFX] = &userq_mes_v11_0_funcs;
+ adev->userq_funcs[AMDGPU_HW_IP_COMPUTE] = &userq_mes_v11_0_funcs;
break;
case IP_VERSION(11, 0, 1):
case IP_VERSION(11, 0, 4):
@@ -1362,6 +1363,7 @@ static int gfx_v11_0_sw_init(void *handle)
adev->gfx.mec.num_pipe_per_mec = 4;
adev->gfx.mec.num_queue_per_pipe = 4;
adev->userq_funcs[AMDGPU_HW_IP_GFX] = &userq_mes_v11_0_funcs;
+ adev->userq_funcs[AMDGPU_HW_IP_COMPUTE] = &userq_mes_v11_0_funcs;
break;
default:
adev->gfx.me.num_me = 1;
diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0_userqueue.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0_userqueue.c
index a5e270eda37b..d61d80f86003 100644
--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0_userqueue.c
+++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0_userqueue.c
@@ -183,7 +183,8 @@ static int mes_v11_0_userq_create_ctx_space(struct amdgpu_userq_mgr *uq_mgr,
}
/* We don't need to set other FW objects for SDMA queues */
- if (queue->queue_type == AMDGPU_HW_IP_DMA)
+ if ((queue->queue_type == AMDGPU_HW_IP_DMA) ||
+ (queue->queue_type == AMDGPU_HW_IP_COMPUTE))
return 0;
/* Shadow and GDS objects come directly from userspace */
@@ -246,6 +247,13 @@ static int mes_v11_0_userq_mqd_create(struct amdgpu_userq_mgr *uq_mgr,
userq_props->use_doorbell = true;
userq_props->doorbell_index = queue->doorbell_index;
+ if (queue->queue_type == AMDGPU_HW_IP_COMPUTE) {
+ userq_props->eop_gpu_addr = mqd_user->eop_va;
+ userq_props->hqd_pipe_priority = AMDGPU_GFX_PIPE_PRIO_NORMAL;
+ userq_props->hqd_queue_priority = AMDGPU_GFX_QUEUE_PRIORITY_MINIMUM;
+ userq_props->hqd_active = false;
+ }
+
queue->userq_prop = userq_props;
r = mqd_hw_default->init_mqd(adev, (void *)queue->mqd.cpu_ptr, userq_props);
diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
index 22f56a30f7cb..676792ad3618 100644
--- a/include/uapi/drm/amdgpu_drm.h
+++ b/include/uapi/drm/amdgpu_drm.h
@@ -375,6 +375,7 @@ struct drm_amdgpu_userq_mqd {
* sized.
*/
__u64 csa_va;
+ __u64 eop_va;
};
Let's add a new mqd descriptor for compute since it's different from
gfx and sdma.
The only difference is this object (vs. the csa and gds objects);
apart from that, the mqd is the same, since they are all MES based.
Am I missing something here?
The scheduling entity is irrelevant. The mqd is defined by the engine
itself; see v11_structs.h, for example. Gfx has one set of requirements,
compute has another, and SDMA yet another. VPE and VCN
also have mqds. When we add support for them in the future, they may
have additional requirements. I want to make it clear in the
interface what additional data are required for each ring type.
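For illustration, separate per-IP descriptors could look something like
this (a sketch only; the struct and field names here are hypothetical,
not a settled UAPI):

/* Hypothetical sketch: each ring type carries only the extra data
 * its engine-defined MQD actually needs.
 */
struct drm_amdgpu_userq_mqd_gfx11 {
	__u64 shadow_va;	/* gfx needs the shadow object... */
	__u64 csa_va;		/* ...and the CSA object */
};

struct drm_amdgpu_userq_mqd_compute_gfx11 {
	__u64 eop_va;		/* compute only needs the EOP buffer */
};

/* SDMA needs none of these, so it would get no extra descriptor. */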
Yes, this comment was also based on my first understanding, so please
ignore it. We are aligned on the IP-specific MQD structures now.
Also, can we handle the eop buffer as part of the
kernel metadata for compute user queues rather than having the user
specify it?
Sure, we can do it.
Thinking about it more, I think the eop buffer has to be in the user's
GPU virtual address space, so it probably makes more sense for the user
to allocate it. Ideally, though, we'd take an extra refcount on it
while the queue is active so the user can't free it out from under the
queue; that can probably be a future improvement.
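Something along these lines, roughly (a sketch of that future
improvement; lookup_eop_bo_from_va() is a hypothetical helper, since
resolving a GPU VA back to its BO is glossed over here):

/* Hold a reference on the user-allocated EOP buffer for the queue's
 * lifetime so userspace cannot free it while the queue is active.
 */
static struct amdgpu_bo *userq_hold_eop_bo(struct amdgpu_vm *vm, u64 eop_va)
{
	struct amdgpu_bo *bo;

	bo = lookup_eop_bo_from_va(vm, eop_va);	/* hypothetical helper */
	if (!bo)
		return NULL;

	/* paired with amdgpu_bo_unref() on queue destroy */
	return amdgpu_bo_ref(bo);
}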
I was also thinking that since this BO is expected to be created in the
user's VM (VMID != 0), keeping it in userspace aligns it with the other
IP-specific MQD objects.
Let's keep the allocation in userspace, but I will create a separate
compute MQD object.
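Roughly, the create path would then interpret the incoming MQD blob per
IP type; a sketch only, reusing names from this patch plus the
hypothetical compute descriptor above (mqd_blob stands in for the
copied-in userspace data):

/* In mes_v11_0_userq_mqd_create(): cast the user blob based on the
 * queue type instead of sharing one struct across all IPs.
 */
if (queue->queue_type == AMDGPU_HW_IP_COMPUTE) {
	struct drm_amdgpu_userq_mqd_compute_gfx11 *compute_mqd = mqd_blob;

	userq_props->eop_gpu_addr = compute_mqd->eop_va;
	userq_props->hqd_pipe_priority = AMDGPU_GFX_PIPE_PRIO_NORMAL;
	userq_props->hqd_queue_priority = AMDGPU_GFX_QUEUE_PRIORITY_MINIMUM;
	userq_props->hqd_active = false;
}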
- Shashank