Are you missing a file which adds spm_updated to the vm structure?
From: Liu, Monk <Monk.Liu@xxxxxxx>
Sent: Monday, April 20, 2020 3:32 PM
To: He, Jacob <Jacob.He@xxxxxxx>; Koenig, Christian <Christian.Koenig@xxxxxxx>
Cc: amd-gfx@xxxxxxxxxxxxxxxxxxxxx <amd-gfx@xxxxxxxxxxxxxxxxxxxxx>
Subject: why we need to do infinite RLC_SPM register setting during VM flush

Hi Jacob & Christian,
As titled, please check the patch below:
commit 10790a09ea584cc832353a5c2a481012e5e31a13
Author: Jacob He <jacob.he@xxxxxxx>
Date: Fri Feb 28 20:24:41 2020 +0800
drm/amdgpu: Update SPM_VMID with the job's vmid when application reserves the vmid
SPM accesses video memory according to SPM_VMID. It should be updated with the job's vmid right before the job is scheduled. SPM_VMID is a global resource.
Change-Id: Id3881908960398f87e7c95026a54ff83ff826700
Signed-off-by: Jacob He <jacob.he@xxxxxxx>
Reviewed-by: Christian König <christian.koenig@xxxxxxx>
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 6e6fc8c..ba2236a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1056,8 +1056,12 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
        struct dma_fence *fence = NULL;
        bool pasid_mapping_needed = false;
        unsigned patch_offset = 0;
+       bool update_spm_vmid_needed = (job->vm && (job->vm->reserved_vmid[vmhub] != NULL));
        int r;

+       if (update_spm_vmid_needed && adev->gfx.rlc.funcs->update_spm_vmid)
+               adev->gfx.rlc.funcs->update_spm_vmid(adev, job->vmid);
+
        if (amdgpu_vmid_had_gpu_reset(adev, id)) {
                gds_switch_needed = true;
                vm_flush_needed = true;
This update_spm_vmid() call looks like complete overkill to me; we only need to do it once per VM …
Under SRIOV, the register reads/writes done by update_spm_vmid() are carried out through KIQ, so this puts far too much burden on KIQ for such unnecessary work …
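For reference, each call is a read-modify-write of RLC_SPM_MC_CNTL; the GFX9 callback looks roughly like the sketch below (treat the exact macros as an approximation), so every VM flush currently turns into two register accesses that under SRIOV both have to be routed through KIQ:

static void gfx_v9_0_update_spm_vmid(struct amdgpu_device *adev, unsigned vmid)
{
        u32 data;

        /* one register read ... */
        data = RREG32_SOC15(GC, 0, mmRLC_SPM_MC_CNTL);

        data &= ~RLC_SPM_MC_CNTL__RLC_SPM_VMID_MASK;
        data |= (vmid << RLC_SPM_MC_CNTL__RLC_SPM_VMID__SHIFT) &
                RLC_SPM_MC_CNTL__RLC_SPM_VMID_MASK;

        /* ... and one register write, both going through KIQ under SRIOV */
        WREG32_SOC15(GC, 0, mmRLC_SPM_MC_CNTL, data);
}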
I want to change it so we only do this once per VM, like:
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 6e6fc8c..ba2236a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1056,8 +1056,12 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
        struct dma_fence *fence = NULL;
        bool pasid_mapping_needed = false;
        unsigned patch_offset = 0;
+       bool update_spm_vmid_needed = (job->vm && (job->vm->reserved_vmid[vmhub] != NULL));
        int r;

+       if (update_spm_vmid_needed && adev->gfx.rlc.funcs->update_spm_vmid && !vm->spm_updated) {
+               adev->gfx.rlc.funcs->update_spm_vmid(adev, job->vmid);
+               vm->spm_updated = true;
+       }
+
        if (amdgpu_vmid_had_gpu_reset(adev, id)) {
                gds_switch_needed = true;
                vm_flush_needed = true;
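The hunk that adds the flag itself is not shown above; roughly it would be something like the following sketch (exact placement in amdgpu_vm.h and the reset point are only illustrative):

/* sketch: drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h */
struct amdgpu_vm {
        ...
        /* set once SPM_VMID has been programmed for this VM's reserved vmid,
         * so amdgpu_vm_flush() only asks the RLC to update it one time */
        bool                    spm_updated;
};

/* sketch: clear the flag when the VM is created, in amdgpu_vm_init() */
        vm->spm_updated = false;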
What do you think?
P.S.: The best way would be to let the GFX ring itself do the update_spm_vmid() instead of letting the CPU do it, e.g. by putting more PM4 commands into the VM-FLUSH packets …. But I prefer to start with the simple way demonstrated above.
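Roughly, that ring-based idea would look something like the sketch below. This is only an illustration: the register macros are the GFX9 ones, RLC_SPM_MC_CNTL is read-modify-written in the CPU path so a plain ring write would clobber the register's other fields, and a real version would live behind a ring/RLC callback rather than directly in amdgpu_vm.c.

/* hypothetical sketch: let the ring program SPM_VMID itself during the flush */
if (update_spm_vmid_needed && !job->vm->spm_updated && ring->funcs->emit_wreg) {
        /* WARNING: ignores the other bits of RLC_SPM_MC_CNTL; a real
         * implementation needs a read-modify-write capable packet */
        amdgpu_ring_emit_wreg(ring,
                              SOC15_REG_OFFSET(GC, 0, mmRLC_SPM_MC_CNTL),
                              job->vmid << RLC_SPM_MC_CNTL__RLC_SPM_VMID__SHIFT);
        job->vm->spm_updated = true;
}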
_____________________________________
Monk Liu | GPU Virtualization Team | AMD
_______________________________________________
amd-gfx mailing list
amd-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/amd-gfx