RE: why we need to do infinite RLC_SPM register setting during VM flush


 



Hi Christian,

 

 

Yes, because only pp_one_vf mode can run RGP. And according to Jacob’s comments,

UMD will only enable this feature when running the RGP benchmark; otherwise UMD will not enable it.

Therefore, multi-VF will never enter this case.

 

 

Best Regards

Yintian Tao

 

From: Koenig, Christian <Christian.Koenig@xxxxxxx>
Sent: April 20, 2020 20:42
To: Tao, Yintian <Yintian.Tao@xxxxxxx>; Liu, Monk <Monk.Liu@xxxxxxx>; He, Jacob <Jacob.He@xxxxxxx>
Cc: amd-gfx@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: why we need to do infinite RLC_SPM register setting during VM flush

 

Monk needs to answer this, but I don't think that this will work.

This explanation even sounds like only one VF can use the feature at a time. Is that correct?

Regards,
Christian.

On 20.04.20 14:08, Tao, Yintian wrote:

Hi  Monk, Christian

 

 

According to the discussion with Jacob offline, UMD will only enable the SPM feature when testing RGP.

And under virtualization, only pp_one_vf mode can test RGP.

Therefore, can we directly use MMIO to read/write the RLC_SPM_MC_CNTL register?

 

 

Best Regards

Yintian Tao

 

From: amd-gfx <amd-gfx-bounces@xxxxxxxxxxxxxxxxxxxxx> On Behalf Of Liu, Monk
Sent: April 20, 2020 16:33
To: Koenig, Christian <Christian.Koenig@xxxxxxx>; He, Jacob <Jacob.He@xxxxxxx>
Cc: amd-gfx@xxxxxxxxxxxxxxxxxxxxx
Subject: RE: why we need to do infinite RLC_SPM register setting during VM flush

 

Christian,

 

What we want to do is:

Read the register value from RLC_SPM_MC_CNTL into tmp

Set bits 3:0 of tmp to the VMID

Write tmp back to RLC_SPM_MC_CNTL

 

I didn’t find any PM4 packet on GFX9/10 that can achieve the above goal …

 

 

_____________________________________

Monk Liu|GPU Virtualization Team |AMD


 

From: Christian König <ckoenig.leichtzumerken@xxxxxxxxx>
Sent: Monday, April 20, 2020 4:03 PM
To: Liu, Monk <Monk.Liu@xxxxxxx>; He, Jacob <Jacob.He@xxxxxxx>; Koenig, Christian <Christian.Koenig@xxxxxxx>
Cc: amd-gfx@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: why we need to do infinite RLC_SPM register setting during VM flush

 

I would also prefer to update the SPM VMID register using PM4 packets instead of the current handling.

Regards,
Christian.

On 20.04.20 09:50, Liu, Monk wrote:

I was just trying to explain what I want to do here; no real patch has been formalized yet.

 

_____________________________________

Monk Liu|GPU Virtualization Team |AMD


 

From: He, Jacob <Jacob.He@xxxxxxx>
Sent: Monday, April 20, 2020 3:45 PM
To: Liu, Monk <Monk.Liu@xxxxxxx>; Koenig, Christian <Christian.Koenig@xxxxxxx>
Cc: amd-gfx@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: why we need to do infinite RLC_SPM register setting during VM flush

 

[AMD Official Use Only - Internal Distribution Only]

 

Are you missing a file which adds spm_updated to the vm structure?


From: Liu, Monk <Monk.Liu@xxxxxxx>
Sent: Monday, April 20, 2020 3:32 PM
To: He, Jacob <Jacob.He@xxxxxxx>; Koenig, Christian <Christian.Koenig@xxxxxxx>
Cc: amd-gfx@xxxxxxxxxxxxxxxxxxxxx <amd-gfx@xxxxxxxxxxxxxxxxxxxxx>
Subject: why we need to do infinite RLC_SPM register setting during VM flush

 

Hi Jacob & Christian,

 

As titled, check the patch below:

 

commit 10790a09ea584cc832353a5c2a481012e5e31a13

Author: Jacob He <jacob.he@xxxxxxx>

Date:   Fri Feb 28 20:24:41 2020 +0800

 

    drm/amdgpu: Update SPM_VMID with the job's vmid when application reserves the vmid

 

    SPM accesses the video memory according to SPM_VMID. It should be updated

    with the job's vmid right before the job is scheduled. SPM_VMID is a

    global resource.

 

    Change-Id: Id3881908960398f87e7c95026a54ff83ff826700

    Signed-off-by: Jacob He <jacob.he@xxxxxxx>

    Reviewed-by: Christian König <christian.koenig@xxxxxxx>

 

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 6e6fc8c..ba2236a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1056,8 +1056,12 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
        struct dma_fence *fence = NULL;
        bool pasid_mapping_needed = false;
        unsigned patch_offset = 0;
+       bool update_spm_vmid_needed = (job->vm && (job->vm->reserved_vmid[vmhub] != NULL));
        int r;

+       if (update_spm_vmid_needed && adev->gfx.rlc.funcs->update_spm_vmid)
+               adev->gfx.rlc.funcs->update_spm_vmid(adev, job->vmid);
+
        if (amdgpu_vmid_had_gpu_reset(adev, id)) {
                gds_switch_needed = true;
                vm_flush_needed = true;
 

this update_spm_vmid() looks like complete overkill to me; we only need to do it once for its VM …

 

in SR-IOV, the register reads/writes for update_spm_vmid() are now carried out by the KIQ, so there is too much burden on the KIQ for such unnecessary jobs …

 

I want to change it to only do it once per VM, like:

 

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 6e6fc8c..ba2236a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1056,8 +1056,12 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
        struct dma_fence *fence = NULL;
        bool pasid_mapping_needed = false;
        unsigned patch_offset = 0;
+       bool update_spm_vmid_needed = (job->vm && (job->vm->reserved_vmid[vmhub] != NULL));
        int r;

+       if (update_spm_vmid_needed && adev->gfx.rlc.funcs->update_spm_vmid &&
+           !job->vm->spm_updated) {
+               adev->gfx.rlc.funcs->update_spm_vmid(adev, job->vmid);
+               job->vm->spm_updated = true;
+       }

        if (amdgpu_vmid_had_gpu_reset(adev, id)) {
                gds_switch_needed = true;
                vm_flush_needed = true;
 

What do you think?

 

P.S.: the best way would be to let the GFX ring itself do the update_spm_vmid() instead of letting the CPU do it, e.g. by putting more PM4 commands in the VM-flush packets ….

But I prefer the simple way first, like I demonstrated above.

_____________________________________

Monk Liu|GPU Virtualization Team |AMD


 





_______________________________________________
amd-gfx mailing list
amd-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

 

 

