Re: [PATCH v2] drm/amdgpu: set vm_update_mode=0 as default for Sienna Cichlid in SRIOV case

On 2022-10-10 at 05:01, Slivka, Danijel wrote:
[AMD Official Use Only - General]

Thank you for the input.

Regarding the function amdgpu_virt_mmio_blocked: it does not return the correct result here, because it only checks whether a read operation succeeds, and reads are always allowed under VF MMIO access protection (only writes are blocked).
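
For illustration, a read-based probe of that kind boils down to something like the following (a sketch, not the exact upstream body; the function name and register offset are illustrative):

/* A fully blocked BAR returns all ones on PCIe reads, so a read probe
 * like amdgpu_virt_mmio_blocked() can only detect *read* blocking.
 * VF MMIO access protection still allows reads and only blocks
 * writes, so a probe like this returns false on such a VF.
 */
static bool mmio_read_probe_blocked(struct amdgpu_device *adev)
{
	/* 0xc040 stands in for a scratch register dword offset. */
	return RREG32_NO_KIQ(0xc040) == 0xffffffff;
}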

Regarding the other suggestion, to set this in a capability flag: the flag could only be set after VF MMIO access protection is enabled, which happens after the amdgpu driver is loaded; it would therefore require reinitializing the VM instances to switch the vm_update function from CPU to SDMA.

I think the initialization sequence would only be relevant if you wanted to implement a runtime test for MMIO access. It doesn't matter if the flag is set by the driver itself based on prior knowledge about the HW and IFWI; the flag just says "this GPU doesn't allow MMIO access after initialization". My point is really that this prior knowledge needs to live in HW- or IP-version-specific code to make it maintainable and easily extensible for future HW. We should not have such HW-specific knowledge in the generic amdgpu_vm code.

That said, VMs only get created when user mode processes open the /dev/dri/* device file. This is after the initialization is complete. I don't think you'd ever be in a situation where you need to fix up a VM that was created during driver initialization.
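
For illustration, a minimal sketch of what the generic side of that could look like. All names here are assumptions, not the actual patch: a hypothetical AMDGPU_VF_MMIO_ACCESS_PROTECTION bit in adev->virt.caps and a hypothetical amdgpu_virt_mmio_access_protection() accessor (the ASIC-specific side is sketched further down, after the original suggestion):

#ifdef CONFIG_X86_64
	if (amdgpu_vm_update_mode == -1) {
		/* CPU page table updates need an MMIO write for the
		 * HDP flush, so only prefer them when the VF allows
		 * MMIO access. Which ASICs set the capability bit is
		 * decided elsewhere, in ASIC-specific code.
		 */
		if (amdgpu_gmc_vram_full_visible(&adev->gmc) &&
		    !amdgpu_virt_mmio_access_protection(adev))
			adev->vm_manager.vm_update_mode =
				AMDGPU_VM_USE_CPU_FOR_COMPUTE;
		else
			adev->vm_manager.vm_update_mode = 0;
	}
#endif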

Regards,
  Felix



Would it be better to keep this fix in amdgpu_vm_manager_init(), where the vm_update_mode is initially set?

BR,
Danijel

-----Original Message-----
From: Kuehling, Felix <Felix.Kuehling@xxxxxxx>
Sent: Wednesday, October 5, 2022 6:43 PM
To: amd-gfx@xxxxxxxxxxxxxxxxxxxxx; Slivka, Danijel <Danijel.Slivka@xxxxxxx>
Subject: Re: [PATCH v2] drm/amdgpu: set vm_update_mode=0 as default for Sienna Cichlid in SRIOV case

On 2022-10-05 at 07:03, Danijel Slivka wrote:
CPU page table updates have issues with the HDP flush, because VF MMIO
access protection does not allow writes to
mmBIF_BX_DEV0_EPF0_VF0_HDP_MEM_COHERENCY_FLUSH_CNTL during SRIOV runtime.

Signed-off-by: Danijel Slivka <danijel.slivka@xxxxxxx>
---
   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 4 +++-
   1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 83b0c5d86e48..32088ac0666c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2338,7 +2338,9 @@ void amdgpu_vm_manager_init(struct amdgpu_device *adev)
        */
   #ifdef CONFIG_X86_64
       if (amdgpu_vm_update_mode == -1) {
-             if (amdgpu_gmc_vram_full_visible(&adev->gmc))
+             if (amdgpu_gmc_vram_full_visible(&adev->gmc) &&
+                 !(adev->asic_type == CHIP_SIENNA_CICHLID &&
+                 amdgpu_sriov_vf(adev)))
This would need at least a code comment. But I'd prefer a more general solution that expresses that some ASICs don't allow any MMIO access under SRIOV.

I found this function defined in amdgpu_virt.c/h: bool amdgpu_virt_mmio_blocked(struct amdgpu_device *adev). Would it return the correct result, and could you use it here instead of a hard-coded asic_type?

Or maybe this could be added as a flag in (adev)->virt.caps and get initialized in some ASIC-specific code path.
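
As a rough illustration of that idea, with all names hypothetical: a caps bit defined next to the other virt capabilities and set once where the SOC is identified (e.g. somewhere in nv.c), so the prior knowledge stays out of the generic VM code:

/* amdgpu_virt.h: hypothetical capability bit and accessor */
#define AMDGPU_VF_MMIO_ACCESS_PROTECTION	(1 << 5)

#define amdgpu_virt_mmio_access_protection(adev) \
	((adev)->virt.caps & AMDGPU_VF_MMIO_ACCESS_PROTECTION)

/* nv.c: record the prior knowledge about this SOC's IFWI once, in
 * ASIC-specific code, instead of in generic amdgpu_vm code.
 */
static void nv_init_virt_caps(struct amdgpu_device *adev)
{
	if (adev->asic_type == CHIP_SIENNA_CICHLID &&
	    amdgpu_sriov_vf(adev))
		adev->virt.caps |= AMDGPU_VF_MMIO_ACCESS_PROTECTION;
}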

Regards,
    Felix


                       adev->vm_manager.vm_update_mode =
                               AMDGPU_VM_USE_CPU_FOR_COMPUTE;
               else
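
To make the failure mode from the commit message concrete: committing CPU page table updates ends with an HDP flush, which without a ring to emit the write is a plain MMIO register write, exactly what VF MMIO access protection blocks. A sketch of that commit step, based on my reading of the upstream CPU update path (amdgpu_vm_cpu.c):

/* Committing CPU page table updates ends in
 * amdgpu_device_flush_hdp(), which performs an MMIO write to the HDP
 * memory coherency flush register -- the write that VF MMIO access
 * protection rejects at SRIOV runtime.
 */
static int amdgpu_vm_cpu_commit(struct amdgpu_vm_update_params *p,
				struct dma_fence **fence)
{
	/* Flush HDP so the GPU sees the CPU's page table writes. */
	mb();
	amdgpu_device_flush_hdp(p->adev, NULL);
	return 0;
}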


