Thanks Alex. I saw that function, but it only reads the link capabilities. What the override needs is the link status (PCI_EXP_LNKSTA), i.e. the speed and width that were actually negotiated.
> I'd rather not access pci config space from the powerplay code because it doesn't work under virtualization.

[HK] In that case I will modify amdgpu_device_get_pcie_info() to also read PCI_EXP_LNKSTA and store that information for future use.
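For illustration, a rough sketch of what that change could look like (the helper name amdgpu_device_cache_pcie_lnksta() and the pcie_cur_speed / pcie_cur_width fields are hypothetical placeholders, not existing members of adev->pm):

#include <linux/pci.h>

/* Sketch only: cache the negotiated link state once in core code so that
 * powerplay never has to touch PCI config space itself (which, as noted,
 * doesn't work under virtualization).
 */
static void amdgpu_device_cache_pcie_lnksta(struct amdgpu_device *adev)
{
        u16 lnkstat = 0;

        /* Negotiated link status, not the capability maximum */
        pcie_capability_read_word(adev->pdev, PCI_EXP_LNKSTA, &lnkstat);

        adev->pm.pcie_cur_speed = lnkstat & PCI_EXP_LNKSTA_CLS;        /* hypothetical field */
        adev->pm.pcie_cur_width = (lnkstat & PCI_EXP_LNKSTA_NLW) >>
                                  PCI_EXP_LNKSTA_NLW_SHIFT;            /* hypothetical field */
}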
Best Regards,
Harish
From: Alex Deucher <alexdeucher@xxxxxxxxx>
Sent: Friday, February 1, 2019 5:09 PM
To: Kasiviswanathan, Harish
Cc: amd-gfx@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: [PATCH] drm/amd/powerplay: add override pcie parameters for Vega20 (v2)

On Fri, Feb 1, 2019 at 4:17 PM Kasiviswanathan, Harish
<Harish.Kasiviswanathan@xxxxxxx> wrote:
>
> v2: Use PCIe link status instead of link capability
>     Send override message after SMU enable features
>
> Change-Id: Iea2a1ac595cf63a9528ff1e9a6955d14d8c3a6d5
> Signed-off-by: Harish Kasiviswanathan <Harish.Kasiviswanathan@xxxxxxx>
> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c | 93 +++++++++++++++-------
>  1 file changed, 64 insertions(+), 29 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
> index da022ca..d166f8c 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
> @@ -771,40 +771,75 @@ static int vega20_init_smc_table(struct pp_hwmgr *hwmgr)
>         return 0;
>  }
>
> +/*
> + * Override PCIe link speed and link width for DPM Level 1. PPTable entries
> + * reflect the ASIC capabilities and not the system capabilities. For e.g.
> + * Vega20 board in a PCI Gen3 system. In this case, when SMU's tries to switch
> + * to DPM1, it fails as system doesn't support Gen4.
> + */
>  static int vega20_override_pcie_parameters(struct pp_hwmgr *hwmgr)
>  {
>         struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev);
> -       uint32_t pcie_speed = 0, pcie_width = 0, pcie_arg;
> +       uint32_t pcie_gen = 0, pcie_width = 0, smu_pcie_arg;
>         int ret;
> +       uint16_t lnkstat;
> +
> +       pcie_capability_read_word(adev->pdev, PCI_EXP_LNKSTA, &lnkstat);
>
> -       if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
> -               pcie_speed = 16;
> -       else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
> -               pcie_speed = 8;
> -       else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2)
> -               pcie_speed = 5;
> -       else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1)
> -               pcie_speed = 2;
> -
> -       if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X32)
> -               pcie_width = 32;
> -       else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X16)
> -               pcie_width = 16;
> -       else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X12)
> -               pcie_width = 12;
> -       else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X8)
> -               pcie_width = 8;
> -       else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X4)
> +       switch (lnkstat & PCI_EXP_LNKSTA_CLS) {
> +       case PCI_EXP_LNKSTA_CLS_2_5GB:
> +               pcie_gen = 0;
> +               break;
> +       case PCI_EXP_LNKSTA_CLS_5_0GB:
> +               pcie_gen = 1;
> +               break;
> +       case PCI_EXP_LNKSTA_CLS_8_0GB:
> +               pcie_gen = 2;
> +               break;
> +       case PCI_EXP_LNKSTA_CLS_16_0GB:
> +               pcie_gen = 3;
> +               break;
> +       default:
> +               pr_warn("Unknown PCI link speed %x\n",
> +                       lnkstat & PCI_EXP_LNKSTA_CLS);
> +       }
> +
> +
> +       switch ((lnkstat & PCI_EXP_LNKSTA_NLW) >> PCI_EXP_LNKSTA_NLW_SHIFT) {
> +       case 32:
> +               pcie_width = 7;
> +               break;
> +       case 16:
> +               pcie_width = 6;
> +               break;
> +       case 12:
> +               pcie_width = 5;
> +               break;
> +       case 8:
>                 pcie_width = 4;
> -       else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X2)
> +               break;
> +       case 4:
> +               pcie_width = 3;
> +               break;
> +       case 2:
>                 pcie_width = 2;
> -       else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X1)
> +               break;
> +       case 1:
>                 pcie_width = 1;
> +               break;
> +       default:
> +               pr_warn("Unknown PCI link width %x\n",
> +                       lnkstat & PCI_EXP_LNKSTA_CLS);
> +       }
>

We already get the link caps for both the device and the platform in
amdgpu_device_get_pcie_info().  Is that info not adequate?  I'd rather not
access pci config space from the powerplay code because it doesn't work
under virtualization.

Alex

> -       pcie_arg = pcie_width | (pcie_speed << 8);
> -
> +       /* Bit 31:16: LCLK DPM level. 0 is DPM0, and 1 is DPM1
> +        * Bit 15:8: PCIE GEN, 0 to 3 corresponds to GEN1 to GEN4
> +        * Bit 7:0: PCIE lane width, 1 to 7 corresponds is x1 to x32
> +        */
> +       smu_pcie_arg = (1 << 16) | (pcie_gen << 8) | pcie_width;
>         ret = smum_send_msg_to_smc_with_parameter(hwmgr,
> -                       PPSMC_MSG_OverridePcieParameters, pcie_arg);
> +                       PPSMC_MSG_OverridePcieParameters, smu_pcie_arg);
> +
>         PP_ASSERT_WITH_CODE(!ret,
>                 "[OverridePcieParameters] Attempt to override pcie params failed!",
>                 return ret);
> @@ -1611,11 +1646,6 @@ static int vega20_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
>                 "[EnableDPMTasks] Failed to initialize SMC table!",
>                 return result);
>
> -       result = vega20_override_pcie_parameters(hwmgr);
> -       PP_ASSERT_WITH_CODE(!result,
> -               "[EnableDPMTasks] Failed to override pcie parameters!",
> -               return result);
> -
>         result = vega20_run_btc(hwmgr);
>         PP_ASSERT_WITH_CODE(!result,
>                 "[EnableDPMTasks] Failed to run btc!",
> @@ -1631,6 +1661,11 @@
>                 "[EnableDPMTasks] Failed to enable all smu features!",
>                 return result);
>
> +       result = vega20_override_pcie_parameters(hwmgr);
> +       PP_ASSERT_WITH_CODE(!result,
> +               "[EnableDPMTasks] Failed to override pcie parameters!",
> +               return result);
> +
>         result = vega20_notify_smc_display_change(hwmgr);
>         PP_ASSERT_WITH_CODE(!result,
>                 "[EnableDPMTasks] Failed to notify smc display change!",
> --
> 2.7.4
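For reference, the smu_pcie_arg encoding described in the quoted comment works out as follows for a Gen3 x16 link at DPM level 1 (illustrative values only, following the mapping in the patch):

        uint32_t pcie_gen = 2;          /* 8.0 GT/s link speed == Gen3 */
        uint32_t pcie_width = 6;        /* negotiated link width == x16 */
        uint32_t smu_pcie_arg = (1 << 16) | (pcie_gen << 8) | pcie_width;
        /* smu_pcie_arg == 0x10206 */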
_______________________________________________
amd-gfx mailing list
amd-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/amd-gfx