On Fri, Jul 21, 2023 at 4:18 AM Ilpo Järvinen <ilpo.jarvinen@xxxxxxxxxxxxxxx> wrote:
>
> On Thu, 20 Jul 2023, Bjorn Helgaas wrote:
>
> > On Mon, Jul 17, 2023 at 03:04:57PM +0300, Ilpo Järvinen wrote:
> > > Don't assume that only the driver would be accessing LNKCTL. ASPM
> > > policy changes can trigger writes to LNKCTL outside of the driver's
> > > control. And in the case of the upstream bridge, the driver does not
> > > even own the device it's changing the registers for.
> > >
> > > Use the RMW capability accessors, which do proper locking to avoid
> > > losing concurrent updates to the register value.
> > >
> > > Fixes: a2e73f56fa62 ("drm/amdgpu: Add support for CIK parts")
> > > Fixes: 62a37553414a ("drm/amdgpu: add si implementation v10")
> > > Suggested-by: Lukas Wunner <lukas@xxxxxxxxx>
> > > Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@xxxxxxxxxxxxxxx>
> > > Cc: stable@xxxxxxxxxxxxxxx
> >
> > Do we have any reports of problems that are fixed by this patch (or by
> > others in the series)? If not, I'm not sure it really fits the usual
> > stable kernel criteria:
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/stable-kernel-rules.rst?id=v6.4
>
> I was on the edge with this. The answer to your direct question is no,
> there are no such reports, so I think it would be okay to leave stable
> out. This applies to all patches in this series.
>
> Basically, this series came to be after Lukas, while reviewing
> (internally) my bandwidth controller series, noted the potential
> concurrency issues with LNKCTL being left unprotected. I then went
> through all LNKCTL usage and realized that existing code might already
> have similar issues.
>
> Do you want me to send another version w/o cc stable, or will you take
> care of that?
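As an aside, to make the race concrete for anyone reading along: what the
series closes is a lost update on LNKCTL. A rough sketch, not the exact
driver code:

  u16 tmp16;

  /* Open-coded read/modify/write: an LNKCTL write from elsewhere (e.g. an
   * ASPM policy change) that lands between the read and the write below is
   * silently overwritten.
   */
  pcie_capability_read_word(root, PCI_EXP_LNKCTL, &tmp16);
  tmp16 |= PCI_EXP_LNKCTL_HAWD;
  pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);

  /* RMW accessor: same end result, but the read-modify-write happens under
   * the locking these accessors provide, so concurrent updates to the other
   * bits survive.
   */
  pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);

That is the conversion the hunks quoted below perform, with
pcie_capability_clear_and_set_word() used where a bit has to be restored to
a previously saved value.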
>
> > > ---
> > >  drivers/gpu/drm/amd/amdgpu/cik.c | 36 +++++++++-----------------------
> > >  drivers/gpu/drm/amd/amdgpu/si.c  | 36 +++++++++-----------------------
> > >  2 files changed, 20 insertions(+), 52 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/amd/amdgpu/cik.c b/drivers/gpu/drm/amd/amdgpu/cik.c
> > > index 5641cf05d856..e63abdf52b6c 100644
> > > --- a/drivers/gpu/drm/amd/amdgpu/cik.c
> > > +++ b/drivers/gpu/drm/amd/amdgpu/cik.c
> > > @@ -1574,17 +1574,8 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
> > >  			u16 bridge_cfg2, gpu_cfg2;
> > >  			u32 max_lw, current_lw, tmp;
> > >
> > > -			pcie_capability_read_word(root, PCI_EXP_LNKCTL,
> > > -						  &bridge_cfg);
> > > -			pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL,
> > > -						  &gpu_cfg);
> > > -
> > > -			tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD;
> > > -			pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);
> > > -
> > > -			tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD;
> > > -			pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL,
> > > -						   tmp16);
> > > +			pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
> > > +			pcie_capability_set_word(adev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
> > >
> > >  			tmp = RREG32_PCIE(ixPCIE_LC_STATUS1);
> > >  			max_lw = (tmp & PCIE_LC_STATUS1__LC_DETECTED_LINK_WIDTH_MASK) >>
> > > @@ -1637,21 +1628,14 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
> > >  				msleep(100);
> > >
> > >  				/* linkctl */
> > > -				pcie_capability_read_word(root, PCI_EXP_LNKCTL,
> > > -							  &tmp16);
> > > -				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
> > > -				tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD);
> > > -				pcie_capability_write_word(root, PCI_EXP_LNKCTL,
> > > -							   tmp16);
> > > -
> > > -				pcie_capability_read_word(adev->pdev,
> > > -							  PCI_EXP_LNKCTL,
> > > -							  &tmp16);
> > > -				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
> > > -				tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD);
> > > -				pcie_capability_write_word(adev->pdev,
> > > -							   PCI_EXP_LNKCTL,
> > > -							   tmp16);
> > > +				pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL,
> > > +								   PCI_EXP_LNKCTL_HAWD,
> > > +								   bridge_cfg &
> > > +								   PCI_EXP_LNKCTL_HAWD);
> > > +				pcie_capability_clear_and_set_word(adev->pdev, PCI_EXP_LNKCTL,
> > > +								   PCI_EXP_LNKCTL_HAWD,
> > > +								   gpu_cfg &
> > > +								   PCI_EXP_LNKCTL_HAWD);
> >
> > Wow, there's a lot of pointless-looking work going on here:
> >
> >   set root PCI_EXP_LNKCTL_HAWD
> >   set GPU  PCI_EXP_LNKCTL_HAWD
> >
> >   for (i = 0; i < 10; i++) {
> >     read root PCI_EXP_LNKCTL
> >     read GPU  PCI_EXP_LNKCTL
> >
> >     clear root PCI_EXP_LNKCTL_HAWD
> >     if (root PCI_EXP_LNKCTL_HAWD was set)
> >       set root PCI_EXP_LNKCTL_HAWD
> >
> >     clear GPU PCI_EXP_LNKCTL_HAWD
> >     if (GPU PCI_EXP_LNKCTL_HAWD was set)
> >       set GPU PCI_EXP_LNKCTL_HAWD
> >   }
> >
> > If it really *is* pointless, it would be nice to clean it up, but that
> > wouldn't be material for this patch, so what you have looks good.
>
> I really don't know if it's needed or not. There's stuff that looks hw
> specific going on besides the things you point out, and I've not really
> understood what all of it does.
>
> One annoying thing is that this code has been copy-pasted in almost
> identical form into 4 files.
>
> I agree it certainly looks like there might be room for cleaning things
> up here, but such cleanups look a bit too scary to me w/o hw to test
> them on.
>
> > >  				/* linkctl2 */
> > >  				pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
> >
> > The PCI_EXP_LNKCTL2 stuff also includes RMW updates.
> > I don't see any uses of PCI_EXP_LNKCTL2 outside this driver that look
> > relevant, so I guess we don't care about making the PCI_EXP_LNKCTL2
> > updates atomic?
>
> Currently no, which is why I left it out of this patchset.
>
> It is going to change soon, though, as I intend to submit the bandwidth
> controller series after this one; it will add RMW ops for LNKCTL2. The
> LNKCTL2 RMW parts are now in that series rather than in this one.
>
> After adding the bandwidth controller, this driver might be able to use
> it instead of tweaking LNKCTL2 directly to alter the PCIe link speed
> (but I don't expect to be able to test these drivers myself, and it
> feels too risky to make such a change without testing it,
> unfortunately).

Thanks for the background. It wasn't clear what the point of this patch
set was. This code and the similar code in radeon are just there to change
the link speed of the GPU. Some older platforms used to default to a slower
link speed on boot, so we added this code to renegotiate the link to a
faster speed when the driver loads. If you are adding core infrastructure
to do that, we can switch to it. This was just the programming sequence I
got from the hardware team back when this code was written. Most platforms
I've seen these days come up at the max supported speed of the platform and
endpoint, so I don't think this code actually gets used much anymore.

Taking a step back, what is the end goal of the bandwidth controller
changes? The reason I ask is that today we look at the currently negotiated
link speed and use that as the baseline in the driver. The driver then
enables PCIe dynamic power management, where the system management unit on
the GPU dynamically adjusts the link speed, width, and clock on demand,
based on the PCIe bandwidth requirements of the currently executing GPU
jobs, to save power. This might conflict if the goal is for some software
component to do something similar.

Alex
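P.S. To spell out what I mean by "baseline" above: it is essentially the
speed/width the link actually trained to, read back from the Link Status
register when the driver initializes. Roughly along these lines
(illustrative sketch only, not the exact amdgpu code; the variable names
are made up):

  u16 lnksta;
  u8 cur_speed, cur_width;

  /* Current Link Speed / Negotiated Link Width from PCIe Link Status */
  pcie_capability_read_word(adev->pdev, PCI_EXP_LNKSTA, &lnksta);
  cur_speed = lnksta & PCI_EXP_LNKSTA_CLS;  /* 1 = 2.5GT/s, 2 = 5GT/s, ... */
  cur_width = (lnksta & PCI_EXP_LNKSTA_NLW) >> PCI_EXP_LNKSTA_NLW_SHIFT;

The system management unit then scales the link within that envelope at
runtime, which is why another software component renegotiating the link
speed behind the driver's back could conflict with it.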