On Wed, Mar 29, 2023 at 06:22:32PM +0530, Manivannan Sadhasivam wrote:
> On Wed, Mar 29, 2023 at 11:56:43AM +0200, Johan Hovold wrote:
> > On Mon, Mar 27, 2023 at 07:08:24PM +0530, Manivannan Sadhasivam wrote:
> > > +static int qcom_pcie_suspend_noirq(struct device *dev)
> > > +{
> > > +	struct qcom_pcie *pcie = dev_get_drvdata(dev);
> > > +	int ret;
> > > +
> > > +	/*
> > > +	 * Set minimum bandwidth required to keep data path functional during
> > > +	 * suspend.
> > > +	 */
> > > +	ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
> > 
> > This isn't really the minimum bandwidth you're setting here.
> > 
> > I think you said off list that you didn't see real impact reducing the
> > bandwidth, but have you tried requesting the real minimum, which would
> > be kBps_to_icc(1)?
> > 
> > Doing so works fine here with both the CRD and X13s and may result in
> > some further power savings.
> 
> No, we shouldn't be setting a random value as the bandwidth. The reason
> is that these values are computed by the bus team based on the
> requirements of the interconnect paths (clock, voltage, etc.) at actual
> PCIe gen speeds. I don't know about the potential implications even if
> it happens to work.

Why would you need PCIe gen1 speed during suspend?

These numbers are already somewhat random as, for example, the vendor
driver is requesting 500 kBps (800 peak) during runtime, while we are
now requesting five times that during suspend (the vendor driver gets
away with 0).

Sure, this indicates that the interconnect driver is broken and we
should indeed be using values that at least make some sense (and
eventually fix the interconnect driver).

I'm just not sure that you need to request that much bandwidth during
suspend (e.g. for just a couple of register accesses).

Johan
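
For illustration, here is a minimal sketch of the alternative being
suggested above: the quoted suspend handler, but voting for the 1 kBps
minimum instead of MBps_to_icc(250). The struct qcom_pcie layout shown
is abbreviated and hypothetical; only the icc_mem path handle matters,
which the driver would have acquired elsewhere (e.g. via
devm_of_icc_get()):

	#include <linux/device.h>
	#include <linux/interconnect.h>

	/* Abbreviated driver state; the real struct has many more members. */
	struct qcom_pcie {
		struct icc_path *icc_mem;	/* PCIe-to-memory interconnect path */
	};

	static int qcom_pcie_suspend_noirq(struct device *dev)
	{
		struct qcom_pcie *pcie = dev_get_drvdata(dev);
		int ret;

		/*
		 * Vote for the real minimum (1 kBps peak, no average) rather
		 * than 250 MBps: just enough to keep the path from being
		 * turned off completely while only a few register accesses
		 * are expected during suspend.
		 */
		ret = icc_set_bw(pcie->icc_mem, 0, kBps_to_icc(1));
		if (ret)
			dev_err(dev, "failed to set interconnect bandwidth: %d\n", ret);

		return ret;
	}

Whether 1 kBps is sufficient in practice is exactly what is being
debated here; the sketch only shows where the smaller vote would go.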