On Wed, Aug 07, 2019 at 02:53:44AM -0700, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>
>
> One of the modifications made by commit d916b1be94b6 ("nvme-pci: use
> host managed power state for suspend") was adding a pci_save_state()
> call to nvme_suspend() in order to prevent the PCI bus-level PM from
> being applied to the suspended NVMe devices, but if ASPM is not
> enabled for the target NVMe device, that causes its PCIe link to stay
> up and the platform may not be able to get into its optimum low-power
> state because of that.
>
> For example, if ASPM is disabled for the NVMe drive (PC401 NVMe SK
> hynix 256GB) in my Dell XPS13 9380, leaving it in D0 during
> suspend-to-idle prevents the SoC from reaching package idle states
> deeper than PC3, which is way insufficient for system suspend.
>
> To address this shortcoming, make nvme_suspend() check if ASPM is
> enabled for the target device and fall back to full device shutdown
> and PCI bus-level PM if that is not the case.
>
> Fixes: d916b1be94b6 ("nvme-pci: use host managed power state for suspend")
> Link: https://lore.kernel.org/linux-pm/2763495.NmdaWeg79L@kreacher/T/#t
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>

Thanks for tracking down the cause. Sounds like your earlier assumption
about ASPM's involvement was spot on.

> +/*
> + * pcie_aspm_enabled - Return the mask of enabled ASPM link states.
> + * @pci_device: Target device.
> + */
> +u32 pcie_aspm_enabled(struct pci_dev *pci_device)
> +{
> +	struct pci_dev *bridge = pci_device->bus->self;

You may want to use pci_upstream_bridge() instead, just in case someone
calls this on a virtual function's pci_dev.

> +	u32 aspm_enabled;
> +
> +	mutex_lock(&aspm_lock);
> +	aspm_enabled = bridge->link_state ? bridge->link_state->aspm_enabled : 0;
> +	mutex_unlock(&aspm_lock);
> +
> +	return aspm_enabled;
> +}
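
To make the suggestion concrete, the swap would be a one-liner along these
lines (untested sketch; the point is that pci_upstream_bridge() resolves a
virtual function to its physical function before walking up, whereas a raw
pci_device->bus->self on a VF's virtual bus does not give you the real
upstream port):

```diff
-	struct pci_dev *bridge = pci_device->bus->self;
+	struct pci_dev *bridge = pci_upstream_bridge(pci_device);
```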