PME polling does not take into account that a device that is directly
connected to the host bridge may go into D3cold as well. This leads to a
situation where the PME poll thread reads from the config space of a
device that is in D3cold and gets incorrect information because the
config space is not accessible.

Here is an example from an Intel Ice Lake system where two PCIe root
ports are in D3cold (I've instrumented the kernel to log the PMCSR
register contents):

  [ 62.971442] pcieport 0000:00:07.1: Check PME status, PMCSR=0xffff
  [ 62.971504] pcieport 0000:00:07.0: Check PME status, PMCSR=0xffff

Since 0xffff is interpreted to mean that PME is pending, the root ports
will be runtime resumed. This repeats over and over again, essentially
blocking all runtime power management.

Prevent this from happening by checking whether the device is in D3cold
before its PME status is read.

Fixes: 71a83bd727cc ("PCI/PM: add runtime PM support to PCIe port")
Signed-off-by: Mika Westerberg <mika.westerberg@xxxxxxxxxxxxxxx>
Reviewed-by: Lukas Wunner <lukas@xxxxxxxxx>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx # v3.6+
---
 drivers/pci/pci.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 87a1f902fa8e..720da09d4d73 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -2060,6 +2060,13 @@ static void pci_pme_list_scan(struct work_struct *work)
 			 */
 			if (bridge && bridge->current_state != PCI_D0)
 				continue;
+			/*
+			 * If the device is in D3cold it should not be
+			 * polled either.
+			 */
+			if (pme_dev->dev->current_state == PCI_D3cold)
+				continue;
+
 			pci_pme_wakeup(pme_dev->dev, NULL);
 		} else {
 			list_del(&pme_dev->list);
--
2.20.1
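
As background on the 0xffff reads above, here is a minimal standalone
sketch (not part of the patch) of why an all-ones PMCSR value is taken
as a pending PME. It assumes only the PCI_PM_CTRL_PME_STATUS bit
definition from include/uapi/linux/pci_regs.h; pme_looks_pending() is a
hypothetical stand-in for the status check performed by
pci_check_pme_status(), not the kernel code itself:

  /*
   * Bit 15 of PMCSR is the PME status bit (0x8000 per the PCI PM spec).
   * A device whose config space is inaccessible returns all ones, so
   * the bit appears set and the poll path resumes the device.
   */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define PCI_PM_CTRL_PME_STATUS	0x8000	/* PME pin status */

  /* Hypothetical helper mirroring the status-bit test */
  static bool pme_looks_pending(uint16_t pmcsr)
  {
  	return (pmcsr & PCI_PM_CTRL_PME_STATUS) != 0;
  }

  int main(void)
  {
  	/* Value read back from a device in D3cold */
  	uint16_t pmcsr = 0xffff;

  	printf("PMCSR=0x%04x -> PME pending: %s\n",
  	       pmcsr, pme_looks_pending(pmcsr) ? "yes" : "no");
  	return 0;
  }

With pmcsr = 0xffff the check reports a pending PME even though the
value is just the all-ones pattern from an inaccessible config space,
which is why the patch skips polling devices in D3cold instead.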