On Thu, Nov 21, 2019 at 12:08 PM Rafael J. Wysocki <rafael@xxxxxxxxxx> wrote:
>
> On Thu, Nov 21, 2019 at 12:03 PM Rafael J. Wysocki <rafael@xxxxxxxxxx> wrote:
> >
> > On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg
> > <mika.westerberg@xxxxxxxxx> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote:
> > > > with the branch and patch applied:
> > > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt
> > >
> > > Thanks for testing. Too bad it did not help :( I suppose there is no
> > > change if you increase the delay to say 1s?
> >
> > Well, look at the original patch in this thread.
> >
> > What it does is to prevent the device (GPU in this particular case)
> > from going into a PCI low-power state before invoking AML to power it
> > down (the AML is still invoked after this patch AFAICS), so why would
> > that have anything to do with the delays?
> >
> > The only reason would be the AML running too early, but that doesn't
> > seem likely. IMO more likely is that the AML does something which
> > cannot be done to a device in a PCI low-power state.
>
> BTW, I'm wondering if anyone has tried to skip the AML instead of
> skipping the PCI PM in this case (as of 5.4-rc that would be a similar
> patch to skip the invocations of
> __pci_start/complete_power_transition() in pci_set_power_state() for
> the affected device).

Moving the dev->broken_nv_runpm test into
pci_platform_power_transition() (also for transitions into D0) would be
sufficient for that test if I'm not mistaken.
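
Something along these lines, as a rough and completely untested sketch
only, assuming the broken_nv_runpm flag from the patch in this thread is
available in struct pci_dev; everything below the added check would be
the existing 5.4-rc function body, unchanged:

static int pci_platform_power_transition(struct pci_dev *dev,
                                         pci_power_t state)
{
        /*
         * Hypothetical quirk check: skip the platform (AML) power
         * transition for the affected device in both directions, i.e.
         * on the way down and back into D0, while pci_set_power_state()
         * still programs the PCI PM registers as usual.
         */
        if (dev->broken_nv_runpm)
                return 0;

        /* ... existing body (platform_pci_set_power_state() etc.) ... */
}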