Re: [RFC PATCH] PCIe: Add PCIe runtime D3cold support

On Thu, Apr 19, 2012 at 4:51 AM, Rafael J. Wysocki <rjw@xxxxxxx> wrote:
> On Wednesday, April 18, 2012, huang ying wrote:
>> On Wed, Apr 18, 2012 at 5:10 AM, Rafael J. Wysocki <rjw@xxxxxxx> wrote:
>> > On Tuesday, April 17, 2012, huang ying wrote:
>> >> On Tue, Apr 17, 2012 at 5:30 AM, Rafael J. Wysocki <rjw@xxxxxxx> wrote:
>> >> >> >> +     return 0;
>> >> >> >> +}
>> >> >> >> +
>> >> >> >> +static int pcie_port_runtime_resume(struct device *dev)
>> >> >> >> +{
>> >> >> >> +     struct pci_dev *pdev = to_pci_dev(dev);
>> >> >> >> +
>> >> >> >> +     pci_restore_state(pdev);
>> >> >> >> +     if (pdev->runtime_d3cold)
>> >> >> >> +             msleep(100);
>> >> >> >
>> >> >> > What's _that_ supposed to do?
>> >> >>
>> >> >> When resuming from D3cold, the PCIe main link is powered on again, and
>> >> >> it takes quite some time before the link returns to the L0 state.
>> >> >> Until then, accessing devices under the port may return wrong results.
>> >> >
>> >> > OK, but this is generic code and as per the standard the link should have been
>> >> > reestablished at this point already.
>> >> >
>> >> > Please don't put some nonstandard-platform-specific quirks like this into
>> >> > code that's supposed to handle _every_ PCIe system.
>> >>
>> >> After checking the PCIe spec, I found that the 100ms delay here does have its origin in the standard :)
>> >>
>> >> In PCI Express Base Specification Revision 2.0:
>> >>
>> >> Section 6.6.1 Conventional Reset
>> >>
>> >> "
>> >> To allow components to perform internal initialization, system
>> >> software must wait for at least
>> >> 100 ms from the end of a Conventional Reset of one or more devices
>> >> before it is permitted to
>> >> issue Configuration Requests to those devices
>> >> "
>> >>
>> >> But I think we should move the 100ms delay here into the PCIe bus code
>> >> or the PCIe/ACPI code, because it is needed by all PCIe devices for
>> >> D3cold support.
>> >
>> > I think it should be sufficient to wait for the PME message to arrive at
>> > the root port (which will cause the PME interrupt to appear), at which
>> > point the device that sent it should be able to receive configuration
>> > requests.
>>
>> For remote wakeup, that is sufficient.  But for host wakeup, we still
>> need to wait 100ms.
>
> Yes, we do.
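
For illustration, if the delay is moved into shared PCI/PCIe code as
suggested above, it could be a small helper along these lines.  This is
only a sketch to show the idea; the helper name is made up, and the
100ms comes from Section 6.6.1 quoted above:

#include <linux/delay.h>
#include <linux/pci.h>

/*
 * Sketch only, helper name is hypothetical.  Per PCIe Base Spec 2.0,
 * Section 6.6.1, software must wait at least 100 ms from the end of a
 * Conventional Reset before issuing Configuration Requests, so a device
 * coming back from D3cold needs this delay before its config space is
 * accessed.  Calling it once from the PCI bus or PCIe port
 * runtime-resume path would cover every device instead of open-coding
 * the msleep() in each port service driver.
 */
static void pci_d3cold_recovery_delay(struct pci_dev *pdev)
{
	if (pdev->runtime_d3cold)
		msleep(100);
}
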
>
>> > At this point, I need to know what exactly happens when the GPE is
>> > triggered by WAKE#.
>>
>> - the _Lxx handler will be executed
>> - in the _Lxx handler, a Notify is issued on the ACPI handle of the PCIe port
>> - Linux has registered a handler for the ACPI handle of the PCIe port;
>> in that handler, the power resources in _PR0 are turned on and _PS0 is
>> executed, which powers the link back on.
>
> But the handler we have is not the handler we want here.
>
> In fact, there are two handlers, pci_acpi_wake_bus() and pci_acpi_wake_dev()
> and they only do useful things for ACPI_NOTIFY_DEVICE_WAKE.  Is that the
> event type we receive from that _Lxx?

I checked the DSDT; in _Lxx there is

Notify (\_SB.PCI0.RP03, 0x02)

That is, the event type is ACPI_NOTIFY_DEVICE_WAKE.
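
So the flow described earlier (_Lxx -> Notify(port, 0x02) -> power the
link up) would reach a notify handler shaped roughly like the sketch
below.  This is not the real handler (the real entry points are
pci_acpi_wake_bus()/pci_acpi_wake_dev()); the handler name is made up
and it only illustrates the dispatch on ACPI_NOTIFY_DEVICE_WAKE and the
_PR0/_PS0 power-up step:

#include <linux/acpi.h>
#include <linux/pci.h>
#include <linux/pm_runtime.h>

/*
 * Illustrative sketch only -- not the real kernel handler.  The GPE's
 * _Lxx method does Notify(<port>, 0x02), which arrives here with
 * event == ACPI_NOTIFY_DEVICE_WAKE.
 */
static void pcie_port_wake_notify(acpi_handle handle, u32 event, void *context)
{
	struct pci_dev *port = context;

	if (event != ACPI_NOTIFY_DEVICE_WAKE)
		return;

	/*
	 * Putting the ACPI device back into D0 turns on the power
	 * resources listed in _PR0 and runs _PS0, i.e. this is what
	 * powers the main link up again.
	 */
	acpi_bus_set_power(handle, ACPI_STATE_D0);

	/* Then let runtime PM resume the port and the devices below it. */
	pm_request_resume(&port->dev);
}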

> Even if so, these routines don't seem to be suitable to handle the case at hand.

Yes.  Maybe we could add a flag named something like "come_from_d3cold";
if come_from_d3cold == true, resume the device itself without checking
the PME status bits, because the PCIe main link is not available at that
point.
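
Roughly something like this, just to sketch the idea (the
come_from_d3cold field is hypothetical; pci_check_pme_status() is the
existing helper from drivers/pci/pci.h):

#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include "pci.h"	/* for pci_check_pme_status() */

/*
 * Sketch of the idea only.  When the device is woken from D3cold by the
 * platform (host wakeup), the main link is not up yet, so the PME status
 * bits below the port cannot be read reliably -- just resume the device.
 */
static void pci_acpi_wake_dev_from_d3cold(struct pci_dev *pci_dev)
{
	if (pci_dev->come_from_d3cold) {	/* hypothetical flag */
		pci_dev->come_from_d3cold = false;
		pm_request_resume(&pci_dev->dev);
		return;
	}

	/* Otherwise keep the existing PME-status-based check. */
	if (pci_check_pme_status(pci_dev))
		pm_request_resume(&pci_dev->dev);
}
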

Best Regards,
Huang Ying

