Re: [PATCH v7 1/5] PCI: qcom: Add system suspend and resume support

On 9/23/2022 12:12 AM, Bjorn Helgaas wrote:
On Thu, Sep 22, 2022 at 09:09:28PM +0530, Krishna Chaitanya Chundru wrote:
On 9/21/2022 10:26 PM, Bjorn Helgaas wrote:
[+cc Rafael, linux-pm since this is real power management magic,
beginning of thread:
https://lore.kernel.org/all/1663669347-29308-1-git-send-email-quic_krichai@xxxxxxxxxxx/
full patch since I trimmed too much of it:
https://lore.kernel.org/all/1663669347-29308-2-git-send-email-quic_krichai@xxxxxxxxxxx/]

On Wed, Sep 21, 2022 at 03:23:35PM +0530, Krishna Chaitanya Chundru wrote:
On 9/20/2022 11:46 PM, Bjorn Helgaas wrote:
On Tue, Sep 20, 2022 at 03:52:23PM +0530, Krishna chaitanya chundru wrote:
Add suspend and resume syscore ops.

A few PCIe endpoints, such as NVMe and WLAN devices, always expect the device
to be in the D0 state and the link to be active (or in L1ss) at all times,
including in the S3 state.
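
[For illustration only, a minimal sketch of what registering such suspend/resume
syscore ops could look like; the function and symbol names below are hypothetical
and are not taken from the actual patch:]

#include <linux/syscore_ops.h>

static int qcom_pcie_sys_suspend(void)
{
	/*
	 * Keep the endpoint in D0; only quiesce controller-side
	 * resources (clocks, PHY) here if it is safe to do so.
	 */
	return 0;
}

static void qcom_pcie_sys_resume(void)
{
	/* Undo whatever qcom_pcie_sys_suspend() turned off. */
}

static struct syscore_ops qcom_pcie_syscore_ops = {
	.suspend	= qcom_pcie_sys_suspend,
	.resume		= qcom_pcie_sys_resume,
};

/* Registered once, e.g. from the controller probe path. */
static void qcom_pcie_register_syscore(void)
{
	register_syscore_ops(&qcom_pcie_syscore_ops);
}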
What does this have to do with the patch?  I don't see any NVMe or
WLAN patches here.
The existing NVMe driver also expects the NVMe device to stay in D0 during S3.
If we turn off the link in suspend, the NVMe resume path is broken because the
state machine in the NVMe device gets reset. As a result, the host driver state
machine and the device state machine go out of sync, and all NVMe commands
issued after resume time out.

IIRC, Tegra is also facing this issue with NVMe.

This issue has been discussed in the threads below:

https://lore.kernel.org/all/Yl+6V3pWuyRYuVV8@xxxxxxxxxxxxx/T/

https://lore.kernel.org/linux-nvme/20220201165006.3074615-1-kbusch@xxxxxxxxxx/
The problem is that this commit log doesn't explain the problem and
doesn't give us anything to connect the NVMe and WLAN assumptions with
this special driver behavior.  There needs to be some explicit
property of NVMe and WLAN that the PM core or drivers like qcom can
use to tell whether the clocks can be turned off.
Not only that; NVMe expects the device to always stay in D0, so PCIe
drivers should not turn off the link in suspend and retrain it in
resume. The NVMe device treats that as a power cycle, which eventually
increases the wear of the NVMe flash.
I can't quite parse this.  Are you saying that all PCI devices should
stay in D0 when the system is in S3?
Not all PCI devices, just some devices like NVMe. The NVMe driver expects the device to stay in D0.

With this patch series we are trying to keep the device in D0 and also
reduce power consumption when the system is in S3 by turning off the
clocks and the PHY.
The decision to keep a device in D0 is not up to qcom or any other PCI
controller driver.
Yes, it is the NVMe driver that decides to keep the device in D0. Our qcom PCIe controller driver is trying to keep the device in the state the client driver
expects while also reducing power consumption.

On the qcom platform, PCIe resources (clocks, PHY, etc.) can be
released when the link is in L1ss to reduce power consumption.
So if the link is in L1ss, release the PCIe resources, and when
the system resumes, re-enable the PCIe resources if they were
released in the suspend path.
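
[A sketch of the flow described above: release the resources only when the link
is already in L1ss, and restore them on resume. The register offset, status bit,
and context struct below are placeholders rather than the real qcom register
layout; clk_bulk_*() and phy_power_on()/phy_power_off() are the generic kernel
APIs, used here only to illustrate the idea:]

#include <linux/bits.h>
#include <linux/clk.h>
#include <linux/io.h>
#include <linux/phy/phy.h>

#define PARF_PM_STTS			0x24	/* placeholder offset */
#define PARF_PM_STTS_LINK_IN_L1SS	BIT(0)	/* placeholder bit */

struct example_pcie {				/* hypothetical context */
	void __iomem		*parf;
	struct clk_bulk_data	*clks;
	int			num_clks;
	struct phy		*phy;
	bool			resources_off;
};

static int example_pcie_suspend_noirq(struct example_pcie *pcie)
{
	u32 stts = readl_relaxed(pcie->parf + PARF_PM_STTS);

	/* Only power down if the PHY can retain link state, i.e. L1ss. */
	if (!(stts & PARF_PM_STTS_LINK_IN_L1SS))
		return 0;

	phy_power_off(pcie->phy);
	clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
	pcie->resources_off = true;

	return 0;
}

static int example_pcie_resume_noirq(struct example_pcie *pcie)
{
	int ret;

	if (!pcie->resources_off)
		return 0;

	ret = clk_bulk_prepare_enable(pcie->num_clks, pcie->clks);
	if (ret)
		return ret;

	ret = phy_power_on(pcie->phy);
	if (ret)
		clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
	else
		pcie->resources_off = false;

	return ret;
}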
What's the connection with L1.x?  Links enter L1.x based on
activity and timing.  That doesn't seem like a reliable
indicator to turn PHYs off and disable clocks.
This is a qcom PHY-specific feature (retaining the link state in
L1.x with the clocks turned off). It is possible only when the link
is in L1.x: the PHY cannot retain the link state in L0 with the
clocks turned off, and we would need to retrain the link if it were
in L2 or L3. So we can support this feature only with L1.x, which is
why we take L1.x as the trigger to turn off the clocks (in the
suspend path only).
This doesn't address my question.  L1.x is an ASPM feature, which
means hardware may enter or leave L1.x autonomously at any time
without software intervention.  Therefore, I don't think reading the
current state is a reliable way to decide anything.
After the link enters L1.x, it will come out only if there is
some activity on the link. As the system is suspended and the NVMe
driver is also suspended (its queues are frozen in suspend), who
else can initiate any data transfer?
I don't think we can assume that nothing will happen to cause exit
from L1.x.  For instance, PCIe Messages for INTx signaling, LTR, OBFF,
PTM, etc., may be sent even though we think the device is idle and
there should be no link activity.

Bjorn
I don't think there will be any activity on the link after it enters L1.x, as you mentioned, except for PCIe messages such as INTx/MSI/MSI-X. Those messages will not occur either, because
client drivers like NVMe keep their devices in the lowest power mode.

The link will come out of L1.x only when there is a config or memory access, or a message that triggers an interrupt from the device. We already make sure such accesses do not happen in S3.
If the link were in L0 or L0s, what you describe would be expected, but not in L1.x.


