The AER driver has historically avoided reading the configuration space of
an endpoint or RCiEP that reported a fatal error, considering the link to
that device unreliable. Consequently, when a fatal error occurs, the AER
and DPC drivers do not report specific error types, resulting in logs like:
pcieport 0000:30:03.0: EDR: EDR event received
pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
pcieport 0000:30:03.0: DPC: ERR_FATAL detected
pcieport 0000:30:03.0: AER: broadcast error_detected message
nvme nvme0: frozen state error detected, reset controller
nvme 0000:34:00.0: ready 0ms after DPC
pcieport 0000:30:03.0: AER: broadcast slot_reset message
AER status registers are sticky and Write-1-to-clear. If the link has recovered
after a hot reset, we can still safely access the AER status of the error device.
In that case, report the fatal errors, which helps to figure out the root cause
of the error.
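The gist of the change, as a rough standalone sketch rather than the exact
hunks below (the helper name is made up; register offsets come from
include/uapi/linux/pci_regs.h):

#include <linux/pci.h>

/*
 * Sketch only: the AER status registers are sticky (RW1CS), so their
 * contents survive a hot reset.  Once the link is back up, the status
 * can be read for logging and then cleared by writing the value back.
 */
static void sketch_read_and_clear_uncor_status(struct pci_dev *dev)
{
	int aer = dev->aer_cap;		/* offset of the AER capability */
	u32 status, mask;

	if (!aer)
		return;

	pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, &status);
	pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_MASK, &mask);

	if (status & ~mask)
		pci_info(dev, "uncorrectable error status %#010x\n", status);

	/* Write-1-to-clear: writing back the bits we read clears them */
	pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, status);
}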
After this patch, the logs look like this:
pcieport 0000:30:03.0: EDR: EDR event received
pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
pcieport 0000:30:03.0: DPC: ERR_FATAL detected
pcieport 0000:30:03.0: AER: broadcast error_detected message
nvme nvme0: frozen state error detected, reset controller
pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
nvme 0000:34:00.0: ready 0ms after DPC
nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
nvme 0000:34:00.0: device [144d:a804] error status/mask=00000010/00504000
nvme 0000:34:00.0: [ 4] DLP (First)
pcieport 0000:30:03.0: AER: broadcast slot_reset message
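For reference, and not part of the patch itself: the decoded line follows
directly from the raw status/mask values printed above. A minimal
illustration with a made-up helper name (PCI_ERR_UNCOR_DLP is defined in
include/uapi/linux/pci_regs.h):

#include <linux/pci_regs.h>

/*
 * status/mask = 00000010/00504000 above: 0x10 is bit 4, which pci_regs.h
 * names PCI_ERR_UNCOR_DLP (Data Link Protocol error), hence "[ 4] DLP".
 */
static bool sketch_is_dlp_error(u32 uncor_status, u32 uncor_mask)
{
	u32 unmasked = uncor_status & ~uncor_mask;	/* 0x10 & ~0x00504000 = 0x10 */

	return unmasked & PCI_ERR_UNCOR_DLP;		/* 0x00000010 */
}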
Signed-off-by: Shuai Xue <xueshuai@xxxxxxxxxxxxxxxxx>
---
drivers/pci/pci.h | 3 ++-
drivers/pci/pcie/aer.c | 11 +++++++----
drivers/pci/pcie/dpc.c | 2 +-
drivers/pci/pcie/err.c | 9 +++++++++
4 files changed, 19 insertions(+), 6 deletions(-)
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 0866f79aec54..6f827c313639 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -504,7 +504,8 @@ struct aer_err_info {
struct pcie_tlp_log tlp; /* TLP Header */
};
-int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
+int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
+ bool link_healthy);
void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
#endif /* CONFIG_PCIEAER */
diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
index 13b8586924ea..97ec1c17b6f4 100644
--- a/drivers/pci/pcie/aer.c
+++ b/drivers/pci/pcie/aer.c
@@ -1200,12 +1200,14 @@ EXPORT_SYMBOL_GPL(aer_recover_queue);
* aer_get_device_error_info - read error status from dev and store it to info
* @dev: pointer to the device expected to have a error record
* @info: pointer to structure to store the error record
+ * @link_healthy: link is healthy or not
*
* Return 1 on success, 0 on error.
*
* Note that @info is reused among all error devices. Clear fields properly.
*/
-int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
+int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
+ bool link_healthy)
{
int type = pci_pcie_type(dev);
int aer = dev->aer_cap;
@@ -1229,7 +1231,8 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
} else if (type == PCI_EXP_TYPE_ROOT_PORT ||
type == PCI_EXP_TYPE_RC_EC ||
type == PCI_EXP_TYPE_DOWNSTREAM ||
- info->severity == AER_NONFATAL) {
+ info->severity == AER_NONFATAL ||
+ (info->severity == AER_FATAL && link_healthy)) {
/* Link is still healthy for IO reads */
pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
@@ -1258,11 +1261,11 @@ static inline void aer_process_err_devices(struct aer_err_info *e_info)
/* Report all before handle them, not to lost records by reset etc. */
for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
- if (aer_get_device_error_info(e_info->dev[i], e_info))
+ if (aer_get_device_error_info(e_info->dev[i], e_info, false))
aer_print_error(e_info->dev[i], e_info);
}