Re: [PATCH v7 12/17] cxl/pci: Add error handler for CXL PCIe Port RAS errors

Terry Bowman wrote:
> Introduce correctable and uncorrectable (UCE) CXL PCIe Port Protocol Error
> handlers.
> 
> The handlers will be called with a 'struct pci_dev' parameter
> indicating the CXL Port device requiring handling. The CXL PCIe Port
> device's underlying 'struct device' will match the port device in the
> CXL topology.
> 
> Use the PCIe Port's device object to find the matching CXL Upstream Switch
> Port, CXL Downstream Switch Port, or CXL Root Port in the CXL topology. The
> matching CXL Port device should contain a cached reference to the RAS
> register block. The cached RAS block will be used in handling the error.
> 
> Invoke the existing __cxl_handle_ras() or __cxl_handle_cor_ras() using
> a reference to the RAS registers as a parameter. These functions will use
> the RAS register reference to indicate an error and clear the device's RAS
> status.
> 
> Update __cxl_handle_ras() to return PCI_ERS_RESULT_PANIC in the case
> an error is present in the RAS status. Otherwise, return
> PCI_ERS_RESULT_NONE.

So I have been having this nagging feeling while reviewing this set that
perhaps the CXL error handlers should not be 'struct pci_error_handlers'
relative to a 'struct pci_driver', but instead 'struct
cxl_error_handlers' that are added to 'struct cxl_driver', in particular
'cxl_port_driver'.

See below for what I *think* are insurmountable problems when a PCI
error handler is tasked with looking up @ras_base in a race-free manner.
Note I say "think" because I could be misreading, or missing some other
circumstance that makes this ok, so please do challenge this if you
think I missed something, because what follows is another major
direction change.
  
> Signed-off-by: Terry Bowman <terry.bowman@xxxxxxx>
> ---
>  drivers/cxl/core/pci.c | 81 +++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 77 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c
> index af809e7cbe3b..3f13d9dfb610 100644
> --- a/drivers/cxl/core/pci.c
> +++ b/drivers/cxl/core/pci.c
> @@ -699,7 +699,7 @@ static void header_log_copy(void __iomem *ras_base, u32 *log)
>   * Log the state of the RAS status registers and prepare them to log the
>   * next error status. Return 1 if reset needed.
>   */
> -static bool __cxl_handle_ras(struct device *dev, void __iomem *ras_base)
> +static pci_ers_result_t __cxl_handle_ras(struct device *dev, void __iomem *ras_base)
>  {
>  	u32 hl[CXL_HEADERLOG_SIZE_U32];
>  	void __iomem *addr;
> @@ -708,13 +708,13 @@ static bool __cxl_handle_ras(struct device *dev, void __iomem *ras_base)
>  
>  	if (!ras_base) {
>  		dev_warn_once(dev, "CXL RAS register block is not mapped");
> -		return false;
> +		return PCI_ERS_RESULT_NONE;
>  	}
>  
>  	addr = ras_base + CXL_RAS_UNCORRECTABLE_STATUS_OFFSET;
>  	status = readl(addr);
>  	if (!(status & CXL_RAS_UNCORRECTABLE_STATUS_MASK))
> -		return false;
> +		return PCI_ERS_RESULT_NONE;
>  
>  	/* If multiple errors, log header points to first error from ctrl reg */
>  	if (hweight32(status) > 1) {
> @@ -733,7 +733,7 @@ static bool __cxl_handle_ras(struct device *dev, void __iomem *ras_base)
>  
>  	writel(status & CXL_RAS_UNCORRECTABLE_STATUS_MASK, addr);
>  
> -	return true;
> +	return PCI_ERS_RESULT_PANIC;
>  }
>  
>  static bool cxl_handle_endpoint_ras(struct cxl_dev_state *cxlds)
> @@ -782,6 +782,79 @@ static void cxl_disable_rch_root_ints(struct cxl_dport *dport)
>  	writel(aer_cmd, aer_base + PCI_ERR_ROOT_COMMAND);
>  }
>  
> +static int match_uport(struct device *dev, const void *data)
> +{
> +	const struct device *uport_dev = data;
> +	struct cxl_port *port;
> +
> +	if (!is_cxl_port(dev))
> +		return 0;
> +
> +	port = to_cxl_port(dev);
> +
> +	return port->uport_dev == uport_dev;
> +}
> +
> +static void __iomem *cxl_pci_port_ras(struct pci_dev *pdev, struct device **dev)
> +{
> +	void __iomem *ras_base;
> +
> +	if (!pdev || !*dev) {
> +		pr_err("Failed, parameter is NULL");
> +		return NULL;
> +	}
> +
> +	if ((pci_pcie_type(pdev) == PCI_EXP_TYPE_ROOT_PORT) ||
> +	    (pci_pcie_type(pdev) == PCI_EXP_TYPE_DOWNSTREAM)) {
> +		struct cxl_port *port __free(put_cxl_port);
> +		struct cxl_dport *dport = NULL;
> +
> +		port = find_cxl_port(&pdev->dev, &dport);

side comment: please always declare and assign scope-based-cleanup
variables on the same line, i.e.:

        struct cxl_port *port __free(put_cxl_port) =
                find_cxl_port(&pdev->dev, &dport);

Yes, that means violating the coding-style rule of preferring variable
declarations at the top of blocks. This is for 2 reasons:

* The variable is uninitialized. If future refactoring injects an early
  exit then uninitialized garbage gets passed to put_cxl_port() (see the
  sketch after this list).

* The cosmetic order of the declaration is not the unwind order. If
  future refactoring introduces other scope-based-cleanup variables,
  the declaration then needs to be moved again to satisfy unwind
  dependencies.
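
For illustration, the first hazard looks something like this (the
function name and the early exit are made up, the shape mirrors
cxl_pci_port_ras() above):

        static void __iomem *example_port_ras(struct pci_dev *pdev)
        {
                struct cxl_dport *dport = NULL;
                struct cxl_port *port __free(put_cxl_port); /* uninitialized */

                if (!pdev)
                        return NULL; /* __free() fires on garbage @port */

                port = find_cxl_port(&pdev->dev, &dport);
                return dport ? dport->regs.ras : NULL;
        }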

> +		if (!port) {
> +			pci_err(pdev, "Failed to find root/dport in CXL topology\n");
> +			return NULL;
> +		}
> +
> +		ras_base = dport ? dport->regs.ras : NULL;
> +		*dev = &port->dev;

Ok, so here is where the trouble I was alluding to earlier begins. At
this point we leave this scope, which means @port has its reference
dropped and may be freed by the time the caller tries to use it.

Additionally, @ras_base is only valid while @port->dev.driver is set. In
this set, cxl_do_recovery() only holds the device lock of @pdev, which
means nothing synchronizes against @ras_base pointing to garbage after a
cxl_port is unbound / unplugged / disabled while error recovery is
running.

Both of those problems go away if, upon entry to ->error_detected(), it
can already be assumed that the context holds both a 'struct cxl_port'
object reference and the device_lock() for that object.
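
To sketch what I mean (names are illustrative only, nothing like this
exists today), something along the lines of:

        struct cxl_error_handlers {
                /* called with the cxl_port reference and device_lock() held */
                pci_ers_result_t (*error_detected)(struct cxl_port *port);
                void (*cor_error_detected)(struct cxl_port *port);
        };

        struct cxl_driver {
                const char *name;
                int (*probe)(struct device *dev);
                void (*remove)(struct device *dev);
                /* proposed addition */
                const struct cxl_error_handlers *err_handlers;
                struct device_driver drv;
                int id;
        };

...where the CXL core resolves the cxl_port, takes its reference and
device_lock(), and only then invokes ->error_detected(), so the handler
never has to re-find @ras_base.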

As for how to fix it, one idea is to have the AER core post CXL events
to their own fifo for the CXL core to handle. Something like: have
aer_isr_one_error(), upon detecting an internal error on a CXL port
device, post the 'struct aer_err_source' to a new kfifo and wake up a
CXL core thread to run cxl_do_recovery() against the CXL port device
topology instead of the PCI device topology.
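
A rough sketch of that hand-off (assuming 'struct aer_err_source' gets
exposed to the CXL core, and that cxl_do_recovery() grows a variant that
walks the CXL port topology):

        static DEFINE_KFIFO(cxl_err_fifo, struct aer_err_source, 32);
        static void cxl_err_work_fn(struct work_struct *work);
        static DECLARE_WORK(cxl_err_work, cxl_err_work_fn);

        /* called from aer_isr_one_error() for internal errors on CXL ports */
        void cxl_queue_err_source(struct aer_err_source *e)
        {
                if (kfifo_put(&cxl_err_fifo, *e))
                        schedule_work(&cxl_err_work);
        }

        static void cxl_err_work_fn(struct work_struct *work)
        {
                struct aer_err_source e;

                while (kfifo_get(&cxl_err_fifo, &e)) {
                        /*
                         * Resolve the erring device in the CXL topology,
                         * take the cxl_port reference and device_lock(),
                         * then run cxl_do_recovery() against the CXL port
                         * topology rather than the PCI topology.
                         */
                }
        }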

Essentially, the main point of cxl_do_recovery() is the acknowledgement
that the PCI core does not have the context to judge the severity of
CXL events, or fully annotate events with all the potential kernel
objects impacted by an event. It is also the case that we need a common
landing spot for PCI AER notified CXL error events and ACPI GHES
notified CXL CPER records. So both PCI AER, and CPER notified errors
need to end up in the same cxl_do_recovery() path that walks the CXL
port topology.

The CXL Type-2 series is showing uptake on accelerators registering
'struct cxl_memdev' objects to report their CXL.mem capabilities. I
imagine that effort would eventually end up with a scheme where
accelerators can register a cxl_error_handler instance with that memdev
to get involved in potentially recovering CXL.mem errors. For example,
it may be that CXL error isolation finally has a viable use case when
the accelerator knows it is the only device impacted by an isolation
event and can safely reset that entire host-bridge to recover. That is
difficult to achieve in the PCI error handler context.
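
Purely speculative, but that registration could look something like
(none of these names exist today):

        struct cxl_mem_error_handlers {
                /* invoked with the memdev reference and device_lock() held */
                pci_ers_result_t (*mem_error_detected)(struct cxl_memdev *cxlmd);
                void (*isolation_recover)(struct cxl_memdev *cxlmd);
        };

        int cxl_memdev_set_error_handlers(struct cxl_memdev *cxlmd,
                                          const struct cxl_mem_error_handlers *eh);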
