RE: [PATCH v7 3/3] PCI: amd-mdb: Add AMD MDB Root Port driver

Hi Sadhasivam,

> -----Original Message-----
> From: Havalige, Thippeswamy
> Sent: Tuesday, January 28, 2025 2:52 PM
> To: Manivannan Sadhasivam <manivannan.sadhasivam@xxxxxxxxxx>
> Cc: bhelgaas@xxxxxxxxxx; lpieralisi@xxxxxxxxxx; kw@xxxxxxxxx; robh@xxxxxxxxxx;
> krzk+dt@xxxxxxxxxx; conor+dt@xxxxxxxxxx; linux-pci@xxxxxxxxxxxxxxx;
> devicetree@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; jingoohan1@xxxxxxxxx;
> Simek, Michal <michal.simek@xxxxxxx>; Gogada, Bharat Kumar
> <bharat.kumar.gogada@xxxxxxx>
> Subject: RE: [PATCH v7 3/3] PCI: amd-mdb: Add AMD MDB Root Port driver
> 
> Hi Sadhasivam,
> 
> > -----Original Message-----
> > From: Manivannan Sadhasivam <manivannan.sadhasivam@xxxxxxxxxx>
> > Sent: Tuesday, January 21, 2025 10:18 AM
> > To: Havalige, Thippeswamy <thippeswamy.havalige@xxxxxxx>
> > Cc: bhelgaas@xxxxxxxxxx; lpieralisi@xxxxxxxxxx; kw@xxxxxxxxx;
> robh@xxxxxxxxxx;
> > krzk+dt@xxxxxxxxxx; conor+dt@xxxxxxxxxx; linux-pci@xxxxxxxxxxxxxxx;
> > devicetree@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> jingoohan1@xxxxxxxxx;
> > Simek, Michal <michal.simek@xxxxxxx>; Gogada, Bharat Kumar
> > <bharat.kumar.gogada@xxxxxxx>
> > Subject: Re: [PATCH v7 3/3] PCI: amd-mdb: Add AMD MDB Root Port driver
> >
> > On Mon, Jan 20, 2025 at 04:13:05AM +0530, Thippeswamy Havalige wrote:
> > > Add support for AMD MDB (Multimedia DMA Bridge) IP core as Root Port.
> > >
> > > The Versal2 devices include the MDB module. The integrated block for
> > > MDB along with the integrated bridge can function as a PCIe Root Port
> > > controller at Gen5 speed (32 GT/s per lane).
> > >
> > > The bridge supports error and INTx interrupts, which are handled
> > > using a platform-specific interrupt line on Versal2.
> > >
> > > Signed-off-by: Thippeswamy Havalige <thippeswamy.havalige@xxxxxxx>
> > > ---
> > > changes in v2:
> > > -------------
> > > - Update Gen5 speed in the patch description.
> > > - Modify Kconfig file.
> > > - Update string _leg_ to intx.
> > > - Get platform structure through automatic variables.
> > > - Remove _rp_ in function.
> > > Changes in v3:
> > > --------------
> > > -None.
> > > Changes in v4:
> > > --------------
> > > -None.
> > > Changes in v5:
> > > --------------
> > > -None.
> > > Changes in v6:
> > > --------------
> > > - Remove pdev automatic variable.
> > > - Update register name to slcr.
> > > - Fix whitespace.
> > > - Remove spurious extra line.
> > > - Update Legacy to INTx.
> > > - Add space before (SLCR).
> > > - Update menuconfig description.
> > > ---
> > >  drivers/pci/controller/dwc/Kconfig        |  11 +
> > >  drivers/pci/controller/dwc/Makefile       |   1 +
> > >  drivers/pci/controller/dwc/pcie-amd-mdb.c | 436 ++++++++++++++++++++++
> > >  3 files changed, 448 insertions(+)
> > >  create mode 100644 drivers/pci/controller/dwc/pcie-amd-mdb.c
> > >
> > > diff --git a/drivers/pci/controller/dwc/Kconfig
> > > b/drivers/pci/controller/dwc/Kconfig
> > > index b6d6778b0698..61d119646749 100644
> > > --- a/drivers/pci/controller/dwc/Kconfig
> > > +++ b/drivers/pci/controller/dwc/Kconfig
> > > @@ -27,6 +27,17 @@ config PCIE_AL
> > >  	  required only for DT-based platforms. ACPI platforms with the
> > >  	  Annapurna Labs PCIe controller don't need to enable this.
> > >
> > > +config PCIE_AMD_MDB
> > > +	bool "AMD MDB Versal2 PCIe Host controller"
> > > +	depends on OF || COMPILE_TEST
> > > +	depends on PCI && PCI_MSI
> > > +	select PCIE_DW_HOST
> > > +	help
> > > +	  Say Y here if you want to enable PCIe controller support on AMD
> > > +	  Versal2 SoCs. The AMD MDB Versal2 PCIe controller is based on
> > > +	  DesignWare IP and therefore the driver reuses the DesignWare core
> > > +	  functions to implement the driver.
> > > +
> > >  config PCI_MESON
> > >  	tristate "Amlogic Meson PCIe controller"
> > >  	default m if ARCH_MESON
> > > diff --git a/drivers/pci/controller/dwc/Makefile
> > > b/drivers/pci/controller/dwc/Makefile
> > > index a8308d9ea986..ae27eda6ec5e 100644
> > > --- a/drivers/pci/controller/dwc/Makefile
> > > +++ b/drivers/pci/controller/dwc/Makefile
> > > @@ -3,6 +3,7 @@ obj-$(CONFIG_PCIE_DW) += pcie-designware.o
> > >  obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
> > >  obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
> > >  obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
> > > +obj-$(CONFIG_PCIE_AMD_MDB) += pcie-amd-mdb.o
> > >  obj-$(CONFIG_PCIE_BT1) += pcie-bt1.o
> > >  obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
> > >  obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
> > > diff --git a/drivers/pci/controller/dwc/pcie-amd-mdb.c b/drivers/pci/controller/dwc/pcie-amd-mdb.c
> > > new file mode 100644
> > > index 000000000000..ef1ac757d41c
> > > --- /dev/null
> > > +++ b/drivers/pci/controller/dwc/pcie-amd-mdb.c
> > > @@ -0,0 +1,436 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/*
> > > + * PCIe host controller driver for AMD MDB PCIe Bridge
> > > + *
> > > + * Copyright (C) 2024-2025, Advanced Micro Devices, Inc.
> > > + */
> > > +
> > > +#include <linux/clk.h>
> > > +#include <linux/delay.h>
> > > +#include <linux/gpio.h>
> > > +#include <linux/interrupt.h>
> > > +#include <linux/irqdomain.h>
> > > +#include <linux/kernel.h>
> > > +#include <linux/init.h>
> > > +#include <linux/of_device.h>
> > > +#include <linux/pci.h>
> > > +#include <linux/platform_device.h>
> > > +#include <linux/resource.h>
> > > +#include <linux/types.h>
> > > +
> > > +#include "pcie-designware.h"
> > > +
> > > +#define AMD_MDB_TLP_IR_STATUS_MISC		0x4C0
> > > +#define AMD_MDB_TLP_IR_MASK_MISC		0x4C4
> > > +#define AMD_MDB_TLP_IR_ENABLE_MISC		0x4C8
> > > +
> > > +#define AMD_MDB_PCIE_IDRN_SHIFT			16
> > > +
> > > +/* Interrupt registers definitions */
> > > +#define AMD_MDB_PCIE_INTR_CMPL_TIMEOUT		15
> > > +#define AMD_MDB_PCIE_INTR_PM_PME_RCVD		24
> > > +#define AMD_MDB_PCIE_INTR_PME_TO_ACK_RCVD	25
> > > +#define AMD_MDB_PCIE_INTR_MISC_CORRECTABLE	26
> > > +#define AMD_MDB_PCIE_INTR_NONFATAL		27
> > > +#define AMD_MDB_PCIE_INTR_FATAL			28
> > > +
> > > +#define IMR(x) BIT(AMD_MDB_PCIE_INTR_ ##x)
> > > +#define AMD_MDB_PCIE_IMR_ALL_MASK			\
> > > +	(						\
> > > +		IMR(CMPL_TIMEOUT)	|		\
> > > +		IMR(PM_PME_RCVD)	|		\
> > > +		IMR(PME_TO_ACK_RCVD)	|		\
> > > +		IMR(MISC_CORRECTABLE)	|		\
> > > +		IMR(NONFATAL)		|		\
> > > +		IMR(FATAL)				\
> > > +	)
> > > +
> > > +/**
> > > + * struct amd_mdb_pcie - PCIe port information
> > > + * @pci: DesignWare PCIe controller structure
> > > + * @slcr: MDB System Level Control and Status Register (SLCR) Base
> > > + * @intx_domain: INTx IRQ domain pointer
> > > + * @mdb_domain: MDB IRQ domain pointer
> > > + */
> > > +struct amd_mdb_pcie {
> > > +	struct dw_pcie			pci;
> > > +	void __iomem			*slcr;
> > > +	struct irq_domain		*intx_domain;
> > > +	struct irq_domain		*mdb_domain;
> > > +};
> > > +
> > > +static const struct dw_pcie_host_ops amd_mdb_pcie_host_ops = { };
> > > +
> > > +static inline u32 pcie_read(struct amd_mdb_pcie *pcie, u32 reg)
> >
> > No need of 'inline' keyword in .c files. Compiler will inline the function on its own
> > when required.
> >
> > But I'd prefer to use the {readl/writel}_relaxed directly instead of a wrapper.
> - Thank you for reviewing. I will remove the inline keyword but would
> like to keep the wrapper functions.
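
[Editor's note: for comparison, the style Mani is suggesting would drop the wrappers and call the relaxed accessors at each site. A fragment only, using names from the patch, not compile-tested:]

```c
/* Sketch of the reviewer's suggestion: no pcie_read()/pcie_write()
 * wrappers, just the relaxed MMIO accessors at each call site.
 */
val = readl_relaxed(pcie->slcr + AMD_MDB_TLP_IR_STATUS_MISC);
writel_relaxed(val & ~mask, pcie->slcr + AMD_MDB_TLP_IR_STATUS_MISC);
```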
> >
> > > +{
> > > +	return readl_relaxed(pcie->slcr + reg);
> > > +}
> > > +
> > > +static inline void pcie_write(struct amd_mdb_pcie *pcie,
> > > +			      u32 val, u32 reg)
> > > +{
> > > +	writel_relaxed(val, pcie->slcr + reg);
> > > +}
> > > +
> > > +static void amd_mdb_mask_intx_irq(struct irq_data *data)
> > > +{
> > > +	struct amd_mdb_pcie *pcie = irq_data_get_irq_chip_data(data);
> > > +	struct dw_pcie *pci = &pcie->pci;
> > > +	struct dw_pcie_rp *port = &pci->pp;
> > > +	unsigned long flags;
> > > +	u32 mask, val;
> > > +
> > > +	mask = BIT(data->hwirq + AMD_MDB_PCIE_IDRN_SHIFT);
> > > +	raw_spin_lock_irqsave(&port->lock, flags);
> > > +
> >
> > Remove the newline here and add it before raw_spin_lock_irqsave().
> - Thank you for the review comments, will remove this callback function.
- Will take your review comments and keep this callback function.
> >
> > > +	val = pcie_read(pcie, AMD_MDB_TLP_IR_STATUS_MISC);
> > > +	pcie_write(pcie, (val & (~mask)), AMD_MDB_TLP_IR_STATUS_MISC);
> > > +
> >
> > Remove the newline here.
> - Thank you for the review comments, will remove this callback function.
- Will take your review comments and keep this callback function.
> >
> > > +	raw_spin_unlock_irqrestore(&port->lock, flags);
> > > +}
> > > +
> > > +static void amd_mdb_unmask_intx_irq(struct irq_data *data)
> > > +{
> > > +	struct amd_mdb_pcie *pcie = irq_data_get_irq_chip_data(data);
> > > +	struct dw_pcie *pci = &pcie->pci;
> > > +	struct dw_pcie_rp *port = &pci->pp;
> > > +	unsigned long flags;
> > > +	u32 mask;
> > > +	u32 val;
> > > +
> > > +	mask = BIT(data->hwirq + AMD_MDB_PCIE_IDRN_SHIFT);
> > > +	raw_spin_lock_irqsave(&port->lock, flags);
> > > +
> >
> > Same as above.
> >
> > > +	val = pcie_read(pcie, AMD_MDB_TLP_IR_STATUS_MISC);
> > > +	pcie_write(pcie, (val | mask), AMD_MDB_TLP_IR_STATUS_MISC);
> > > +
> > > +	raw_spin_unlock_irqrestore(&port->lock, flags);
> > > +}
> > > +
> > > +static struct irq_chip amd_mdb_intx_irq_chip = {
> > > +	.name		= "INTx",
> >
> > Add the 'AMD MDB' prefix.
> - Thank you for the review comments, will remove this callback function.
> >
> > > +	.irq_mask	= amd_mdb_mask_intx_irq,
> > > +	.irq_unmask	= amd_mdb_unmask_intx_irq,
> > > +};
> > > +
> > > +/**
> > > + * amd_mdb_pcie_intx_map - Set the handler for the INTx and mark IRQ
> > > + * as valid
> > > + * @domain: IRQ domain
> > > + * @irq: Virtual IRQ number
> > > + * @hwirq: HW interrupt number
> > > + *
> > > + * Return: Always returns 0.
> > > + */
> > > +static int amd_mdb_pcie_intx_map(struct irq_domain *domain,
> > > +				 unsigned int irq, irq_hw_number_t hwirq)
> > > +{
> > > +	irq_set_chip_and_handler(irq, &amd_mdb_intx_irq_chip,
> > > +				 handle_level_irq);
> > > +	irq_set_chip_data(irq, domain->host_data);
> > > +	irq_set_status_flags(irq, IRQ_LEVEL);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +/* INTx IRQ Domain operations */
> > > +static const struct irq_domain_ops amd_intx_domain_ops = {
> > > +	.map = amd_mdb_pcie_intx_map,
> > > +};
> > > +
> > > +/**
> > > + * amd_mdb_pcie_init_port - Initialize hardware
> > > + * @pcie: PCIe port information
> > > + */
> >
> > This kernel doc doesn't add any value. So you can drop it.
> - Thank you for review comments, Will remove this kernel doc in next patch.
> >
> > > +static int amd_mdb_pcie_init_port(struct amd_mdb_pcie *pcie)
> > > +{
> > > +	int val;
> > > +
> > > +	/* Disable all TLP Interrupts */
> > > +	pcie_write(pcie, pcie_read(pcie, AMD_MDB_TLP_IR_ENABLE_MISC) &
> > > +		   ~AMD_MDB_PCIE_IMR_ALL_MASK,
> > > +		   AMD_MDB_TLP_IR_ENABLE_MISC);
> > > +
> > > +	/* Clear pending TLP interrupts */
> > > +	pcie_write(pcie, pcie_read(pcie, AMD_MDB_TLP_IR_STATUS_MISC) &
> > > +		   AMD_MDB_PCIE_IMR_ALL_MASK,
> > > +		   AMD_MDB_TLP_IR_STATUS_MISC);
> > > +
> > > +	/* Enable all TLP Interrupts */
> > > +	val = pcie_read(pcie, AMD_MDB_TLP_IR_ENABLE_MISC);
> > > +	pcie_write(pcie, (val | AMD_MDB_PCIE_IMR_ALL_MASK),
> > > +		   AMD_MDB_TLP_IR_ENABLE_MISC);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static irqreturn_t amd_mdb_pcie_event_flow(int irq, void *args)
> > > +{
> > > +	struct amd_mdb_pcie *pcie = args;
> > > +	unsigned long val;
> > > +	int i;
> > > +
> > > +	val = pcie_read(pcie, AMD_MDB_TLP_IR_STATUS_MISC);
> >
> > Don't you need port lock here? What if the IRQ handler gets executed in a different
> > CPU?
> - Thank you for reviewing. Since we are using a single IRQ line, the IRQ
> line remains in a disabled (masked) state until the interrupts are
> handled. Therefore, there is no need for a locking mechanism in this
> case.
> > > +	val &= ~pcie_read(pcie, AMD_MDB_TLP_IR_MASK_MISC);
> > > +	for_each_set_bit(i, &val, 32)
> > > +		generic_handle_domain_irq(pcie->mdb_domain, i);
> > > +	pcie_write(pcie, val, AMD_MDB_TLP_IR_STATUS_MISC);
> >
> > Clear the IRQs before handling them.
> - Thank you for reviewing. This interrupt handling code was authored by
> Marc Zyngier. It is intentionally done this way so that IRQs are masked
> before they are handled.
> >
> > > +
> > > +	return IRQ_HANDLED;
> > > +}
> > > +
> > > +#define _IC(x, s)[AMD_MDB_PCIE_INTR_ ## x] = { __stringify(x), s }
> > > +
> > > +static const struct {
> > > +	const char	*sym;
> > > +	const char	*str;
> > > +} intr_cause[32] = {
> > > +	_IC(CMPL_TIMEOUT,	"completion timeout"),
> > > +	_IC(PM_PME_RCVD,	"PM_PME message received"),
> > > +	_IC(PME_TO_ACK_RCVD,	"PME_TO_ACK message received"),
> > > +	_IC(MISC_CORRECTABLE,	"Correctable error message"),
> > > +	_IC(NONFATAL,		"Non fatal error message"),
> > > +	_IC(FATAL,		"Fatal error message"),
> > > +};
> > > +
> > > +static void amd_mdb_mask_event_irq(struct irq_data *d)
> > > +{
> > > +	struct amd_mdb_pcie *pcie = irq_data_get_irq_chip_data(d);
> > > +	struct dw_pcie *pci = &pcie->pci;
> > > +	struct dw_pcie_rp *port = &pci->pp;
> > > +	u32 val;
> > > +
> > > +	raw_spin_lock(&port->lock);
> >
> > Why this lock is non-irq variant compared to amd_mdb_mask_intx_irq()?
> >
> > > +	val = pcie_read(pcie, AMD_MDB_TLP_IR_STATUS_MISC);
> - Here I need to mask the interrupts before handling. Refer
> https://gitenterprise.xilinx.com/Linux/linux-xlnx/blob/xlnx_rebase_v6.1-next/drivers/pci/controller/pcie-xilinx-cpm.c#L255
> > > +	val &= ~BIT(d->hwirq);
> > > +	pcie_write(pcie, val, AMD_MDB_TLP_IR_STATUS_MISC);
> > > +	raw_spin_unlock(&port->lock);
> > > +}
> > > +
> > > +static void amd_mdb_unmask_event_irq(struct irq_data *d)
> > > +{
> > > +	struct amd_mdb_pcie *pcie = irq_data_get_irq_chip_data(d);
> > > +	struct dw_pcie *pci = &pcie->pci;
> > > +	struct dw_pcie_rp *port = &pci->pp;
> > > +	u32 val;
> > > +
> > > +	raw_spin_lock(&port->lock);
> >
> > Same here.
> >
> > > +	val = pcie_read(pcie, AMD_MDB_TLP_IR_STATUS_MISC);
> > > +	val |= BIT(d->hwirq);
> > > +	pcie_write(pcie, val, AMD_MDB_TLP_IR_STATUS_MISC);
> > > +	raw_spin_unlock(&port->lock);
> > > +}
> > > +
> > > +static struct irq_chip amd_mdb_event_irq_chip = {
> > > +	.name		= "RC-Event",
> >
> > Add the 'AMD MDB' prefix.
> - Thank you for reviewing, will add this in next patch.
> >
> > > +	.irq_mask	= amd_mdb_mask_event_irq,
> > > +	.irq_unmask	= amd_mdb_unmask_event_irq,
> > > +};
> > > +
> > > +static int amd_mdb_pcie_event_map(struct irq_domain *domain,
> > > +				  unsigned int irq, irq_hw_number_t hwirq)
> > > +{
> > > +	irq_set_chip_and_handler(irq, &amd_mdb_event_irq_chip,
> > > +				 handle_level_irq);
> > > +	irq_set_chip_data(irq, domain->host_data);
> > > +	irq_set_status_flags(irq, IRQ_LEVEL);
> >
> > newline here.
> - Thank you for reviewing, will update this in next patch.
> >
> > > +	return 0;
> > > +}
> > > +
> > > +static const struct irq_domain_ops event_domain_ops = {
> > > +	.map = amd_mdb_pcie_event_map,
> > > +};
> > > +
> > > +static void amd_mdb_pcie_free_irq_domains(struct amd_mdb_pcie *pcie)
> > > +{
> > > +	if (pcie->intx_domain) {
> > > +		irq_domain_remove(pcie->intx_domain);
> > > +		pcie->intx_domain = NULL;
> > > +	}
> > > +
> > > +	if (pcie->mdb_domain) {
> > > +		irq_domain_remove(pcie->mdb_domain);
> > > +		pcie->mdb_domain = NULL;
> > > +	}
> > > +}
> > > +
> > > +/**
> > > + * amd_mdb_pcie_init_irq_domains - Initialize IRQ domain
> > > + * @pcie: PCIe port information
> > > + * @pdev: platform device
> > > + * Return: '0' on success and error value on failure
> > > + */
> > > +static int amd_mdb_pcie_init_irq_domains(struct amd_mdb_pcie *pcie,
> > > +					 struct platform_device *pdev)
> > > +{
> > > +	struct dw_pcie *pci = &pcie->pci;
> > > +	struct dw_pcie_rp *pp = &pci->pp;
> > > +	struct device *dev = &pdev->dev;
> > > +	struct device_node *node = dev->of_node;
> > > +	struct device_node *pcie_intc_node;
> > > +
> > > +	/* Setup INTx */
> > > +	pcie_intc_node = of_get_next_child(node, NULL);
> > > +	if (!pcie_intc_node) {
> > > +		dev_err(dev, "No PCIe Intc node found\n");
> > > +		return -EINVAL;
> >
> > -ENODATA
> >
> > > +	}
> > > +
> > > +	pcie->mdb_domain = irq_domain_add_linear(pcie_intc_node, 32,
> > > +						 &event_domain_ops, pcie);
> > > +	if (!pcie->mdb_domain)
> > > +		goto out;
> > > +
> > > +	irq_domain_update_bus_token(pcie->mdb_domain, DOMAIN_BUS_NEXUS);
> > > +
> > > +	pcie->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
> > > +						  &amd_intx_domain_ops, pcie);
> > > +	if (!pcie->intx_domain)
> > > +		goto mdb_out;
> > > +
> > > +	irq_domain_update_bus_token(pcie->intx_domain, DOMAIN_BUS_WIRED);
> > > +
> > > +	of_node_put(pcie_intc_node);
> >
> > Move this before irq_domain_update_bus_token().
> >
> > > +	raw_spin_lock_init(&pp->lock);
> > > +
> > > +	return 0;
> > > +mdb_out:
> > > +	amd_mdb_pcie_free_irq_domains(pcie);
> > > +out:
> > > +	of_node_put(pcie_intc_node);
> > > +	dev_err(dev, "Failed to allocate IRQ domains\n");
> >
> > It is not a good practice to print error logs and return a fixed error
> > no in goto labels. If one more condition gets added to this function
> > later, these all should be moved to the respective error check. So
> > please do so now itself even if it adds a bit of redundancy.
> >
> > The error log could be improved by stating which domain failed.
> Thank you for reviewing, Will update this in next patch.
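
[Editor's note: a hypothetical reworked shape per Mani's comment: each check prints its own message and returns directly, instead of funnelling into a shared goto label. A fragment only, reusing names from the patch, not compile-tested:]

```c
/* Per-check error reporting, as suggested in the review. The messages
 * and cleanup shown here are illustrative, not the author's final code.
 */
pcie->mdb_domain = irq_domain_add_linear(pcie_intc_node, 32,
					 &event_domain_ops, pcie);
if (!pcie->mdb_domain) {
	dev_err(dev, "Failed to add MDB IRQ domain\n");
	of_node_put(pcie_intc_node);
	return -ENOMEM;
}

pcie->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
					  &amd_intx_domain_ops, pcie);
if (!pcie->intx_domain) {
	dev_err(dev, "Failed to add INTx IRQ domain\n");
	irq_domain_remove(pcie->mdb_domain);
	pcie->mdb_domain = NULL;
	of_node_put(pcie_intc_node);
	return -ENOMEM;
}
```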
> >
> > > +
> > > +	return -ENOMEM;
> > > +}
> > > +
> > > +static irqreturn_t amd_mdb_pcie_intr_handler(int irq, void *args)
> > > +{
> > > +	struct amd_mdb_pcie *pcie = args;
> > > +	struct device *dev;
> > > +	struct irq_data *d;
> > > +
> > > +	dev = pcie->pci.dev;
> > > +
> > > +	d = irq_domain_get_irq_data(pcie->mdb_domain, irq);
> > > +	if (intr_cause[d->hwirq].str)
> > > +		dev_warn(dev, "%s\n", intr_cause[d->hwirq].str);
> >
> > Move this to dev_dbg().
> - Thank you for the feedback. This message is intended as a warning for
> the end user to indicate that an event has occurred, and not just for
> debugging purposes. Hence, we believe dev_warn() is the appropriate log
> level for this scenario. Please let us know if you have further concerns.
> > > +	else
> > > +		dev_warn(dev, "Unknown IRQ %ld\n", d->hwirq);
> >
> > dev_WARN_ONCE() to avoid spamming the log.
> - Thank you for reviewing, I'll change it in the next patch.
> >
> > > +
> > > +	return IRQ_HANDLED;
> > > +}
> > > +
> > > +static int amd_mdb_setup_irq(struct amd_mdb_pcie *pcie,
> > > +			     struct platform_device *pdev)
> > > +{
> > > +	struct dw_pcie *pci = &pcie->pci;
> > > +	struct dw_pcie_rp *pp = &pci->pp;
> > > +	struct device *dev = &pdev->dev;
> > > +	int i, irq, err;
> > > +
> > > +	pp->irq = platform_get_irq(pdev, 0);
> > > +	if (pp->irq < 0)
> > > +		return pp->irq;
> > > +
> > > +	for (i = 0; i < ARRAY_SIZE(intr_cause); i++) {
> > > +		if (!intr_cause[i].str)
> > > +			continue;
> > > +		irq = irq_create_mapping(pcie->mdb_domain, i);
> > > +		if (!irq) {
> > > +			dev_err(dev, "Failed to map mdb domain interrupt\n");
> > > +			return -ENXIO;
> >
> > -ENOMEM
> >
> > > +		}
> > > +		err = devm_request_irq(dev, irq, amd_mdb_pcie_intr_handler,
> > > +				       IRQF_SHARED | IRQF_NO_THREAD,
> > > +				       intr_cause[i].sym, pcie);
> >
> > IRQ name should have the 'amd_mdb' prefix and also the domain number to
> > uniquely identify the interrupt belonging to each host bridge.
> - Thank you for the feedback. We will update the IRQ name to include the
> amd_mdb prefix in the next patch. However, including the domain number
> may not be necessary in this case since both controllers are not
> expected to be present simultaneously.
> > > +		if (err) {
> > > +			dev_err(dev, "Failed to request IRQ %d\n", irq);
> > > +			return err;
> > > +		}
> > > +	}
> > > +
> > > +	/* Plug the main event chained handler */
> > > +	err = devm_request_irq(dev, pp->irq, amd_mdb_pcie_event_flow,
> > > +			       IRQF_SHARED | IRQF_NO_THREAD, "pcie_irq", pcie);
> >
> > Same as above.
> - Thank you for the feedback, will update in the next patch.
> >
> > > +	if (err) {
> > > +		dev_err(dev, "Failed to request event IRQ %d\n", pp->irq);
> > > +		return err;
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int amd_mdb_add_pcie_port(struct amd_mdb_pcie *pcie,
> > > +				 struct platform_device *pdev)
> > > +{
> > > +	struct dw_pcie *pci = &pcie->pci;
> > > +	struct dw_pcie_rp *pp = &pci->pp;
> > > +	struct device *dev = &pdev->dev;
> > > +	int ret;
> > > +
> > > +	pcie->slcr = devm_platform_ioremap_resource_byname(pdev, "slcr");
> > > +	if (IS_ERR(pcie->slcr))
> > > +		return PTR_ERR(pcie->slcr);
> > > +
> > > +	ret = amd_mdb_pcie_init_irq_domains(pcie, pdev);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	amd_mdb_pcie_init_port(pcie);
> > > +
> > > +	ret = amd_mdb_setup_irq(pcie, pdev);
> > > +	if (ret) {
> > > +		dev_err(dev, "Failed to set up interrupts\n");
> > > +		goto out;
> > > +	}
> > > +
> > > +	pp->ops = &amd_mdb_pcie_host_ops;
> > > +
> > > +	ret = dw_pcie_host_init(pp);
> > > +	if (ret) {
> > > +		dev_err(dev, "Failed to initialize host\n");
> > > +		goto out;
> > > +	}
> > > +
> > > +	return 0;
> > > +
> > > +out:
> > > +	amd_mdb_pcie_free_irq_domains(pcie);
> > > +	return ret;
> > > +}
> > > +
> > > +static int amd_mdb_pcie_probe(struct platform_device *pdev)
> > > +{
> > > +	struct device *dev = &pdev->dev;
> > > +	struct amd_mdb_pcie *pcie;
> > > +	struct dw_pcie *pci;
> > > +
> > > +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
> > > +	if (!pcie)
> > > +		return -ENOMEM;
> > > +
> > > +	pci = &pcie->pci;
> > > +	pci->dev = dev;
> > > +
> > > +	platform_set_drvdata(pdev, pcie);
> > > +
> > > +	return amd_mdb_add_pcie_port(pcie, pdev);
> > > +}
> > > +
> > > +static const struct of_device_id amd_mdb_pcie_of_match[] = {
> > > +	{
> > > +		.compatible = "amd,versal2-mdb-host",
> > > +	},
> > > +	{},
> > > +};
> > > +
> > > +static struct platform_driver amd_mdb_pcie_driver = {
> > > +	.driver = {
> > > +		.name	= "amd-mdb-pcie",
> > > +		.of_match_table = amd_mdb_pcie_of_match,
> > > +		.suppress_bind_attrs = true,
> >
> > Consider using PROBE_PREFER_ASYNCHRONOUS to speed up the probe.
> 
> - Thank you for the suggestion. However, in the future the probe
> function will involve handling the PCIe PERST# signal, which may require
> synchronous execution to ensure proper initialization of the device and
> its dependencies.
> >
> > - Mani
> >
> > --
> > மணிவண்ணன் சதாசிவம்



