On Fri, Mar 11, 2022 at 10:01:47PM +0300, Serge Semin wrote:
> On Fri, Mar 11, 2022 at 11:11:34PM +0530, Manivannan Sadhasivam wrote:
> > On Thu, Mar 10, 2022 at 07:31:23PM +0300, Serge Semin wrote:
> > > On Wed, Mar 09, 2022 at 03:12:01PM -0600, Frank Li wrote:
> > > > From: Manivannan Sadhasivam <manivannan.sadhasivam@xxxxxxxxxx>
> > > > 
> > > > When eDMA is controlled by the Endpoint (EP), the current logic incorrectly
> > > > programs the source and destination addresses for read and write. Since the
> > > > Root complex and Endpoint use the opposite channels for read/write, fix the
> > > > issue by finding out the read operation first and programming the eDMA
> > > > accordingly.
> > > > 
> > > > Cc: stable@xxxxxxxxxxxxxxx
> > > > Fixes: bd96f1b2f43a ("dmaengine: dw-edma: support local dma device transfer semantics")
> > > > Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver")
> > > > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@xxxxxxxxxx>
> > > > Signed-off-by: Frank Li <Frank.Li@xxxxxxx>
> > > > ---
> > > > No change between v1 to v4
> > > > 
> > > >  drivers/dma/dw-edma/dw-edma-core.c | 32 +++++++++++++++++++++++++++++-
> > > >  1 file changed, 31 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
> > > > index 66dc650577919..507f08db1aad3 100644
> > > > --- a/drivers/dma/dw-edma/dw-edma-core.c
> > > > +++ b/drivers/dma/dw-edma/dw-edma-core.c
> > > > @@ -334,6 +334,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
> > > >          struct dw_edma_chunk *chunk;
> > > >          struct dw_edma_burst *burst;
> > > >          struct dw_edma_desc *desc;
> > > > +        bool read = false;
> > > >          u32 cnt = 0;
> > > >          int i;
> > > > 
> > > > @@ -424,7 +425,36 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
> > > >                  chunk->ll_region.sz += burst->sz;
> > > >                  desc->alloc_sz += burst->sz;
> > > > 
> > > > -                if (chan->dir == EDMA_DIR_WRITE) {
> > > > +                /****************************************************************
> > > > +                 *
> > > > +                 *       Root Complex                            Endpoint
> > > > +                 * +-----------------------+             +----------------------+
> > > > +                 * |                       |    TX CH    |                      |
> > > > +                 * |                       |             |                      |
> > > > +                 * |      DEV_TO_MEM       <-------------+      MEM_TO_DEV      |
> > > > +                 * |                       |             |                      |
> > > > +                 * |                       |             |                      |
> > > > +                 * |      MEM_TO_DEV       +------------->      DEV_TO_MEM      |
> > > > +                 * |                       |             |                      |
> > > > +                 * |                       |    RX CH    |                      |
> > > > +                 * +-----------------------+             +----------------------+
> > > > +                 *
> > > > +                 * If eDMA is controlled by the Root complex, TX channel
> > > > +                 * (EDMA_DIR_WRITE) is used for memory read (DEV_TO_MEM) and RX
> > > > +                 * channel (EDMA_DIR_READ) is used for memory write (MEM_TO_DEV).
> > > > +                 *
> > > > +                 * If eDMA is controlled by the endpoint, RX channel
> > > > +                 * (EDMA_DIR_READ) is used for memory read (DEV_TO_MEM) and TX
> > > > +                 * channel (EDMA_DIR_WRITE) is used for memory write (MEM_TO_DEV).
> > > 
> > > Either I have some wrong notion about this issue, or something is wrong
> > > with the explanation above and with this fix below.
> > > 
> > > From my understanding of the possible DW eDMA IP-core setups, the
> > > sketch above and the text below it are incorrect. Here is the way the
> > > DW eDMA can be used:
> > > 1) Embedded into the DW PCIe Host/EP controller. In this case the
> > > CPU/Application memory is the memory of the CPU attached to the
> > > host/EP controller, while the remote (link partner) memory is the PCIe
> > > bus memory.
> > > In this case the MEM_TO_DEV operation is supposed to be performed by
> > > the Tx/Write channels, while the DEV_TO_MEM operation is performed by
> > > the Rx/Read channels.
> > > 
> > 
> > I'm not aware of (and not even sure about) the use of eDMA in the PCIe
> > host. If that's the case, how can the endpoint access it from the
> > remote perspective? Do you have a usecase or an example where it is
> > used or even documented?
> 
> I am aware of it. I've got an SoC with DW PCIe Host v4.60/v4.70 and
> eDMA enabled for each of them. I also possess several manuals of the
> DW PCIe Host and End-points of various versions. Both Host and
> End-points can have eDMA enabled. But it's possible to have the eDMA
> accessed via the PCIe wire only for the End-points and only if the
> IP-core is accordingly synthesized. Other than that the eDMA is
> configurable from the Local CPU only.
> 

Interesting!

> > > Note it's applicable for both the Host and End-point cases, when Linux
> > > is running on the CPU-side of the eDMA controller. So if it's a DW
> > > PCIe end-point, then MEM_TO_DEV means copying data from the local CPU
> > > memory into the remote memory. In general the remote memory can be
> > > either some PCIe device on the bus or the Root Complex' CPU memory,
> > > each of which is some remote device anyway from the Local CPU
> > > perspective.
> > > 
> > > 2) Embedded into the PCIe EP. This case is implemented in the
> > > drivers/dma/dw-edma/dw-edma-pcie.c driver. AFAICS from the commits log
> > > and from the driver code, that device is a Synopsys PCIe EndPoint IP
> > > prototype kit. It is a normal PCIe peripheral device with eDMA
> > > embedded, whose CPU/Application interface is connected to some
> > > embedded SRAM while the remote (link partner) interface is directed
> > > towards the PCIe bus. At the same time the device is set up and
> > > handled by the code running on a CPU connected to the PCIe Host
> > > controller. I think that in order to preserve the normal DMA operation
> > > semantics we still need to consider the MEM_TO_DEV/DEV_TO_MEM
> > > operations from the host CPU perspective, since that's the side the
> > > DMA controller is supposed to be set up from. In this case MEM_TO_DEV
> > > is supposed to be used to copy data from the host CPU memory into the
> > > remote device memory. It means allocating the Rx/Read channel on the
> > > eDMA controller, so data would be read from the local CPU memory and
> > > copied to the PCIe device SRAM. The logic of the DEV_TO_MEM direction
> > > would be just flipped: the eDMA PCIe device shall use the Tx/Write
> > > channel to copy data from its SRAM into the Host CPU memory.
> > > 
> > > Please note, as I understand it, the case 2) describes the Synopsys
> > > PCIe EndPoint IP prototype kit, which is based on some FPGA code. It's
> > > just a test setup with no real application, while the case 1) is a
> > > real setup available on our SoC and I guess on yours.
> > > 
> > > So what I suggest in the framework of this patch is to implement the
> > > case 1) only. The case 2), as it's an artificial one, can be manually
> > > handled by the DMA client drivers. BTW there aren't any available in
> > > the kernel anyway. The only exception is an old attempt to get an eDMA
> > > IP test-driver mainlined into the kernel:
> > > https://patchwork.kernel.org/project/linux-pci/patch/cc195ac53839b318764c8f6502002cd6d933a923.1547230339.git.gustavo.pimentel@xxxxxxxxxxxx/
> > > But that was a long time ago, so it's unlikely to be accepted at all.
> > > 
> > > What do you think?
> > > 
> > 
> > As per my understanding, the eDMA is solely used in the PCIe endpoint,
> > and the access to it happens over the PCIe bus or by the local CPU.
> 
> Not fully correct. Root Ports can also have eDMA embedded. In that
> case the eDMA can only be accessible from the local CPU. At the same
> time the DW PCIe End-point case is specific to the IP-core synthesis
> parameters. It's always possible to access the eDMA CSRs from the
> local CPU, but a particular End-point BAR can be pre-synthesized to
> map either the Port Logic, or eDMA, or iATU CSRs. Thus a PCIe root
> port can perform a full End-point configuration. Anyway, the case of
> the eDMA functionality being accessible over the PCIe wire doesn't
> really make much sense with no info regarding the application logic
> hidden behind the PCIe End-point interface, since the SAR/DAR LLP is
> supposed to be initialized with an address from the local
> (application) memory space.
> 

Thanks for the explanation, it clarifies my doubt. I got misled by the
earlier commits...

> So AFAICS the main usecase of the controller is 1), when eDMA is a
> part of the Root Port/End-point and only the local CPU is supposed to
> access and configure it.
> 
> I can resend this patch with my fix to the problem. What do you think?
> 

Yes, please do.

Thanks,
Mani

> -Sergey
> 
> > 
> > The commit from Alan Mikhak is what I took as a reference since the
> > patch was already merged:
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/dma/dw-edma?id=bd96f1b2f43a39310cc576bb4faf2ea24317a4c9
> > 
> > Thanks,
> > Mani
> > 
> > > -Sergey
> > > 
> > > > +                 *
> > > > +                 ****************************************************************/
> > > > +
> > > > +                if ((dir == DMA_DEV_TO_MEM && chan->dir == EDMA_DIR_READ) ||
> > > > +                    (dir == DMA_MEM_TO_DEV && chan->dir == EDMA_DIR_WRITE))
> > > > +                        read = true;
> > > 
> > > Seeing the driver supports only two directions,
> > > DMA_DEV_TO_MEM/DMA_MEM_TO_DEV and EDMA_DIR_READ/EDMA_DIR_WRITE, this
> > > conditional statement seems redundant.
> > > 
> > > > +
> > > > +                /* Program the source and destination addresses for DMA read/write */
> > > > +                if (read) {
> > > >                          burst->sar = src_addr;
> > > >                          if (xfer->type == EDMA_XFER_CYCLIC) {
> > > >                                  burst->dar = xfer->xfer.cyclic.paddr;
> > > > -- 
> > > > 2.24.0.rc1
> > > > 
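The direction-to-channel mapping debated above can be summed up in a small
standalone sketch. This is not the dw-edma driver code: the two enums are
simplified stand-ins for the kernel's dma_transfer_direction and the
driver's channel direction, and xfer_is_read() is a hypothetical helper
that merely mirrors the check added by the quoted hunk.

  #include <stdbool.h>
  #include <stdio.h>

  /* Simplified stand-ins for the dmaengine and dw-edma direction types. */
  enum dma_transfer_direction { DMA_MEM_TO_DEV, DMA_DEV_TO_MEM };
  enum dw_edma_dir { EDMA_DIR_WRITE, EDMA_DIR_READ }; /* TX and RX channels */

  /*
   * Report whether the burst should be programmed as a "read"
   * (SAR = source, DAR = destination), following the condition in the
   * quoted patch: DEV_TO_MEM on the RX/read channel or MEM_TO_DEV on the
   * TX/write channel.
   */
  static bool xfer_is_read(enum dma_transfer_direction dir,
                           enum dw_edma_dir chan_dir)
  {
          return (dir == DMA_DEV_TO_MEM && chan_dir == EDMA_DIR_READ) ||
                 (dir == DMA_MEM_TO_DEV && chan_dir == EDMA_DIR_WRITE);
  }

  int main(void)
  {
          /* DEV_TO_MEM served by the RX/read channel -> prints 1 */
          printf("DEV_TO_MEM on RX channel -> read: %d\n",
                 xfer_is_read(DMA_DEV_TO_MEM, EDMA_DIR_READ));
          /* DEV_TO_MEM served by the TX/write channel -> prints 0 */
          printf("DEV_TO_MEM on TX channel -> read: %d\n",
                 xfer_is_read(DMA_DEV_TO_MEM, EDMA_DIR_WRITE));
          return 0;
  }

Built as a normal C program, the first call reports 1, matching the
endpoint-controlled case from the comment block where DEV_TO_MEM lands on
the RX/read channel, while the second reports 0 for the same DEV_TO_MEM
request on the TX/write channel, i.e. the Root-Complex-controlled case.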