Note it's applicable to both the Host and the End-point cases, i.e.
whenever Linux is running on the CPU side of the eDMA controller. So
if it's a DW PCIe end-point, then MEM_TO_DEV means copying data from
the local CPU memory into the remote memory. In general the remote
memory can be either some PCIe device on the bus or the Root
Complex's CPU memory, each of which is a remote device from the local
CPU's perspective anyway.
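
To make that concrete, below is a minimal sketch of how a DMA client
running on the end-point CPU could set a MEM_TO_DEV transfer up via
the generic dmaengine slave API. The edma_copy_to_remote() helper,
the "tx" channel name and the error codes are hypothetical, purely
for illustration (completion handling and cleanup are omitted for
brevity):

  #include <linux/dmaengine.h>
  #include <linux/dma-mapping.h>

  /* Hypothetical helper running on the end-point CPU: push a local
   * buffer to the remote (link partner) memory at remote_addr.
   */
  static int edma_copy_to_remote(struct device *dev, void *buf, size_t len,
                                 dma_addr_t remote_addr)
  {
          struct dma_slave_config cfg = { };
          struct dma_async_tx_descriptor *desc;
          struct dma_chan *chan;
          dma_addr_t src;

          chan = dma_request_chan(dev, "tx");  /* channel name made up */
          if (IS_ERR(chan))
                  return PTR_ERR(chan);

          /* MEM_TO_DEV from the local CPU's point of view: the local
           * memory is the source, the remote memory is the "device".
           */
          cfg.direction = DMA_MEM_TO_DEV;
          cfg.dst_addr = remote_addr;
          dmaengine_slave_config(chan, &cfg);

          src = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
          if (dma_mapping_error(dev, src))
                  goto err_release;

          desc = dmaengine_prep_slave_single(chan, src, len, DMA_MEM_TO_DEV,
                                             DMA_PREP_INTERRUPT);
          if (!desc)
                  goto err_unmap;

          dmaengine_submit(desc);
          dma_async_issue_pending(chan);
          return 0;

  err_unmap:
          dma_unmap_single(dev, src, len, DMA_TO_DEVICE);
  err_release:
          dma_release_channel(chan);
          return -EIO;
  }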

2) Embedded into a PCIe EP. This case is implemented in the
drivers/dma/dw-edma/dw-edma-pcie.c driver. AFAICS from the commit log
and from the driver code, that device is a Synopsys PCIe EndPoint IP
prototype kit. It is a normal PCIe peripheral device with an embedded
eDMA, whose CPU/Application interface is connected to some embedded
SRAM, while the remote (link partner) interface is directed towards
the PCIe bus. At the same time the device is set up and handled by
the code running on a CPU connected to the PCIe Host controller. I
think that in order to preserve the normal DMA operation semantics we
still need to consider the MEM_TO_DEV/DEV_TO_MEM operations from the
host CPU perspective, since that's the side the DMA controller is
supposed to be set up from. In this case MEM_TO_DEV is supposed to be
used to copy data from the host CPU memory into the remote device
memory. It means allocating an Rx/Read channel on the eDMA
controller, so data would be read from the local CPU memory and
copied into the PCIe device SRAM. The logic of the DEV_TO_MEM
direction would be just flipped: the eDMA PCIe device shall use a
Tx/Write channel to copy data from its SRAM into the Host CPU memory.
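
Just to illustrate that flipped mapping, a trivial sketch (the
EDMA_DIR_* values follow the dw-edma driver's enum dw_edma_dir; the
helper itself is hypothetical):

  /* Hypothetical helper: pick the eDMA channel direction for a
   * transfer requested from the host CPU when the eDMA is embedded
   * in the remote end-point (case 2). MEM_TO_DEV (host memory ->
   * end-point SRAM) takes the Rx/Read channel, DEV_TO_MEM (end-point
   * SRAM -> host memory) takes the Tx/Write channel.
   */
  static enum dw_edma_dir remote_edma_chan_dir(enum dma_transfer_direction dir)
  {
          return dir == DMA_MEM_TO_DEV ? EDMA_DIR_READ : EDMA_DIR_WRITE;
  }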

Please note that, as I understand it, case 2) describes the Synopsys
PCIe EndPoint IP prototype kit, which is based on some FPGA code.
It's just a test setup with no real application, while case 1) is a
real setup available on our SoC and, I guess, on yours.

So what I suggest in the framework of this patch is to implement
case 1) only, while case 2), being an artificial one, can be manually
handled by the DMA client drivers. BTW, there are no such drivers
available in the kernel anyway. The only exception is an old attempt
to get an eDMA IP test driver mainlined into the kernel:
https://patchwork.kernel.org/project/linux-pci/patch/cc195ac53839b318764c8f6502002cd6d933a923.1547230339.git.gustavo.pimentel@xxxxxxxxxxxx/
But that was a long time ago, so it's unlikely to be accepted at all.

What do you think?

-Sergey

> +		 *
> +		 ****************************************************************/
> +

> +		if ((dir == DMA_DEV_TO_MEM && chan->dir == EDMA_DIR_READ) ||
> +		    (dir == DMA_DEV_TO_MEM && chan->dir == EDMA_DIR_WRITE))
> +			read = true;

Seeing the driver supports only two directions,
DMA_DEV_TO_MEM/DMA_MEM_TO_DEV and EDMA_DIR_READ/EDMA_DIR_WRITE, this
conditional statement seems redundant.
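
If so, the whole check could be collapsed into a single comparison,
something like this (just a sketch, assuming the transfer/channel
direction consistency is validated elsewhere):

  /* DEV_TO_MEM maps onto the Read channel, MEM_TO_DEV onto the
   * Write one, irrespective of chan->dir.
   */
  read = (dir == DMA_DEV_TO_MEM);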

> +
> +		/* Program the source and destination addresses for DMA read/write */
> +		if (read) {
>  			burst->sar = src_addr;
>  			if (xfer->type == EDMA_XFER_CYCLIC) {
>  				burst->dar = xfer->xfer.cyclic.paddr;
> -- 
> 2.24.0.rc1
> 


