RE: [EXT] [PATCH net-next V3 3/3] net: axienet: Introduce dmaengine support

> -----Original Message-----
> From: Subbaraya Sundeep Bhatta <sbhatta@xxxxxxxxxxx>
> Sent: Wednesday, May 10, 2023 7:14 PM
> To: Gaddam, Sarath Babu Naidu
> <sarath.babu.naidu.gaddam@xxxxxxx>; davem@xxxxxxxxxxxxx;
> edumazet@xxxxxxxxxx; kuba@xxxxxxxxxx; pabeni@xxxxxxxxxx;
> robh+dt@xxxxxxxxxx; krzysztof.kozlowski+dt@xxxxxxxxxx
> Cc: linux@xxxxxxxxxxxxxxx; Simek, Michal <michal.simek@xxxxxxx>;
> Pandey, Radhey Shyam <radhey.shyam.pandey@xxxxxxx>;
> netdev@xxxxxxxxxxxxxxx; devicetree@xxxxxxxxxxxxxxx; linux-arm-
> kernel@xxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; Sarangi,
> Anirudha <anirudha.sarangi@xxxxxxx>; Katakam, Harini
> <harini.katakam@xxxxxxx>; git (AMD-Xilinx) <git@xxxxxxx>
> Subject: RE: [EXT] [PATCH net-next V3 3/3] net: axienet: Introduce
> dmaengine support
> 
> Hi,
> 
> >-----Original Message-----
> >From: Sarath Babu Naidu Gaddam
> <sarath.babu.naidu.gaddam@xxxxxxx>
> >Sent: Wednesday, May 10, 2023 2:21 PM
> >To: davem@xxxxxxxxxxxxx; edumazet@xxxxxxxxxx; kuba@xxxxxxxxxx;
> >pabeni@xxxxxxxxxx; robh+dt@xxxxxxxxxx;
> >krzysztof.kozlowski+dt@xxxxxxxxxx
> >Cc: linux@xxxxxxxxxxxxxxx; michal.simek@xxxxxxx;
> >radhey.shyam.pandey@xxxxxxx; netdev@xxxxxxxxxxxxxxx;
> >devicetree@xxxxxxxxxxxxxxx; linux-arm-kernel@xxxxxxxxxxxxxxxxxxx;
> >linux- kernel@xxxxxxxxxxxxxxx; anirudha.sarangi@xxxxxxx;
> >harini.katakam@xxxxxxx; sarath.babu.naidu.gaddam@xxxxxxx;
> git@xxxxxxx
> >Subject: [PATCH net-next V3 3/3] net: axienet: Introduce dmaengine
> >support
> >
> >Add dmaengine framework support to communicate with the Xilinx
> >DMA engine driver (AXI DMA).
> >
> >The AXI Ethernet driver uses separate channels for transmit and receive.
> >Add support for these channels to handle TX and RX with skb and the
> >appropriate callbacks. Also add the AXI Ethernet core interrupt for
> >dmaengine framework support.
> >
> >The dmaengine framework was extended for metadata API support during
> >the axidma RFC[1] discussion. However, it still needs further
> >enhancements to make it well suited for Ethernet use cases. The Ethernet
> >features, i.e. ethtool set/get of DMA IP properties, ndo_poll_controller,
> >and triggering a reset of the DMA IP from the Ethernet driver, are not
> >supported (mentioned in TODO) and require follow-up discussion and
> >dmaengine framework enhancements.
> >
> >[1]: https://lore.kernel.org/lkml/1522665546-10035-1-git-send-email-radheys@xxxxxxxxxx
> >
> >Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xxxxxxxxxx>
> >Signed-off-by: Sarath Babu Naidu Gaddam <sarath.babu.naidu.gaddam@xxxxxxx>
> >---
> >Performance numbers (Mbps):
> >
> >         |    | TCP | UDP |
> >         ------------------
> >         | Tx | 920 | 800 |
> >         | Rx | 620 | 910 |
> >
> >Changes in V3:
> >1) New patch for dmaengine framework support.
> >---
> > drivers/net/ethernet/xilinx/xilinx_axienet.h  |   6 +
> > .../net/ethernet/xilinx/xilinx_axienet_main.c | 331 +++++++++++++++++-
> > 2 files changed, 335 insertions(+), 2 deletions(-)
> >
> >diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h
> >b/drivers/net/ethernet/xilinx/xilinx_axienet.h
> >index 10917d997d27..fbe00c5390d5 100644
> >--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
> >+++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
> >@@ -436,6 +436,9 @@ struct axidma_bd {
> >  * @coalesce_count_tx:	Store the irq coalesce on TX side.
> >  * @coalesce_usec_tx:	IRQ coalesce delay for TX
> >  * @has_dmas:	flag to check dmaengine framework usage.
> >+ * @tx_chan:	TX DMA channel.
> >+ * @rx_chan:	RX DMA channel.
> >+ * @skb_cache:	Custom skb slab allocator
> >  */
> > struct axienet_local {
> > 	struct net_device *ndev;
> >@@ -501,6 +504,9 @@ struct axienet_local {
> > 	u32 coalesce_count_tx;
> > 	u32 coalesce_usec_tx;
> > 	u8  has_dmas;
> >+	struct dma_chan *tx_chan;
> >+	struct dma_chan *rx_chan;
> >+	struct kmem_cache *skb_cache;
> > };
> >
> > /**
> >diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
> >b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
> >index 8678fc09245a..662c77ff0e99 100644
> >--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
> >+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
> >@@ -37,6 +37,9 @@
> > #include <linux/phy.h>
> > #include <linux/mii.h>
> > #include <linux/ethtool.h>
> >+#include <linux/dmaengine.h>
> >+#include <linux/dma-mapping.h>
> >+#include <linux/dma/xilinx_dma.h>
> >
> > #include "xilinx_axienet.h"
> >
> >@@ -46,6 +49,9 @@
> > #define TX_BD_NUM_MIN			(MAX_SKB_FRAGS + 1)
> > #define TX_BD_NUM_MAX			4096
> > #define RX_BD_NUM_MAX			4096
> >+#define DMA_NUM_APP_WORDS		5
> >+#define LEN_APP				4
> >+#define RX_BUF_NUM_DEFAULT		128
> >
> > /* Must be shorter than length of ethtool_drvinfo.driver field to fit */
> > #define DRIVER_NAME		"xaxienet"
> >@@ -56,6 +62,16 @@
> >
> > #define AXIENET_USE_DMA(lp) ((lp)->has_dmas)
> >
> >+struct axi_skbuff {
> >+	struct scatterlist sgl[MAX_SKB_FRAGS + 1];
> >+	struct dma_async_tx_descriptor *desc;
> >+	dma_addr_t dma_address;
> >+	struct sk_buff *skb;
> >+	int sg_len;
> >+} __packed;
> >+
> >+static int axienet_rx_submit_desc(struct net_device *ndev);
> >+
> > /* Match table for of_platform binding */
> > static const struct of_device_id axienet_of_match[] = {
> > 	{ .compatible = "xlnx,axi-ethernet-1.00.a", },
> >@@ -728,6 +744,108 @@ static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
> > 	return 0;
> > }
> >
> >+/**
> >+ * axienet_dma_tx_cb - DMA engine callback for TX channel.
> >+ * @data:       Pointer to the axi_skbuff structure
> >+ * @result:     error reporting through dmaengine_result.
> >+ * This function is called by the dmaengine driver for the TX channel
> >+ * to notify that the transmit is done.
> >+ */
> >+static void axienet_dma_tx_cb(void *data, const struct dmaengine_result *result)
> >+{
> >+	struct axi_skbuff *axi_skb = data;
> >+
> >+	struct net_device *netdev = axi_skb->skb->dev;
> >+	struct axienet_local *lp = netdev_priv(netdev);
> >+
> >+	u64_stats_update_begin(&lp->tx_stat_sync);
> >+	u64_stats_add(&lp->tx_bytes, axi_skb->skb->len);
> >+	u64_stats_add(&lp->tx_packets, 1);
> >+	u64_stats_update_end(&lp->tx_stat_sync);
> >+
> >+	dma_unmap_sg(lp->dev, axi_skb->sgl, axi_skb->sg_len, DMA_TO_DEVICE);
> >+	dev_kfree_skb_any(axi_skb->skb);
> >+	kmem_cache_free(lp->skb_cache, axi_skb);
> >+}
> >+
> >+/**
> >+ * axienet_start_xmit_dmaengine - Starts the transmission.
> >+ * @skb:        sk_buff pointer that contains data to be Txed.
> >+ * @ndev:       Pointer to net_device structure.
> >+ *
> >+ * Return: NETDEV_TX_OK on success,
> >+ *         NETDEV_TX_BUSY on any memory failure or SG error.
> >+ *
> >+ * This function is invoked from the ndo_start_xmit path to initiate
> >+ * transmission. It sets up the skb, the callback, the SG list, etc.
> >+ * Additionally, if checksum offloading is supported, it populates the
> >+ * AXI Stream Control fields with appropriate values.
> >+ */
> >+static netdev_tx_t
> >+axienet_start_xmit_dmaengine(struct sk_buff *skb, struct net_device *ndev)
> >+{
> >+	struct dma_async_tx_descriptor *dma_tx_desc = NULL;
> >+	struct axienet_local *lp = netdev_priv(ndev);
> >+	u32 app[DMA_NUM_APP_WORDS] = {0};
> >+	struct axi_skbuff *axi_skb;
> >+	u32 csum_start_off;
> >+	u32 csum_index_off;
> >+	int sg_len;
> >+	int ret;
> >+
> >+	sg_len = skb_shinfo(skb)->nr_frags + 1;
> >+	axi_skb = kmem_cache_zalloc(lp->skb_cache, GFP_KERNEL);
> >+	if (!axi_skb)
> >+		return NETDEV_TX_BUSY;
> >+
> >+	sg_init_table(axi_skb->sgl, sg_len);
> >+	ret = skb_to_sgvec(skb, axi_skb->sgl, 0, skb->len);
> >+	if (unlikely(ret < 0))
> >+		goto xmit_error_skb_sgvec;
> >+
> >+	ret = dma_map_sg(lp->dev, axi_skb->sgl, sg_len, DMA_TO_DEVICE);
> >+	if (ret == 0)
> >+		goto xmit_error_skb_sgvec;
> >+
> >+	/* Fill up app fields for checksum */
> >+	if (skb->ip_summed == CHECKSUM_PARTIAL) {
> >+		if (lp->features & XAE_FEATURE_FULL_TX_CSUM) {
> >+			/* Tx Full Checksum Offload Enabled */
> >+			app[0] |= 2;
> >+		} else if (lp->features & XAE_FEATURE_PARTIAL_TX_CSUM) {
> >+			csum_start_off = skb_transport_offset(skb);
> >+			csum_index_off = csum_start_off + skb->csum_offset;
> >+			/* Tx Partial Checksum Offload Enabled */
> >+			app[0] |= 1;
> >+			app[1] = (csum_start_off << 16) | csum_index_off;
> >+		}
> >+	} else if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
> >+		app[0] |= 2; /* Tx Full Checksum Offload Enabled */
> >+	}
> >+
> >+	dma_tx_desc = lp->tx_chan->device->device_prep_slave_sg(lp->tx_chan,
> >+			axi_skb->sgl, sg_len, DMA_MEM_TO_DEV,
> >+			DMA_PREP_INTERRUPT, (void *)app);
> >+
> >+	if (!dma_tx_desc)
> >+		goto xmit_error_prep;
> >+
> >+	axi_skb->skb = skb;
> >+	axi_skb->sg_len = sg_len;
> >+	dma_tx_desc->callback_param = axi_skb;
> >+	dma_tx_desc->callback_result = axienet_dma_tx_cb;
> >+	dmaengine_submit(dma_tx_desc);
> >+	dma_async_issue_pending(lp->tx_chan);
> >+
> >+	return NETDEV_TX_OK;
> >+
> >+xmit_error_prep:
> >+	dma_unmap_sg(lp->dev, axi_skb->sgl, sg_len, DMA_TO_DEVICE);
> >+xmit_error_skb_sgvec:
> >+	kmem_cache_free(lp->skb_cache, axi_skb);
> >+	return NETDEV_TX_BUSY;
> >+}
> >+
> > /**
> >  * axienet_tx_poll - Invoked once a transmit is completed by the
> >  * Axi DMA Tx channel.
> >@@ -912,7 +1030,42 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
> > 	if (!AXIENET_USE_DMA(lp))
> > 		return axienet_start_xmit_legacy(skb, ndev);
> > 	else
> >-		return NETDEV_TX_BUSY;
> >+		return axienet_start_xmit_dmaengine(skb, ndev);
> 
> You can avoid this if/else by:
>  if (AXIENET_USE_DMA(lp))
>          return axienet_start_xmit_dmaengine(skb, ndev);
> 
> and then there is no need to define the axienet_start_xmit_legacy function
> in patch 2/3. _legacy may not be the right name, since you support both the
> in-built DMA and the dma engine just by turning DT properties on/off. Also,
> does this driver fall back to using the in-built DMA if DMA_ENGINE is not
> compiled in?

We wanted to keep it as a separate wrapper to make future maintenance or
deprecation easier. It is only a preference and either approach is OK.
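
With your suggested early return, the dispatch would look roughly like the
sketch below (illustrative only, keeping the V3 function names):

    static netdev_tx_t
    axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
    {
            struct axienet_local *lp = netdev_priv(ndev);

            /* Dispatch to the dmaengine path when the DT selects it. */
            if (AXIENET_USE_DMA(lp))
                    return axienet_start_xmit_dmaengine(skb, ndev);

            /* Otherwise continue with the in-built AXI DMA handling. */
            return axienet_start_xmit_legacy(skb, ndev);
    }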

There is no fallback to the built-in DMA. The intention is to make dmaengine
the preferred approach and eventually deprecate the built-in DMA. If
DMA_ENGINE is not present, the DMA channel request fails and we exit
through the usual error path.
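
For illustration, that request and its failure path would be roughly as in
the sketch below (the helper name and the dma-name string are my own
assumptions here, not taken from this patch):

    /* Illustrative helper (hypothetical name): request the AXI DMA TX
     * channel. With CONFIG_DMA_ENGINE disabled, or with no DMA provider
     * bound, dma_request_chan() returns an ERR_PTR and the caller just
     * propagates the error through its usual exit path.
     */
    static int axienet_request_tx_chan(struct axienet_local *lp)
    {
            lp->tx_chan = dma_request_chan(lp->dev, "tx_chan0");
            if (IS_ERR(lp->tx_chan)) {
                    dev_err(lp->dev, "No Ethernet DMA (TX) channel found\n");
                    return PTR_ERR(lp->tx_chan);
            }
            return 0;
    }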

Will address the remaining review comments on 2/3 and 3/3 in the next
version.

Thanks,
Sarath





