Hello Roger,

On Wed, 30 Oct 2024 15:53:58 +0200
Roger Quadros <rogerq@xxxxxxxxxx> wrote:

> On J7 platforms, setting up multiple RX flows was failing
> as the RX free descriptor ring 0 is shared among all flows
> and we did not allocate enough elements in the RX free descriptor
> ring 0 to accommodate for all RX flows.
>
> This issue is not present on AM62 as separate pair of
> rings are used for free and completion rings for each flow.
>
> Fix this by allocating enough elements for RX free descriptor
> ring 0.
>
> However, we can no longer rely on desc_idx (descriptor based
> offsets) to identify the pages in the respective flows as
> free descriptor ring includes elements for all flows.
> To solve this, introduce a new swdata data structure to store
> flow_id and page. This can be used to identify which flow (page_pool)
> and page the descriptor belonged to when popped out of the
> RX rings.

[...]

> @@ -339,7 +339,7 @@ static int am65_cpsw_nuss_rx_push(struct am65_cpsw_common *common,
>  	struct device *dev = common->dev;
>  	dma_addr_t desc_dma;
>  	dma_addr_t buf_dma;
> -	void *swdata;
> +	struct am65_cpsw_swdata *swdata;

There's a reverse xmas-tree issue here: variables should be declared
from the longest line to the shortest.

[...]

>  static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma)
>  {
> -	struct am65_cpsw_rx_flow *flow = data;
> +	struct am65_cpsw_rx_chn *rx_chn = data;
>  	struct cppi5_host_desc_t *desc_rx;
> -	struct am65_cpsw_rx_chn *rx_chn;
> +	struct am65_cpsw_swdata *swdata;
>  	dma_addr_t buf_dma;
>  	u32 buf_dma_len;
> -	void *page_addr;
> -	void **swdata;
> -	int desc_idx;
> +	struct page *page;
> +	u32 flow_id;

Here as well.
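For instance (untested, just to illustrate the ordering convention, with
declarations sorted from the longest line to the shortest):

```c
	struct am65_cpsw_rx_chn *rx_chn = data;
	struct cppi5_host_desc_t *desc_rx;
	struct am65_cpsw_swdata *swdata;
	dma_addr_t buf_dma;
	struct page *page;
	u32 buf_dma_len;
	u32 flow_id;
```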
>  	rx_chn->rx_chn = k3_udma_glue_request_rx_chn(dev, "rx", &rx_cfg);

> @@ -2455,10 +2441,12 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
>  		flow = &rx_chn->flows[i];
>  		flow->id = i;
>  		flow->common = common;
> +		flow->irq = -EINVAL;

I've tried to follow the code and I don't get that assignment to the irq
field. Does it really relate to the current change, or is it another
issue being fixed along the way? Sorry if I missed the point here.

Thanks,
Maxime