Lorenzo Bianconi wrote:

> Introduce xdp_update_skb_shared_info routine to update frags array
> metadata from a given xdp_buffer/xdp_frame. We do not need to reset
> frags array since it is already initialized by the driver.
> Rely on xdp_update_skb_shared_info in mvneta driver.

Some more context here would really help. I had to jump into the mvneta
driver to see what is happening.

So as I read this we have a loop processing the descriptors in
mvneta_rx_swbm():

mvneta_rx_swbm()
	while (rx_proc < budget && rx_proc < rx_todo) {
		if (rx_status & MVNETA_RXD_FIRST_DESC)
			...
		else {
			mvneta_swbm_add_rx_fragment()
		}
		..
		if (!(rx_status & MVNETA_RXD_LAST_DESC))
			continue;
		..
		if (xdp_prog)
			mvneta_run_xdp(...)
	}

roughly looking like the above.

First question: do you ever hit !MVNETA_RXD_LAST_DESC today? I assume
this is avoided by hardware setup when XDP is enabled, otherwise
_run_xdp() would be broken, correct?

Next question: given the last-descriptor bit logic, what is the
condition for hitting the code added in this patch? Wouldn't we need
more than one descriptor, and then we would skip the xdp_run... sorry,
you lost me here, and it is probably easier for you to give the flow
than for me to spend an hour trying to track it down.

But, in theory, as you handle the hardware descriptors you can build up
a set of pages and use them to create a single skb rather than an skb
per descriptor. However, don't we already know whether pfmemalloc
applies while we are building the frag list? Can't we just set it there
(rough sketch at the end of this mail) instead of this for loop in
xdp_update_skb_shared_info():

> +	for (i = 0; i < nr_frags; i++) {
> +		struct page *page = skb_frag_page(&sinfo->frags[i]);
> +
> +		page = compound_head(page);
> +		if (page_is_pfmemalloc(page)) {
> +			skb->pfmemalloc = true;
> +			break;
> +		}
> +	}
> +}

...

> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index 361bc4fbe20b..abf2e50880e0 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -2294,18 +2294,29 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
>  	rx_desc->buf_phys_addr = 0;
>
>  	if (data_len > 0 && xdp_sinfo->nr_frags < MAX_SKB_FRAGS) {
> -		skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags++];
> +		skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags];
>
>  		skb_frag_off_set(frag, pp->rx_offset_correction);
>  		skb_frag_size_set(frag, data_len);
>  		__skb_frag_set_page(frag, page);
> +		/* We don't need to reset pp_recycle here. It's already set, so
> +		 * just mark fragments for recycling.
> +		 */
> +		page_pool_store_mem_info(page, rxq->page_pool);
> +
> +		/* first fragment */
> +		if (!xdp_sinfo->nr_frags)
> +			xdp_sinfo->gso_type = *size;

It would be nice to also change 'int size' -> 'unsigned int size' so the
types match. Presumably you really can't have a negative size. Also, how
about giving gso_type a better name? xdp_sinfo->size maybe?

> +		xdp_sinfo->nr_frags++;
>
>  		/* last fragment */
>  		if (len == *size) {
>  			struct skb_shared_info *sinfo;
>
>  			sinfo = xdp_get_shared_info_from_buff(xdp);
> +			sinfo->xdp_frags_tsize = xdp_sinfo->nr_frags * PAGE_SIZE;
>  			sinfo->nr_frags = xdp_sinfo->nr_frags;
> +			sinfo->gso_type = xdp_sinfo->gso_type;
>  			memcpy(sinfo->frags, xdp_sinfo->frags,
>  			       sinfo->nr_frags * sizeof(skb_frag_t));
>  		}

Thanks,
John
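
P.S. For concreteness, here is roughly what I had in mind for the
pfmemalloc bit. This is only a sketch: 'xdp_pfmemalloc' below is a
hypothetical flag (wherever it ends up living), not an existing field,
and I'm only showing the lines that would change:

	/* mvneta_swbm_add_rx_fragment(): note pfmemalloc while we still
	 * have the page in hand, instead of re-walking the frags array
	 * later when the skb is built.
	 */
	if (data_len > 0 && xdp_sinfo->nr_frags < MAX_SKB_FRAGS) {
		skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags];

		skb_frag_off_set(frag, pp->rx_offset_correction);
		skb_frag_size_set(frag, data_len);
		__skb_frag_set_page(frag, page);

		if (page_is_pfmemalloc(compound_head(page)))
			xdp_sinfo->xdp_pfmemalloc = true;	/* hypothetical field */
		...
	}

	/* xdp_update_skb_shared_info(): propagate the bit, no per-frag loop */
	skb->pfmemalloc = sinfo->xdp_pfmemalloc;

That way xdp_update_skb_shared_info() doesn't have to touch every
fragment page again on the skb-build path.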
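
Similarly, spelling out the gso_type naming comment above, I'm thinking
of something like this (again purely illustrative, 'size' is the
hypothetical renamed field):

	/* first fragment */
	if (!xdp_sinfo->nr_frags)
		xdp_sinfo->size = *size;	/* instead of overloading gso_type */
	...
	/* last fragment */
	sinfo->size = xdp_sinfo->size;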