[...]
> > I did some experiments using page_frag_cache/page_frag_alloc() instead of
> > page_pools in a simple environment I used to test XDP for the veth driver.
> > In particular, I allocate a new buffer in veth_convert_skb_to_xdp_buff() from
> > the page_frag_cache in order to copy the full skb into the new one, actually
> > "linearizing" the packet (since we know the original skb length).
> > I ran an iperf TCP connection over a veth pair where the
> > remote device runs the xdp_rxq_info sample (available in the kernel source
> > tree, with action XDP_PASS):
> >
> > TCP client -- v0 === v1 (xdp_rxq_info) -- TCP server
> >
> > net-next (page_pool):
> > - MTU 1500B: ~ 7.5 Gbps
> > - MTU 8000B: ~ 15.3 Gbps
> >
> > net-next + page_frag_alloc:
> > - MTU 1500B: ~ 8.4 Gbps
> > - MTU 8000B: ~ 14.7 Gbps
> >
> > It seems there is no clear "win" here (at least in this environment and
> > with this simple approach). Moreover:
>
> For the 1500B packets it is a win, but for 8000B it looks like there
> is a regression. Any idea what is causing it?

Nope, I have not looked into it yet.

> > - can the linearization introduce any issue whenever we perform XDP_REDIRECT
> >   into a destination device?
>
> It shouldn't. If it does it would probably point to an issue w/ the
> destination driver rather than an issue with the code doing this.

ack, fine.

> > - can the page_frag_cache introduce more memory fragmentation? (IIRC we were
> >   experiencing this issue in mt76 before switching to page_pools)
>
> I think it largely depends on where the packets are ending up. I know
> this is the approach we are using for sockets, see
> skb_page_frag_refill(). If nothing else, if you took a similar
> approach to it you might be able to bypass the need for the
> page_frag_cache itself, although you would likely still end up
> allocating similar structures.

ack.

Regards,
Lorenzo
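
P.S. for reference, the experiment above does roughly the following in
veth_convert_skb_to_xdp_buff(). This is just a sketch of the idea, not the
actual diff I tested; the helper name, the per-rq pf_cache field and the
sizing are illustrative:

static struct sk_buff *veth_linearize_skb(struct veth_rq *rq,
					  struct sk_buff *skb)
{
	u32 headroom = XDP_PACKET_HEADROOM;
	u32 tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
	u32 size = SKB_DATA_ALIGN(headroom + skb->len) + tailroom;
	struct sk_buff *nskb;
	void *va;

	/* rq->pf_cache is an illustrative per-rq struct page_frag_cache */
	va = page_frag_alloc(&rq->pf_cache, size, GFP_ATOMIC | __GFP_NOWARN);
	if (!va)
		return NULL;

	/* copy the whole (possibly non-linear) skb into the new buffer,
	 * i.e. "linearize" it since we already know skb->len
	 */
	if (skb_copy_bits(skb, 0, va + headroom, skb->len)) {
		page_frag_free(va);
		return NULL;
	}

	nskb = build_skb(va, size);
	if (!nskb) {
		page_frag_free(va);
		return NULL;
	}

	skb_reserve(nskb, headroom);
	skb_put(nskb, skb->len);
	skb_copy_header(nskb, skb);
	consume_skb(skb);

	/* the caller can then build the xdp_buff on the linear head, e.g.:
	 *	xdp_init_buff(&xdp, size, &rq->xdp_rxq);
	 *	xdp_prepare_buff(&xdp, nskb->head, headroom, nskb->len, true);
	 */
	return nskb;
}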