Re: [PATCH v2 net-next 1/2] net: veth: add page_pool for page recycling

> On 2023/4/24 21:10, Maciej Fijalkowski wrote:
> >>> There was a discussion in the past to reduce XDP_PACKET_HEADROOM to 192B but
> >>> this is not merged yet and it is not related to this series. We can address
> >>> your comments in a follow-up patch when the XDP_PACKET_HEADROOM series is merged.
> > 
> > Intel drivers still work just fine with 192B of headroom and split the page,
> > but it becomes problematic for BIG TCP, where MAX_SKB_FRAGS from shinfo needs
> 
> I am not sure why we are not enabling skb_shinfo(skb)->frag_list to support
> BIG TCP instead of increasing MAX_SKB_FRAGS; perhaps there was some discussion
> about this in the past that I am not aware of?
> 
> > to be increased. So it's the tailroom that becomes the bottleneck, not the
> > headroom. I believe at some point we will convert our drivers to page_pool
> > with a full 4k page dedicated to a single frame.
> 
> Can we use header splitting to ensure there is enough tailroom for
> napi_build_skb() or an xdp_frame with shinfo?
> 
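
(Back-of-envelope on the tailroom cost mentioned above, assuming 64-bit and
a 16B skb_frag_t; exact sizes are config dependent:

  shinfo size ~= 48B of fixed fields + MAX_SKB_FRAGS * 16B
              ~= 320B with today's MAX_SKB_FRAGS = 17
              ~= 768B if it were raised to 45 for BIG TCP

i.e. roughly 448B of extra tailroom per buffer, which is what makes a 2K
half-page split tight.)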

Since veth_convert_skb_to_xdp_buff() runs in veth_poll(), I think we can use
napi_build_skb(). I tested it and we get an improvement (9.65 Gbps vs 9.2 Gbps
for 1500B frames).
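
For reference, here is roughly what I tested (a sketch, not the exact diff;
names like rq->page_pool and VETH_XDP_HEADROOM follow drivers/net/veth.c, and
error paths are trimmed). The idea is to copy the frame into a page_pool page
and, since veth_poll() guarantees NAPI context, build the new skb with
napi_build_skb() so the skb itself comes from the per-CPU NAPI cache:

	/* Sketch only: rebuild the skb around a page_pool page from
	 * NAPI context. Error handling trimmed for brevity.
	 */
	u32 size, len = skb->len;
	struct sk_buff *nskb;
	struct page *page;
	void *va;

	size = SKB_DATA_ALIGN(VETH_XDP_HEADROOM + len) +
	       SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
	if (size > PAGE_SIZE)
		goto drop;

	page = page_pool_dev_alloc_pages(rq->page_pool);
	if (!page)
		goto drop;

	va = page_address(page);
	/* copy the payload past the XDP headroom */
	skb_copy_bits(skb, 0, va + VETH_XDP_HEADROOM, len);

	/* NAPI context is guaranteed by the veth_poll() caller */
	nskb = napi_build_skb(va, size);
	if (!nskb) {
		page_pool_put_full_page(rq->page_pool, page, true);
		goto drop;
	}

	skb_reserve(nskb, VETH_XDP_HEADROOM);
	skb_put(nskb, len);
	skb_copy_header(nskb, skb);
	skb_mark_for_recycle(nskb);

	napi_consume_skb(skb, 1);	/* release the original skb */
	skb = nskb;

skb_mark_for_recycle() is what lets the skb free path return the page to
rq->page_pool instead of handing it back to the page allocator.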

Regards,
Lorenzo


