Re: [PATCH v2 net-next 1/2] net: veth: add page_pool for page recycling

On 2023/4/23 22:20, Lorenzo Bianconi wrote:
>> On 2023/4/23 2:54, Lorenzo Bianconi wrote:
>>>  struct veth_priv {
>>> @@ -727,17 +729,20 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
>>>  			goto drop;
>>>  
>>>  		/* Allocate skb head */
>>> -		page = alloc_page(GFP_ATOMIC | __GFP_NOWARN);
>>> +		page = page_pool_dev_alloc_pages(rq->page_pool);
>>>  		if (!page)
>>>  			goto drop;
>>>  
>>>  		nskb = build_skb(page_address(page), PAGE_SIZE);
>>
>> If the page pool is used with PP_FLAG_PAGE_FRAG, maybe there is some additional
>> improvement for the MTU 1500B case, as it seems a 4K page is able to hold two skbs.
>> And we can reduce memory usage too, which is a significant saving if the page
>> size is 64K.
> 
> please correct me if I am wrong, but I think the 1500B MTU case does not fit in
> half a page since we need to take VETH_XDP_HEADROOM into account.
> In particular:
> 
> - VETH_BUF_SIZE = 2048
> - VETH_XDP_HEADROOM = 256 + 2 = 258

On some arches NET_IP_ALIGN is zero.

I suppose XDP_PACKET_HEADROOM is there for the xdp_frame and data_meta; it seems
struct xdp_frame is only 40 bytes on a 64-bit arch, and the max metalen is 32,
as xdp_metalen_invalid() suggests. Is there any other reason why we need
256 bytes here?
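
To make the accounting behind that question concrete, a rough sketch using the
numbers above (40-byte xdp_frame on 64-bit, 32-byte metalen limit from
xdp_metalen_invalid(), XDP_PACKET_HEADROOM = 256):

	sizeof(struct xdp_frame)    ~40 bytes
	max metalen                + 32 bytes
	                           ----------
	                            ~72 bytes actually accounted for, out of
	                             the 256-byte XDP_PACKET_HEADROOM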

> - max_headsize = SKB_WITH_OVERHEAD(VETH_BUF_SIZE - VETH_XDP_HEADROOM) = 1470
> 
> Even in this case we would need to consume a full page. In fact, performance
> is a little bit worse:
> 
> MTU 1500: TCP throughput ~ 8.3 Gbps
> 
> Do you agree or am I missing something?
> 
> Regards,
> Lorenzo
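
For reference, a minimal sketch of how the PP_FLAG_PAGE_FRAG variant discussed
above could look in veth_convert_skb_to_xdp_buff(); the frag size and the pool
flags are assumptions for illustration, not part of the posted patch:

	/* Sketch only: assumes rq->page_pool was created with
	 * PP_FLAG_PAGE_FRAG set in page_pool_params.flags, so several
	 * buffers can share one page (mainly interesting with 64K pages).
	 */
	unsigned int offset;

	page = page_pool_dev_alloc_frag(rq->page_pool, &offset, VETH_BUF_SIZE);
	if (!page)
		goto drop;

	nskb = build_skb(page_address(page) + offset, VETH_BUF_SIZE);

Restating the space accounting from the thread (skb_shared_info assumed to be
~320 bytes after SKB_DATA_ALIGN() on 64-bit):

	VETH_XDP_HEADROOM                                   258
	1500B MTU head                                   + 1500
	SKB_DATA_ALIGN(sizeof(struct skb_shared_info))   + ~320
	                                                 -------
	                                                  ~2078 > VETH_BUF_SIZE (2048)

so two such buffers indeed do not fit in a 4K page, while on a 64K page the
frag API could still pack multiple buffers per page.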


