Re: [PATCH v9 bpf-next 02/14] xdp: introduce flags field in xdp_buff/xdp_frame

> On Mon, Jul 5, 2021 at 8:52 AM Lorenzo Bianconi
> <lorenzo.bianconi@xxxxxxxxxx> wrote:
> >
> > > On Tue, Jun 29, 2021 at 5:43 AM Lorenzo Bianconi
> > > <lorenzo.bianconi@xxxxxxxxxx> wrote:

[...]
> 
> Hi Lorenzo,
> 
> What about doing something like breaking up the type value in
> xdp_mem_info? The fact is that having it as an enum doesn't get us much,
> since we have a 32b type field but are only storing 4 possible values
> there currently.
> 
> The way I see it, scatter-gather is just another memory model
> attribute rather than something entirely new. It makes as much
> sense to have a bit there for MEM_TYPE_PAGE_SG as it does for
> MEM_TYPE_PAGE_SHARED. I would consider splitting the type field
> into two 16b fields. For example, you might have one field that
> describes the source pool, which is currently either an allocated page
> (ORDER0, SHARED), page_pool (PAGE_POOL), or XSK pool (XSK_BUFF_POOL),
> and then a second field carrying two flags: shared and/or
> scatter-gather.
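
For illustration, the split suggested above could look roughly like the
following (the field and flag names here are made up, not taken from any
posted patch):

struct xdp_mem_info {
	u16 type;	/* source pool: page allocator, page_pool or XSK pool */
	u16 flags;	/* e.g. MEM_FLAG_SHARED, MEM_FLAG_SG */
	u32 id;
};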

Hi Alex,

I am fine with reducing the xdp_mem_info size by defining the type field as u16
instead of u32, but I think the multi-buffer (mb) bit is a per-xdp_buff/xdp_frame
property, since at runtime we can receive a tiny single-page xdp_buff/xdp_frame
as well as a "jumbo" xdp_buff/xdp_frame composed of multiple pages. According to
the documentation in include/net/xdp.h, xdp_rxq_info (where xdp_mem_info is
contained for xdp_buff) is "associated with the driver level RX-ring queues and
it is information that is specific to how the driver have configured a given
RX-ring queue", so it seems a bit counterintuitive to add this info there.
Moreover, with the approach you proposed we still have the "issue" for devmap in
dev_map_bpf_prog_run() when we perform XDP_REDIRECT, and lastly we can reuse
this new flags field for XDP hw-hints support.
What about reducing xdp_mem_info and adding the flags field to xdp_buff/xdp_frame,
so that the overall xdp_buff/xdp_frame size does not increase? Am I missing
something?
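
A rough sketch of the layout being discussed, to make it concrete (the flag
name and helpers below are illustrative only, not necessarily what the posted
patch uses):

struct xdp_buff {
	void *data;
	void *data_end;
	void *data_meta;
	void *data_hard_start;
	struct xdp_rxq_info *rxq;
	struct xdp_txq_info *txq;
	u32 frame_sz;
	u32 flags;			/* new: per-buff flags */
};

/* hypothetical flag name, for illustration */
#define XDP_FLAGS_MULTI_BUFF	BIT(0)

static __always_inline bool xdp_buff_is_mb(struct xdp_buff *xdp)
{
	return !!(xdp->flags & XDP_FLAGS_MULTI_BUFF);
}

static __always_inline void xdp_buff_set_mb(struct xdp_buff *xdp)
{
	xdp->flags |= XDP_FLAGS_MULTI_BUFF;
}

Since xdp_frame is built from xdp_buff, the same bit can be carried over on
conversion, keeping the property per-frame rather than per-RX-queue.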

Regards,
Lorenzo

> 
> Also, looking over the code I don't see any reason why the current
> ORDER0/SHARED types couldn't be merged, as the free paths are essentially
> identical: the MEM_TYPE_PAGE_SHARED path would work perfectly well for
> freeing MEM_TYPE_PAGE_ORDER0 pages.
> 
> Thanks,
> 
> - Alex
> 
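
For reference, the two free paths in question look roughly like this today
(paraphrased from __xdp_return() in net/core/xdp.c, not a verbatim copy);
page_frag_free() just drops the final page reference, so it also copes with
plain order-0 pages, which is what makes the merge suggested above possible:

	case MEM_TYPE_PAGE_SHARED:
		page_frag_free(data);
		break;
	case MEM_TYPE_PAGE_ORDER0:
		page = virt_to_page(data);	/* assumes an order-0 page */
		put_page(page);
		break;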
