Re: [PATCH v9 bpf-next 02/14] xdp: introduce flags field in xdp_buff/xdp_frame

> On Tue, Jul 6, 2021 at 4:53 AM Lorenzo Bianconi
> <lorenzo.bianconi@xxxxxxxxxx> wrote:
> >
> > > On Mon, Jul 5, 2021 at 8:52 AM Lorenzo Bianconi
> > > <lorenzo.bianconi@xxxxxxxxxx> wrote:
> > > >
> > > > > On Tue, Jun 29, 2021 at 5:43 AM Lorenzo Bianconi
> > > > > <lorenzo.bianconi@xxxxxxxxxx> wrote:
> >
> > [...]
> > >
> > > Hi Lorenzo,
> > >
> > > What about doing something like breaking up the type value in
> > > xdp_mem_info? The fact is, having it as an enum doesn't get us much,
> > > since we have a 32b type field but are currently only storing 4
> > > possible values there.
> > >
> > > The way I see it, scatter-gather is just another memory model
> > > attribute rather than something entirely new. It makes as much sense
> > > to have a bit there for MEM_TYPE_PAGE_SG as it does for
> > > MEM_TYPE_PAGE_SHARED. I would consider splitting the type field into
> > > two 16b fields. For example, you might have one field that describes
> > > the source pool, which is currently either an allocated page (ORDER0,
> > > SHARED), a page_pool (PAGE_POOL), or an XSK pool (XSK_BUFF_POOL), and
> > > then two flags for type, the options being shared and/or
> > > scatter-gather.
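
If I follow, the layout you have in mind would be roughly the following
(just a sketch; the flag names are illustrative, not taken from the
series):

struct xdp_mem_info {
	u16 type;	/* source pool: MEM_TYPE_PAGE_ORDER0,
			 * MEM_TYPE_PAGE_POOL, MEM_TYPE_XSK_BUFF_POOL, ...
			 */
	u16 flags;	/* XDP_MEM_F_SHARED | XDP_MEM_F_SG */
	u32 id;
};

i.e. sizeof(struct xdp_mem_info) stays unchanged while 16 bits are freed
up for memory model attributes such as shared and scatter-gather.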
> >
> > Hi Alex,
> >
> > I am fine with reducing the xdp_mem_info size by defining the type
> > field as u16 instead of u32, but I think mb is a per-xdp_buff/xdp_frame
> > property, since at runtime we can receive a tiny single-page
> > xdp_buff/xdp_frame and a "jumbo" xdp_buff/xdp_frame composed of
> > multiple pages. According to the documentation in include/net/xdp.h,
> > xdp_rxq_info (which contains the xdp_mem_info for an xdp_buff) is
> > "associated with the driver level RX-ring queues and it is information
> > that is specific to how the driver have configured a given RX-ring
> > queue", so I think it is a little counterintuitive to add this info
> > there.
> 
> It isn't really all that counterintuitive. However, it does put the
> onus on the driver to be consistent about things. So even a
> single-buffer xdp_buff would technically have to be a scatter-gather
> buff; it would simply have no fragments in it. The requirement would
> then be to initialize the frags and data_len fields to 0 for all
> xdp_buff structures.

nr_frags and data_len currently live in the skb_shared_info of the
xdp_buff, so I guess initializing them to 0 would trigger a cache miss
(in fact, we introduced the mb bit precisely to avoid this
initialization and the penalty it adds to the legacy single-buffer
use case). Do you mean moving these fields into the xdp_buff/xdp_frame
structure itself?
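
To illustrate why the bit matters: with it, the single-buffer fast path
never has to dereference skb_shared_info at all. A minimal sketch,
assuming the flags field from this patch, an illustrative
XDP_FLAGS_MULTI_BUFF bit, the series' data_len field in skb_shared_info,
and a helper like xdp_get_shared_info_from_buff():

static inline bool xdp_buff_is_mb(struct xdp_buff *xdp)
{
	return !!(xdp->flags & XDP_FLAGS_MULTI_BUFF);
}

static unsigned int xdp_get_buff_len(struct xdp_buff *xdp)
{
	unsigned int len = xdp->data_end - xdp->data;
	struct skb_shared_info *sinfo;

	/* legacy single-buffer case: skb_shared_info is never
	 * touched, so no extra cacheline is pulled in
	 */
	if (likely(!xdp_buff_is_mb(xdp)))
		return len;

	sinfo = xdp_get_shared_info_from_buff(xdp);
	return len + sinfo->data_len;
}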

> 
> > Moreover, we have the "issue" for devmap in dev_map_bpf_prog_run()
> > when we perform XDP_REDIRECT with the approach you proposed, and
> > lastly we can reuse this new flags field for XDP hw-hints support.
> > What about reducing xdp_mem_info and adding the flags field to
> > xdp_buff/xdp_frame, in order to avoid increasing the
> > xdp_buff/xdp_frame size? Am I missing something?
> 
> The problem is there isn't a mem_info field in the xdp_buff. It is in
> the Rx queue info structure.

yes, this is what I meant :)
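
i.e. something along these lines (just a sketch; the exact placement and
size of the new field are of course up for discussion):

struct xdp_buff {
	void *data;
	void *data_end;
	void *data_meta;
	void *data_hard_start;
	struct xdp_rxq_info *rxq;
	struct xdp_txq_info *txq;
	u32 frame_sz;
	u32 flags;	/* bit0: mb; remaining bits reserved, e.g. for
			 * hw-hints
			 */
};

The mem info stays per-RX-queue in xdp_rxq_info (and embedded in
xdp_frame), while the mb bit travels with each xdp_buff/xdp_frame.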

Regards,
Lorenzo

> 
> Thanks,
> 
> - Alex
> 
