Re: [PATCH v5 bpf-next 10/24] xsk: add new netlink attribute dedicated for ZC max frags

On Thu, Jul 13, 2023 at 11:59 AM Maciej Fijalkowski
<maciej.fijalkowski@xxxxxxxxx> wrote:
>
> On Mon, Jul 10, 2023 at 06:09:28PM -0700, Alexei Starovoitov wrote:
> > On Thu, Jul 6, 2023 at 1:47 PM Maciej Fijalkowski
> > <maciej.fijalkowski@xxxxxxxxx> wrote:
> > >
> > > Introduce a new netlink attribute, NETDEV_A_DEV_XDP_ZC_MAX_SEGS, that will
> > > carry the maximum number of fragments that the underlying ZC driver is
> > > able to handle on the TX side. It is included in the netlink response only
> > > when the driver supports ZC. Any value higher than 1 implies multi-buffer
> > > ZC support on the underlying device.
> > >
> > > Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@xxxxxxxxx>
> >
> > I suspect something in this patch makes XDP bonding test fail.
> > See BPF CI.
> >
> > I can reproduce the failure locally as well.
> > test_progs -t bond
> > works without the series and fails with them.
>
> Hi Alexei,
>
> this fails on the second bpf_xdp_query() call due to non-zero contents at the
> end of the bpf_xdp_query_opts struct - currently it looks as follows:
>
> $ pahole -C bpf_xdp_query_opts libbpf.so
> struct bpf_xdp_query_opts {
>         size_t                     sz;                   /*     0     8 */
>         __u32                      prog_id;              /*     8     4 */
>         __u32                      drv_prog_id;          /*    12     4 */
>         __u32                      hw_prog_id;           /*    16     4 */
>         __u32                      skb_prog_id;          /*    20     4 */
>         __u8                       attach_mode;          /*    24     1 */
>
>         /* XXX 7 bytes hole, try to pack */
>
>         __u64                      feature_flags;        /*    32     8 */
>         __u32                      xdp_zc_max_segs;      /*    40     4 */
>
>         /* size: 48, cachelines: 1, members: 8 */
>         /* sum members: 37, holes: 1, sum holes: 7 */
>         /* padding: 4 */
>         /* last cacheline: 48 bytes */
> };
>
> The fix is either to move xdp_zc_max_segs up into the existing hole or to
> zero out the struct before the bpf_xdp_query() calls, like:
>
>         memset(&query_opts, 0, sizeof(struct bpf_xdp_query_opts));
>         query_opts.sz = sizeof(struct bpf_xdp_query_opts);

Right. That would be good to have, to clear the hole,
but it's probably unrelated.
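
As an aside, libbpf's LIBBPF_OPTS() macro declares a zero-initialized
opts struct with .sz already set, which sidesteps this class of problem.
A minimal sketch (assuming the test's ifindex variable; the actual
selftest code may differ):

        LIBBPF_OPTS(bpf_xdp_query_opts, query_opts);
        /* roughly equivalent to:
         *   struct bpf_xdp_query_opts query_opts;
         *   memset(&query_opts, 0, sizeof(query_opts));
         *   query_opts.sz = sizeof(query_opts);
         * so holes and trailing padding start out zeroed
         */
        err = bpf_xdp_query(ifindex, XDP_FLAGS_DRV_MODE, &query_opts);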

> I am kinda confused as this is happening due to two things. First off, the
> bonding driver sets its xdp_features to NETDEV_XDP_ACT_MASK, which implies
> the ZC feature is enabled and in turn makes xdp_zc_max_segs be included in
> the response (its value is 1, the default).
>
> Then, offsetofend(struct type, type##__last_field), which is used as one of
> the libbpf_validate_opts() args, gives me 40, but bpf_xdp_query_opts::sz
> stores 48, so in the end we walk the last 8 bytes in libbpf_is_mem_zeroed()
> and hit the '1' from xdp_zc_max_segs.
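>
> For context, the check boils down to something like this (paraphrasing
> tools/lib/bpf/libbpf_internal.h, so the exact code may differ):
>
>         /* OPTS_VALID(opts, bpf_xdp_query_opts) effectively does:
>          *   libbpf_validate_opts((const char *)opts,
>          *                        offsetofend(struct bpf_xdp_query_opts,
>          *                                    bpf_xdp_query_opts__last_field),
>          *                        opts->sz, "bpf_xdp_query_opts");
>          * i.e. every byte between the end of __last_field (40) and
>          * opts->sz (48) must be zero:
>          */
>         static bool libbpf_is_mem_zeroed(const char *p, ssize_t len)
>         {
>                 while (len > 0) {
>                         if (*p)
>                                 return false;
>                         p++;
>                         len--;
>                 }
>                 return true;
>         }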

Because this patch didn't update bpf_xdp_query_opts__last_field.

It added a new field, but didn't update the macro.
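
The libbpf side of the fix should be as simple as this (a sketch against
tools/lib/bpf/libbpf.h; comments and layout in the tree may differ):

        struct bpf_xdp_query_opts {
                size_t sz;
                __u32 prog_id;          /* output */
                __u32 drv_prog_id;      /* output */
                __u32 hw_prog_id;       /* output */
                __u32 skb_prog_id;      /* output */
                __u8 attach_mode;       /* output */
                __u64 feature_flags;    /* output */
                __u32 xdp_zc_max_segs;  /* output, new in this patch */
                size_t :0;
        };
        #define bpf_xdp_query_opts__last_field xdp_zc_max_segs /* was feature_flags */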

> So, (silly) questions:
> - why bonding driver defaults to all features enabled?

doesn't really matter in this context.

> - why __last_field does not recognize xdp_zc_max_segs at the end?

because the patch didn't update it :)

> Besides, I think I'll move xdp_zc_max_segs up into the hole. That fixes
> the bonding test for me.

No. Keep it at the end. Opts structs are extended by appending new fields
only; combined with the sz check, that is what keeps them backwards
compatible.
