Re: [xdp-hints] Re: [PATCH bpf-next 05/11] veth: Support rx timestamp metadata for xdp

Stanislav Fomichev wrote:
> On Tue, Nov 15, 2022 at 2:46 PM Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:
> >
> > Stanislav Fomichev <sdf@xxxxxxxxxx> writes:
> >
> > > On Tue, Nov 15, 2022 at 8:17 AM Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:
> > >>
> > >> Stanislav Fomichev <sdf@xxxxxxxxxx> writes:
> > >>
> > >> > The goal is to enable end-to-end testing of the metadata
> > >> > for AF_XDP. The current rx_timestamp kfunc returns the current
> > >> > time, which should be enough to exercise this new functionality.
> > >> >
> > >> > Cc: John Fastabend <john.fastabend@xxxxxxxxx>
> > >> > Cc: David Ahern <dsahern@xxxxxxxxx>
> > >> > Cc: Martin KaFai Lau <martin.lau@xxxxxxxxx>
> > >> > Cc: Jakub Kicinski <kuba@xxxxxxxxxx>
> > >> > Cc: Willem de Bruijn <willemb@xxxxxxxxxx>
> > >> > Cc: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
> > >> > Cc: Anatoly Burakov <anatoly.burakov@xxxxxxxxx>
> > >> > Cc: Alexander Lobakin <alexandr.lobakin@xxxxxxxxx>
> > >> > Cc: Magnus Karlsson <magnus.karlsson@xxxxxxxxx>
> > >> > Cc: Maryam Tahhan <mtahhan@xxxxxxxxxx>
> > >> > Cc: xdp-hints@xxxxxxxxxxxxxxx
> > >> > Cc: netdev@xxxxxxxxxxxxxxx
> > >> > Signed-off-by: Stanislav Fomichev <sdf@xxxxxxxxxx>
> > >> > ---
> > >> >  drivers/net/veth.c | 14 ++++++++++++++
> > >> >  1 file changed, 14 insertions(+)
> > >> >
> > >> > diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> > >> > index 2a4592780141..c626580a2294 100644
> > >> > --- a/drivers/net/veth.c
> > >> > +++ b/drivers/net/veth.c
> > >> > @@ -25,6 +25,7 @@
> > >> >  #include <linux/filter.h>
> > >> >  #include <linux/ptr_ring.h>
> > >> >  #include <linux/bpf_trace.h>
> > >> > +#include <linux/bpf_patch.h>
> > >> >  #include <linux/net_tstamp.h>
> > >> >
> > >> >  #define DRV_NAME     "veth"
> > >> > @@ -1659,6 +1660,18 @@ static int veth_xdp(struct net_device *dev, struct netdev_bpf *xdp)
> > >> >       }
> > >> >  }
> > >> >
> > >> > +static void veth_unroll_kfunc(const struct bpf_prog *prog, u32 func_id,
> > >> > +                           struct bpf_patch *patch)
> > >> > +{
> > >> > +     if (func_id == xdp_metadata_kfunc_id(XDP_METADATA_KFUNC_RX_TIMESTAMP_SUPPORTED)) {
> > >> > +             /* return true; */
> > >> > +             bpf_patch_append(patch, BPF_MOV64_IMM(BPF_REG_0, 1));
> > >> > +     } else if (func_id == xdp_metadata_kfunc_id(XDP_METADATA_KFUNC_RX_TIMESTAMP)) {
> > >> > +             /* return ktime_get_mono_fast_ns(); */
> > >> > +             bpf_patch_append(patch, BPF_EMIT_CALL(ktime_get_mono_fast_ns));
> > >> > +     }
> > >> > +}
> > >>
> > >> So these look reasonable enough, but it would be good to see some
> > >> examples of kfunc implementations that don't just BPF_CALL to a kernel
> > >> function (with those helper wrappers we were discussing before).
> > >
> > > Let's maybe add them if/when needed as we add more metadata support?
> > > xdp_metadata_export_to_skb has an example, and rfc 1/2 have more
> > > examples, so it shouldn't be a problem to resurrect them at some
> > > point?
> >
> > Well, the reason I asked for them is that I think having to maintain the
> > BPF code generation in the drivers is probably the biggest drawback of
> > the kfunc approach, so it would be good to be relatively sure that we
> > can manage that complexity (via helpers) before we commit to this :)
> 
> Right, and I've added a bunch of examples in the v2 RFC so we can judge
> whether that complexity is manageable or not :-)
> Do you want me to add those wrappers you suggested back without any real
> users? Because I had to remove my veth tstamp accessors due to John's and
> Jesper's objections; I can maybe bring some of this back gated by a
> static_branch to avoid the fastpath cost?

I missed the context a bit: what did you mean by "would be good to see
some examples of kfunc implementations that don't just BPF_CALL to a
kernel function"? Do you mean emitting BPF code directly, without the
call?

Early on I thought we should just expose the rx_descriptor, which would
be roughly the same, right? (The difference being code embedded in the
driver vs. a lib.) The trouble I ran into is driver code using seqlock_t
and mutexes, which wasn't as straightforward as simply reading the value
from the descriptor. For example, in mlx getting the ts would be easy
from BPF with the mlx4_cqe struct exposed:
u64 mlx4_en_get_cqe_ts(struct mlx4_cqe *cqe)
{
        u64 hi, lo;
        struct mlx4_ts_cqe *ts_cqe = (struct mlx4_ts_cqe *)cqe;

        lo = (u64)be16_to_cpu(ts_cqe->timestamp_lo);
        hi = ((u64)be32_to_cpu(ts_cqe->timestamp_hi) + !lo) << 16;

        return hi | lo;
}
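
From the BPF program side that would be roughly the following (a
sketch: it assumes the struct definitions come from vmlinux.h and that
there were some way to reach the cqe from the xdp ctx, which doesn't
exist today):

#include "vmlinux.h"
#include <bpf/bpf_endian.h>

/* same math as mlx4_en_get_cqe_ts() above, using BPF endian helpers */
static __u64 bpf_mlx4_cqe_ts(struct mlx4_ts_cqe *ts_cqe)
{
        __u64 hi, lo;

        lo = (__u64)bpf_ntohs(ts_cqe->timestamp_lo);
        hi = ((__u64)bpf_ntohl(ts_cqe->timestamp_hi) + !lo) << 16;

        return hi | lo;
}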

But converting that to nsec is a bit annoying:

void mlx4_en_fill_hwtstamps(struct mlx4_en_dev *mdev,
                            struct skb_shared_hwtstamps *hwts,
                            u64 timestamp)
{
        unsigned int seq;
        u64 nsec;

        do {
                seq = read_seqbegin(&mdev->clock_lock);
                nsec = timecounter_cyc2time(&mdev->clock, timestamp);
        } while (read_seqretry(&mdev->clock_lock, seq));

        memset(hwts, 0, sizeof(struct skb_shared_hwtstamps));
        hwts->hwtstamp = ns_to_ktime(nsec);
}

I think the nsec is what you really want.

With all the drivers doing slightly different ops, we would have to
create read_seqbegin, read_seqretry, mutex_lock, ... equivalents; to
cover at least the mlx and ice drivers it looks like we would need some
more BPF primitives/helpers. It also looks like some more work is needed
on the ice driver to get rx tstamps on all packets.

Anyway, this convinced me real devices will probably use BPF_CALL
and not BPF insns directly.
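
I.e. something where the unroll just emits a call into a driver helper
that does the seqlock dance in plain C. (Sketch only:
mlx4_en_xdp_rx_timestamp() and mlx4_unroll_kfunc() are made up, and I'm
glossing over how the mdev/cqe arguments would get materialized for the
call.)

static u64 mlx4_en_xdp_rx_timestamp(struct mlx4_en_dev *mdev,
                                    struct mlx4_cqe *cqe)
{
        u64 cycles = mlx4_en_get_cqe_ts(cqe);
        unsigned int seq;
        u64 nsec;

        /* same conversion as mlx4_en_fill_hwtstamps() above */
        do {
                seq = read_seqbegin(&mdev->clock_lock);
                nsec = timecounter_cyc2time(&mdev->clock, cycles);
        } while (read_seqretry(&mdev->clock_lock, seq));

        return nsec;
}

static void mlx4_unroll_kfunc(const struct bpf_prog *prog, u32 func_id,
                              struct bpf_patch *patch)
{
        if (func_id == xdp_metadata_kfunc_id(XDP_METADATA_KFUNC_RX_TIMESTAMP))
                bpf_patch_append(patch, BPF_EMIT_CALL(mlx4_en_xdp_rx_timestamp));
}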


