Re: [PATCH bpf-next v3 3/3] selftests/bpf: Add tests for arena spin lock

On Wed, 5 Mar 2025 at 03:14, Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx> wrote:
>
> On Wed, 5 Mar 2025 at 03:04, Alexei Starovoitov
> <alexei.starovoitov@xxxxxxxxx> wrote:
> >
> > On Tue, Mar 4, 2025 at 5:18 PM Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx> wrote:
> > >
> > > Add some basic selftests for qspinlock built over BPF arena using
> > > cond_break_label macro.
> > >
> > > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx>
> > > ---
> > >  .../bpf/prog_tests/arena_spin_lock.c          | 102 ++++++++++++++++++
> > >  .../selftests/bpf/progs/arena_spin_lock.c     |  51 +++++++++
> > >  2 files changed, 153 insertions(+)
> > >  create mode 100644 tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/arena_spin_lock.c
> > >
> > > diff --git a/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c b/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c
> > > new file mode 100644
> > > index 000000000000..2cc078ed1ddb
> > > --- /dev/null
> > > +++ b/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c
> > > @@ -0,0 +1,102 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
> > > +#include <test_progs.h>
> > > +#include <network_helpers.h>
> > > +#include <sys/sysinfo.h>
> > > +
> > > +struct qspinlock { int val; };
> > > +typedef struct qspinlock arena_spinlock_t;
> > > +
> > > +struct arena_qnode {
> > > +       unsigned long next;
> > > +       int count;
> > > +       int locked;
> > > +};
> > > +
> > > +#include "arena_spin_lock.skel.h"
> > > +
> > > +static long cpu;
> > > +int *counter;
> > > +
> > > +static void *spin_lock_thread(void *arg)
> > > +{
> > > +       int err, prog_fd = *(u32 *)arg;
> > > +       LIBBPF_OPTS(bpf_test_run_opts, topts,
> > > +               .data_in = &pkt_v4,
> > > +               .data_size_in = sizeof(pkt_v4),
> > > +               .repeat = 1,
> > > +       );
> >
> > Why bother with 'tc' prog type?
> > Pick syscall type, and above will be shorter:
> > LIBBPF_OPTS(bpf_test_run_opts, opts);
> >
>
> Ack.

Sadly, syscall prog_test_run doesn't support the 'repeat' field, so
we'll have to stick with tc.
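
For reference, a minimal sketch of what the shorter form would look
like and why it can't be used here (the prog fd name is assumed, not
from the patch):

	LIBBPF_OPTS(bpf_test_run_opts, opts, .repeat = 1000);

	/* For BPF_PROG_TYPE_SYSCALL the kernel's test_run handler
	 * rejects a nonzero repeat with -EINVAL, so repeating the
	 * critical section in the kernel needs a prog type like tc
	 * that supports it. */
	err = bpf_prog_test_run_opts(syscall_prog_fd, &opts);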

>
> > > +       cpu_set_t cpuset;
> > > +
> > > +       CPU_ZERO(&cpuset);
> > > +       CPU_SET(__sync_fetch_and_add(&cpu, 1), &cpuset);
> > > +       ASSERT_OK(pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset), "cpu affinity");
> > > +
> > > +       while (*READ_ONCE(counter) <= 1000) {
> >
> > READ_ONCE(*counter) ?
> >
> > but why add this user->kernel switch overhead.
> > Use .repeat = 1000
> > one bpf_prog_test_run_opts()
> > and check at the end that *counter == 1000 ?
>
> Ok.
>

One of the reasons to do this was to give the other threads time to
catch up with the first thread as it starts storming through the
counter increments. With a short critical section there is no
contention at all, and most code paths don't get triggered. But I'll
use a pthread_barrier instead.
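
Something like this (a sketch reusing the names from the patch above,
not the final code):

	static pthread_barrier_t barrier;

	static void *spin_lock_thread(void *arg)
	{
		int err, prog_fd = *(u32 *)arg;
		LIBBPF_OPTS(bpf_test_run_opts, topts,
			.data_in = &pkt_v4,
			.data_size_in = sizeof(pkt_v4),
			.repeat = 1000,
		);

		/* Wait until all threads are spawned and pinned, so
		 * they enter the critical section together and
		 * actually contend on the lock instead of the first
		 * thread racing through its increments alone. */
		pthread_barrier_wait(&barrier);
		err = bpf_prog_test_run_opts(prog_fd, &topts);
		ASSERT_OK(err, "test_run err");
		return NULL;
	}

with pthread_barrier_init(&barrier, NULL, nthreads) in test setup, and
ASSERT_EQ(READ_ONCE(*counter), nthreads * 1000, "counter") after
joining all threads, per the READ_ONCE(*counter) fix above.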




