Re: [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats

On Wed, Jul 05, 2023 at 11:03:25AM +0800, Hou Tao wrote:
> Hi,
> 
> On 7/4/2023 11:02 PM, Anton Protopopov wrote:
> > On Tue, Jul 04, 2023 at 10:41:10PM +0800, Hou Tao wrote:
> >> Hi,
> >>
> >> On 6/30/2023 4:25 PM, Anton Protopopov wrote:
> >>> Add a new map test, map_percpu_stats.c, which checks the correctness of
> >>> a map's percpu element counters.  For each supported map type, the test
> >>> upserts a number of elements, checks the correctness of the counters,
> >>> then deletes all the elements and checks again that the counter sum
> >>> drops to zero.
> >>>
> >>> The following map types are tested:
> >>>
> >>>     * BPF_MAP_TYPE_HASH, BPF_F_NO_PREALLOC
> >>>     * BPF_MAP_TYPE_PERCPU_HASH, BPF_F_NO_PREALLOC
> >>>     * BPF_MAP_TYPE_HASH
> >>>     * BPF_MAP_TYPE_PERCPU_HASH
> >>>     * BPF_MAP_TYPE_LRU_HASH
> >>>     * BPF_MAP_TYPE_LRU_PERCPU_HASH
> >> A test for BPF_MAP_TYPE_HASH_OF_MAPS is also needed.
> We could also exercise the LRU map tests with BPF_F_NO_COMMON_LRU.

Thanks, added.

> >
> SNIP
> >>> diff --git a/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c b/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
> >>> new file mode 100644
> >>> index 000000000000..5b45af230368
> >>> --- /dev/null
> >>> +++ b/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
> >>> @@ -0,0 +1,336 @@
> >>> +// SPDX-License-Identifier: GPL-2.0
> >>> +/* Copyright (c) 2023 Isovalent */
> >>> +
> >>> +#include <errno.h>
> >>> +#include <unistd.h>
> >>> +#include <pthread.h>
> >>> +
> >>> +#include <bpf/bpf.h>
> >>> +#include <bpf/libbpf.h>
> >>> +
> >>> +#include <bpf_util.h>
> >>> +#include <test_maps.h>
> >>> +
> >>> +#include "map_percpu_stats.skel.h"
> >>> +
> >>> +#define MAX_ENTRIES 16384
> >>> +#define N_THREADS 37
> >> Why are 37 threads needed here? Would a smaller number of threads work as well?
> > This was used to evict more elements from LRU maps when they are full.
> 
> I see. But in my understanding, for the global LRU list, eviction (the
> invocation of htab_lru_map_delete_node) becomes possible only when the
> number of free elements is less than LOCAL_FREE_TARGET (128) *
> nr_running_cpus. Now the number of free elements is 1000 as defined in
> __test(), the number of vCPUs is 8 in my local VM setup (BPF CI also
> uses 8 vCPUs), and it is hard to trigger the eviction because 8 * 128
> is roughly equal to 1000. So I suggest decreasing the number of free
> elements to 512 and the number of threads to 8, or adjusting the number
> of running threads and free elements according to the number of online
> CPUs.

Yes, makes sense. I've changed the test to use 8 threads and an offset of 512.



