Re: [PATCH v4 bpf-next 5/6] selftests/bpf: test map percpu stats

Hi,

On 7/6/2023 12:01 AM, Anton Protopopov wrote:
> Add a new map test, map_percpu_stats.c, which checks the correctness of a
> map's percpu element counters. For the supported maps the test upserts a
> number of elements, checks the correctness of the counters, then deletes all
> the elements and checks again that the sum of the counters drops to zero.
>
> The following map types are tested:
>
>     * BPF_MAP_TYPE_HASH, BPF_F_NO_PREALLOC
>     * BPF_MAP_TYPE_PERCPU_HASH, BPF_F_NO_PREALLOC
>     * BPF_MAP_TYPE_HASH,
>     * BPF_MAP_TYPE_PERCPU_HASH,
>     * BPF_MAP_TYPE_LRU_HASH
>     * BPF_MAP_TYPE_LRU_PERCPU_HASH
>     * BPF_MAP_TYPE_LRU_HASH, BPF_F_NO_COMMON_LRU
>     * BPF_MAP_TYPE_LRU_PERCPU_HASH, BPF_F_NO_COMMON_LRU
>     * BPF_MAP_TYPE_HASH_OF_MAPS
>
> Signed-off-by: Anton Protopopov <aspsk@xxxxxxxxxxxxx>

Acked-by: Hou Tao <houtao1@xxxxxxxxxx>

With two nits below.
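
For readers without the full patch at hand (the test body is in the snipped
part below), the flow described in the commit message is roughly the
following; the helper names here are illustrative, not the ones used in the
patch:

	/* per tested map type, sketch only */
	int map_fd = create_map(type, flags);           /* bpf_map_create() */
	__u32 map_id = map_id_from_fd(map_fd);          /* bpf_map_get_info_by_fd() */

	upsert_elements(map_fd, nr_elems);              /* N x bpf_map_update_elem() */
	/* the iterator-reported count must match the real element count */
	assert(get_cur_elements(map_id) == map_count_elements(type, map_fd));

	delete_all_elements(map_fd);                    /* e.g. delete_and_lookup_batch() */
	assert(get_cur_elements(map_id) == 0);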
> +
> +static const char *map_type_to_s(__u32 type)
> +{
> +	switch (type) {
> +	case BPF_MAP_TYPE_HASH:
> +		return "HASH";
> +	case BPF_MAP_TYPE_PERCPU_HASH:
> +		return "PERCPU_HASH";
> +	case BPF_MAP_TYPE_LRU_HASH:
> +		return "LRU_HASH";
> +	case BPF_MAP_TYPE_LRU_PERCPU_HASH:
> +		return "LRU_PERCPU_HASH";
> +	default:
> +		return "<define-me>";
> +	}
Missing BPF_MAP_TYPE_HASH_OF_MAPS?
> +}
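
Something like the following would cover it, just mirroring the existing
entries:

	case BPF_MAP_TYPE_HASH_OF_MAPS:
		return "HASH_OF_MAPS";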
> +
> +static __u32 map_count_elements(__u32 type, int map_fd)
> +{
> +	__u32 key = -1;
> +	int n = 0;
> +
> +	while (!bpf_map_get_next_key(map_fd, &key, &key))
> +		n++;
> +	return n;
> +}
> +
> +#define BATCH	true
> +
> +static void delete_and_lookup_batch(int map_fd, void *keys, __u32 count)
> +{
> +	static __u8 values[(8 << 10) * MAX_ENTRIES];
> +	void *in_batch = NULL, *out_batch;
> +	__u32 save_count = count;
> +	int ret;
> +
> +	ret = bpf_map_lookup_and_delete_batch(map_fd,
> +					      &in_batch, &out_batch,
> +					      keys, values, &count,
> +					      NULL);
> +
> +	/*
> +	 * Despite what the uapi header says, lookup_and_delete_batch will
> +	 * return -ENOENT when we have successfully deleted all elements, so
> +	 * check this separately
> +	 */

This looks like a bug in __htab_map_lookup_and_delete_batch(). I can post a
patch to fix it if you don't plan to do that yourself.
> +	CHECK(ret < 0 && (errno != ENOENT || !count), "bpf_map_lookup_and_delete_batch",
> +		       "error: %s\n", strerror(errno));
> +
> +	CHECK(count != save_count,
> +			"bpf_map_lookup_and_delete_batch",
> +			"deleted not all elements: removed=%u expected=%u\n",
> +			count, save_count);
> +}
> +
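
For reference, this is also why the extra check above is needed: with the
current behaviour a drain loop has to treat -ENOENT as "end of map" rather
than a failure, because the final partial batch is still copied out. A
minimal sketch (BATCH_SIZE and the key/value types are illustrative):

	/* sketch: drain a __u32-key/__u64-value hash map in batches */
	static int drain_map(int map_fd)
	{
		__u32 keys[MAX_ENTRIES], batch, count, total = 0;
		__u64 values[MAX_ENTRIES];
		int err;

		do {
			count = BATCH_SIZE;
			err = bpf_map_lookup_and_delete_batch(map_fd,
							      total ? &batch : NULL,
							      &batch,
							      keys + total,
							      values + total,
							      &count, NULL);
			if (err && errno != ENOENT)
				return -errno;	/* a real failure */
			/* elements are copied out even when -ENOENT is returned */
			total += count;
		} while (!err);

		return total;
	}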
SNIP
> +static __u32 get_cur_elements(int map_id)
> +{
> +	LIBBPF_OPTS(bpf_iter_attach_opts, opts);
> +	union bpf_iter_link_info linfo;
> +	struct map_percpu_stats *skel;
> +	struct bpf_link *link;
> +	__u32 n_elements;
> +	int iter_fd;
> +	int ret;
> +
> +	opts.link_info = &linfo;
> +	opts.link_info_len = sizeof(linfo);
> +
> +	skel = map_percpu_stats__open();
> +	CHECK(skel == NULL, "map_percpu_stats__open", "error: %s", strerror(errno));
> +
> +	skel->bss->target_id = map_id;
> +
> +	ret = map_percpu_stats__load(skel);
> +	CHECK(ret != 0, "map_percpu_stats__load", "error: %s", strerror(errno));
> +
> +	link = bpf_program__attach_iter(skel->progs.dump_bpf_map, &opts);

Instead of passing an uninitialized opts, I think using NULL will be fine
here, because there are no options for the bpf map iterator at the moment.
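That is, just:

	link = bpf_program__attach_iter(skel->progs.dump_bpf_map, NULL);

and then opts/linfo can be dropped from get_cur_elements() entirely.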




