Re: [RFC bpf-next v2 1/4] selftests/bpf: Add benchmark for bpf memory allocator

On Sun, Apr 23, 2023 at 09:55:24AM +0800, Hou Tao wrote:
> >
> >> ./bench htab-mem --use-case $name --max-entries 16384 \
> >> 	--full 50 -d 7 -w 3 --producers=8 --prod-affinity=0-7
> >>
> >> | name                | loop (k/s) | average memory (MiB) | peak memory (MiB) |
> >> | --                  | --         | --                   | --                |
> >> | no_op               | 1129       | 1.15                 | 1.15              |
> >> | overwrite           | 24.37      | 2.07                 | 2.97              |
> >> | batch_add_batch_del | 10.58      | 2.91                 | 3.36              |
> >> | add_del_on_diff_cpu | 13.14      | 380.66               | 633.99            |
> > large mem for diff_cpu case needs to be investigated.
> The main reason is that the tasks-trace RCU grace period is slow and there is
> only one inflight free callback, so the CPUs which only do element addition
> allocate new memory from slab continuously, and the CPUs which only do element
> deletion free these elements continuously through call_tasks_trace_rcu(); but
> due to the slowness of the tasks-trace RCU grace period, these freed elements
> cannot be returned to the slab subsystem in a timely manner.

I see. Now it makes sense. It's the slow call_tasks_trace_rcu(), and not
"memory can never be reused" at all.
Please explain this clearly in the commit log.

> >> +{
> >> +	__u64 *value;
> >> +
> >> +	if (ctx->from >= ctx->max)
> >> +		return 1;
> >> +
> >> +	value = bpf_map_lookup_elem(&array, &ctx->from);
> >> +	if (value)
> >> +		bpf_map_update_elem(&htab, &ctx->from, value, flags);
> > What is the point of doing a lookup in a giant array of elements with zero
> > values just to copy one into the htab?
> > Why not use a single zero-initialized element for all htab ops?
> I want to check how different value sizes affect the benchmark results, so I
> chose a variable-size value.

Not following. All elements of the array have the same size.
Are you saying you were not able to figure out how to supply a single 'value'
of variable size? Try an array with max_entries=1.
Do not do the unnecessary and confusing bpf_map_lookup_elem(&array, &ctx->from);.

> >
> > Each loop will run 16k times and every time you step += 4.
> > So 3/4 of these 16k runs will hit the if (ctx->from >= ctx->max) condition.
> > What are you measuring?
> As explained in the commit message, I am trying to let different addition and
> deletion CPU pairs operate on different subsets of the hash-table elements.
> Assuming there are 16 elements in the htab and there are 8 CPUs and 8 threads,
> the following is the operation subset for each CPU:
> 
> CPU 0:  0 4 8 12 (do deletion)
> CPU 1:  0 4 8 12 (do addition)
> 
> CPU 2:  1 5 9 13
> CPU 3:  1 5 9 13
> 
> CPU 4:  2 6 10 14
> CPU 5:  2 6 10 14
> 
> CPU 6:  3 7 11 15
> CPU 7:  3 7 11 15

That part is clear, but

> >> +	__sync_fetch_and_add(&loop_cnt, 1);

this doesn't match the rest. loop_cnt is incremented 4 times faster,
so it's not comparable to the other tests.


