On Mon, Aug 29, 2022 at 2:47 PM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
>
> On 8/26/22 4:44 AM, Alexei Starovoitov wrote:
> > From: Alexei Starovoitov <ast@xxxxxxxxxx>
> >
> > The atomic_inc/dec might cause extreme cache line bouncing when multiple cpus
> > access the same bpf map. Based on the specified max_entries for the hash map,
> > calculate when percpu_counter becomes faster than atomic_t and use it for such
> > maps. For example, samples/bpf/map_perf_test uses a hash map with max_entries
> > 1000. On a system with 16 cpus the 'map_perf_test 4' shows 14k events per
> > second using atomic_t. On a system with 15 cpus it shows 100k events per second
> > using percpu. map_perf_test is an extreme case where all cpus are colliding on
> > atomic_t, which causes extreme cache bouncing. Note that the slow path of
> > percpu_counter is 5k events per second vs 14k for atomic, so the heuristic is
> > necessary. See the comment in the code for why the heuristic is based on
> > num_online_cpus().
>
> nit: Could we include this logic inside percpu_counter, or as an extended
> version of it? Except for the heuristic of attr->max_entries / 2 > num_online_cpus() *
> PERCPU_COUNTER_BATCH, which toggles between plain atomic vs percpu_counter, the
> rest feels generic enough that it could also be applicable outside bpf.

The heuristic is probably not generic enough and this optimization is a stopgap.
It helps many cases, but doesn't solve all of them.
It's ok for this specific large hash map to count max_entries,
but we shouldn't claim generality and suggest this heuristic to anyone else.
I was thinking of doing a follow up and creating a true generic combined
percpu and atomic counter, similar to percpu_ref, that can switch
from percpu to atomic.
But it's more of a wish-list task atm.
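
To make the per-map approach concrete, the selection between atomic_t and
percpu_counter looks roughly like the sketch below. This is not the exact
hashtab.c code from the patch; the struct and function names (example_htab,
example_htab_init_count, example_htab_inc_count) are illustrative only.

	/* Sketch: pick the element counter type based on map size vs cpu count. */
	#include <linux/atomic.h>
	#include <linux/cpumask.h>
	#include <linux/gfp.h>
	#include <linux/percpu_counter.h>
	#include <linux/types.h>

	struct example_htab {
		u32 max_entries;
		bool use_percpu_counter;
		atomic_t count;			/* used when the map is small */
		struct percpu_counter pcount;	/* used when the map is large */
	};

	static int example_htab_init_count(struct example_htab *htab)
	{
		/*
		 * percpu_counter only pays off when the per-cpu deltas
		 * (up to num_online_cpus() * PERCPU_COUNTER_BATCH) stay well
		 * below max_entries; otherwise its slow path is hit constantly
		 * and a plain atomic_t is faster.
		 */
		htab->use_percpu_counter = htab->max_entries / 2 >
					   num_online_cpus() * PERCPU_COUNTER_BATCH;

		if (htab->use_percpu_counter)
			return percpu_counter_init(&htab->pcount, 0, GFP_KERNEL);

		atomic_set(&htab->count, 0);
		return 0;
	}

	static void example_htab_inc_count(struct example_htab *htab)
	{
		if (htab->use_percpu_counter)
			percpu_counter_add_batch(&htab->pcount, 1,
						 PERCPU_COUNTER_BATCH);
		else
			atomic_inc(&htab->count);
	}

A generic combined counter would hide the use_percpu_counter branch behind a
single API and handle the atomic <-> percpu switch itself, similar to percpu_ref.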