On Thu, Sep 17, 2020 at 8:46 AM Yonghong Song <yhs@xxxxxx> wrote:
>
> If a bucket contains a lot of sockets, then while bpf_iter is traversing
> that bucket, concurrent userspace bpf_map_update_elem() calls and
> bpf program bpf_sk_storage_{get,delete}() calls may experience
> undesirable delays, as they compete with bpf_iter
> for the bucket lock.
>
> Note that the number of buckets for a bpf_sk_storage_map
> is roughly the same as the number of cpus. So if there
> are lots of sockets in the system, each bucket could
> contain lots of sockets.
>
> Different actual use cases may experience different delays.
> Here, using the selftest bpf_iter subtest bpf_sk_storage_map,
> I hacked the kernel with ktime_get_mono_fast_ns()
> to collect how long a bucket stayed locked
> while the bpf_iter prog traversed that bucket. This way,
> the maximum incurred delay was measured w.r.t. the
> number of elements in a bucket.
>
>   # elems in each bucket    delay(ns)
>   64                        17000
>   256                       72512
>   2048                      875246
>
> The potential delays will increase further if
> we have even more elements in a bucket. Using rcu_read_lock()
> is a reasonable compromise here. It may lose some precision, e.g.,
> access stale sockets, but it will not hurt the performance of
> bpf programs or user space applications which also try
> to get/delete or update map elements.
>
> Cc: Martin KaFai Lau <kafai@xxxxxx>
> Acked-by: Song Liu <songliubraving@xxxxxx>
> Signed-off-by: Yonghong Song <yhs@xxxxxx>
> ---
>  net/core/bpf_sk_storage.c | 31 +++++++++++++------------------
>  1 file changed, 13 insertions(+), 18 deletions(-)
>
> Changelog:
>   v3 -> v4:
>     - use rcu_dereference/hlist_next_rcu for hlist_entry_safe. (Martin)

Applied. Thanks