Re: [BPF PATCH for-next] cgroup/bpf: fast path for not loaded skb BPF filtering

On 12/11/21 00:38, Martin KaFai Lau wrote:
On Fri, Dec 10, 2021 at 02:23:34AM +0000, Pavel Begunkov wrote:
The cgroup_bpf_enabled_key static key guards against overhead in cases
where no cgroup bpf program of a specific type is loaded in any cgroup.
Turns out that's not always good enough, e.g. when there are many
cgroups but the ones we're interested in have no bpf programs attached.
This is seen in server environments, but the problem seems to be even
wider, as apparently systemd loads some BPF that affects my laptop.
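For reference, the static key guard itself is roughly the following
(from memory, see include/linux/bpf-cgroup.h), i.e. a single patched
jump per call site as long as no program of that attach type is loaded
anywhere:

	/* one static key per attach type; flipped on first attach */
	#define cgroup_bpf_enabled(atype) \
		static_branch_unlikely(&cgroup_bpf_enabled_key[atype])

It can't help once any cgroup in the system has a program of that type
attached, which is exactly the case above.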

Profiles for small packet or zerocopy transmissions over a fast network
show __cgroup_bpf_run_filter_skb() taking 2-3%, 1% of which is from
migrate_disable/enable(), and similarly on the receiving side. Local
testing also showed a 4-5% throughput gain.

Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
---
  include/linux/bpf-cgroup.h | 24 +++++++++++++++++++++---
  kernel/bpf/cgroup.c        | 23 +++++++----------------
  2 files changed, 28 insertions(+), 19 deletions(-)

diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
index 11820a430d6c..99b01201d7db 100644
--- a/include/linux/bpf-cgroup.h
+++ b/include/linux/bpf-cgroup.h
@@ -141,6 +141,9 @@ struct cgroup_bpf {
  	struct list_head progs[MAX_CGROUP_BPF_ATTACH_TYPE];
  	u32 flags[MAX_CGROUP_BPF_ATTACH_TYPE];
+	/* for each type, tracks whether the effective prog array is non-empty */
+	unsigned long enabled_mask;
+
  	/* list of cgroup shared storages */
  	struct list_head storages;
@@ -219,11 +222,25 @@ int bpf_percpu_cgroup_storage_copy(struct bpf_map *map, void *key, void *value);
  int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
  				     void *value, u64 flags);
+static inline bool __cgroup_bpf_type_enabled(struct cgroup_bpf *cgrp_bpf,
+					     enum cgroup_bpf_attach_type atype)
+{
+	return test_bit(atype, &cgrp_bpf->enabled_mask);
+}
+
+#define CGROUP_BPF_TYPE_ENABLED(sk, atype)				       \
+({									       \
+	struct cgroup *__cgrp = sock_cgroup_ptr(&(sk)->sk_cgrp_data);	       \
+									       \
+	__cgroup_bpf_type_enabled(&__cgrp->bpf, (atype));		       \
+})
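For context, the intended use at a call site looks roughly like this
(sketch only; the actual hunks are elsewhere in the patch):

	/* hypothetical call-site sketch, not a hunk from this diff */
	if (cgroup_bpf_enabled(CGROUP_INET_EGRESS) && sk &&
	    CGROUP_BPF_TYPE_ENABLED(sk, CGROUP_INET_EGRESS))
		ret = __cgroup_bpf_run_filter_skb(sk, skb,
						  CGROUP_INET_EGRESS);

i.e. the static key still short-circuits the no-programs-anywhere case,
and the new per-cgroup bit skips cgroups with no effective programs of
that type.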
I think it should directly test if the array is empty or not instead of
adding another bit.

Can the existing __cgroup_bpf_prog_array_is_empty(cgrp, ...) test be used instead?
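For reference, that helper is roughly (from memory, kernel/bpf/cgroup.c):

	static bool __cgroup_bpf_prog_array_is_empty(struct cgroup *cgrp,
						     enum cgroup_bpf_attach_type atype)
	{
		struct bpf_prog_array *array;
		bool empty;

		rcu_read_lock();
		array = rcu_dereference(cgrp->bpf.effective[atype]);
		empty = bpf_prog_array_is_empty(array);
		rcu_read_unlock();
		return empty;
	}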

That was the first idea, but it's still heavier than I'd wish: 0.3-0.7%
in profiles, and something similar in reqs/s. The rcu_read_lock/unlock()
pair is cheap but still adds 2 barrier()s, whereas with the bitmask we
can inline the check.
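
To spell out the "2 barrier()s" bit: even on !PREEMPT kernels, where
the pair compiles to no instructions, it still looks roughly like this
(sketch, see include/linux/rcupdate.h):

	rcu_read_lock();	/* preempt_disable() -> barrier() */
	...
	rcu_read_unlock();	/* preempt_enable()  -> barrier() */

Each barrier() is a compiler barrier the surrounding code can't be
optimised across, while test_bit() on the cached mask is a single
inlinable load and test.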

--
Pavel Begunkov


