On Mon, May 16, 2022 at 7:08 PM Alexei Starovoitov
<alexei.starovoitov@xxxxxxxxx> wrote:
>
> On Sun, May 15, 2022 at 02:34:59AM +0000, Yosry Ahmed wrote:
> > +
> > +void bpf_rstat_flush(struct cgroup *cgrp, int cpu)
> > +{
> > +	struct bpf_rstat_flusher *flusher;
> > +	struct bpf_rstat_flush_ctx ctx = {
> > +		.cgrp = cgrp,
> > +		.parent = cgroup_parent(cgrp),
> > +		.cpu = cpu,
> > +	};
> > +
> > +	rcu_read_lock();
> > +	migrate_disable();
> > +	spin_lock(&bpf_rstat_flushers_lock);
> > +
> > +	list_for_each_entry(flusher, &bpf_rstat_flushers, list)
> > +		(void) bpf_prog_run(flusher->prog, &ctx);
> > +
> > +	spin_unlock(&bpf_rstat_flushers_lock);
> > +	migrate_enable();
> > +	rcu_read_unlock();
> > +}
> > diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
> > index 24b5c2ab5598..0285d496e807 100644
> > --- a/kernel/cgroup/rstat.c
> > +++ b/kernel/cgroup/rstat.c
> > @@ -2,6 +2,7 @@
> >  #include "cgroup-internal.h"
> >
> >  #include <linux/sched/cputime.h>
> > +#include <linux/bpf-rstat.h>
> >
> >  static DEFINE_SPINLOCK(cgroup_rstat_lock);
> >  static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);
> > @@ -168,6 +169,7 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep)
> >  		struct cgroup_subsys_state *css;
> >
> >  		cgroup_base_stat_flush(pos, cpu);
> > +		bpf_rstat_flush(pos, cpu);
>
> Please use the following approach instead:
>
> __weak noinline void bpf_rstat_flush(struct cgroup *cgrp, struct cgroup *parent, int cpu)
> {
> }
>
> and change the above line to:
>
> bpf_rstat_flush(pos, cgroup_parent(pos), cpu);
>
> Then tracing bpf fentry progs will be able to attach to bpf_rstat_flush,
> and pretty much all of patches 1, 2, and 3 become unnecessary.
> In patch 4, add bpf_cgroup_rstat_updated/flush as two kfuncs instead of
> stable helpers.
>
> This way patches 1,2,3,4 will become 2 trivial patches, and we will be
> able to extend the interface between cgroup rstat and bpf whenever we
> need to, without worrying about uapi stability.
>
> We had a similar discussion with the HID subsystem, which plans to use
> bpf in HID with the same approach.
> See this patch set:
> https://lore.kernel.org/bpf/20220421140740.459558-2-benjamin.tissoires@xxxxxxxxxx/
> You'd need patch 1 from it to enable kfuncs for tracing.
>
> Your patch 5 is needed as-is.
> Yonghong,
> please review it.
> The different approach for patches 1-4 won't affect patch 5.
> Patches 6 and 7 look good.
>
> With this approach, patch 7 will mostly stay as-is. Instead of:
>
> +SEC("rstat/flush")
> +int vmscan_flush(struct bpf_rstat_flush_ctx *ctx)
> +{
> +	struct vmscan_percpu *pcpu_stat;
> +	struct vmscan *total_stat, *parent_stat;
> +	struct cgroup *cgrp = ctx->cgrp, *parent = ctx->parent;
>
> it will become:
>
> SEC("fentry/bpf_rstat_flush")
> int BPF_PROG(vmscan_flush, struct cgroup *cgrp, struct cgroup *parent, int cpu)

Thanks so much for taking the time to look into this. Indeed, this
approach looks cleaner and simpler. I will incorporate it into a V1 and
send it.

Thanks!
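
To make sure I'm reading the suggestion right, the patch 7 flusher
would then look something like the following. This is only a minimal
sketch on my end; the vmscan struct layout and the vmscan_totals map
below are simplified placeholders, not the actual patch 7 code:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* Simplified placeholder for the aggregated per-cgroup stat. */
struct vmscan {
	__u64 state;
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u64);		/* cgroup id */
	__type(value, struct vmscan);
} vmscan_totals SEC(".maps");

SEC("fentry/bpf_rstat_flush")
int BPF_PROG(vmscan_flush, struct cgroup *cgrp, struct cgroup *parent, int cpu)
{
	/* cgrp, parent, and cpu arrive as typed fentry arguments, so no
	 * custom context struct (and no new program type) is needed, and
	 * fields like cgrp->kn->id are readable directly thanks to BTF. */
	__u64 cg_id = cgrp->kn->id;
	struct vmscan *stat = bpf_map_lookup_elem(&vmscan_totals, &cg_id);

	if (!stat)
		return 0;
	/* ... fold this cgroup's pending per-cpu deltas into *stat and
	 * propagate them into the parent's entry (parent can be NULL for
	 * the root cgroup, so it has to be checked before use) ... */
	return 0;
}

If that matches what you had in mind, the conversion really is mostly
mechanical: the SEC() name and the prototype change, and all of the
custom attachment plumbing goes away since libbpf already knows how to
attach fentry programs.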