On Thu, Apr 27, 2023 at 4:48 PM Namhyung Kim <namhyung@xxxxxxxxxx> wrote:
>
> It seems BPF CO-RE reloc doesn't work well with the pattern that gets
> the field-offset only.  Use offsetof() to make it explicit so that
> the compiler would generate the correct code.
>
> Fixes: 0c1228486bef ("perf lock contention: Support pre-5.14 kernels")
> Co-developed-by: Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx>
> Signed-off-by: Namhyung Kim <namhyung@xxxxxxxxxx>

Acked-by: Ian Rogers <irogers@xxxxxxxxxx>

Thanks,
Ian

> ---
>  tools/perf/util/bpf_skel/lock_contention.bpf.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/tools/perf/util/bpf_skel/lock_contention.bpf.c b/tools/perf/util/bpf_skel/lock_contention.bpf.c
> index 30c193078bdb..8d3cfbb3cc65 100644
> --- a/tools/perf/util/bpf_skel/lock_contention.bpf.c
> +++ b/tools/perf/util/bpf_skel/lock_contention.bpf.c
> @@ -429,21 +429,21 @@ struct rq___new {
>  SEC("raw_tp/bpf_test_finish")
>  int BPF_PROG(collect_lock_syms)
>  {
> -	__u64 lock_addr;
> +	__u64 lock_addr, lock_off;
>  	__u32 lock_flag;
>
> +	if (bpf_core_field_exists(struct rq___new, __lock))
> +		lock_off = offsetof(struct rq___new, __lock);
> +	else
> +		lock_off = offsetof(struct rq___old, lock);
> +
>  	for (int i = 0; i < MAX_CPUS; i++) {
>  		struct rq *rq = bpf_per_cpu_ptr(&runqueues, i);
> -		struct rq___new *rq_new = (void *)rq;
> -		struct rq___old *rq_old = (void *)rq;
>
>  		if (rq == NULL)
>  			break;
>
> -		if (bpf_core_field_exists(rq_new->__lock))
> -			lock_addr = (__u64)&rq_new->__lock;
> -		else
> -			lock_addr = (__u64)&rq_old->lock;
> +		lock_addr = (__u64)(void *)rq + lock_off;
>  		lock_flag = LOCK_CLASS_RQLOCK;
>  		bpf_map_update_elem(&lock_syms, &lock_addr, &lock_flag, BPF_ANY);
>  	}
> --
> 2.40.1.495.gc816e09b53d-goog
>
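
For reference, the rq___new/rq___old types above are CO-RE "flavors": libbpf ignores
everything from the triple underscore onward when matching a type against the running
kernel's BTF, so one program can probe either field name for struct rq. A minimal sketch
of the pattern the patch switches to follows; the struct bodies and the rq_lock_offset()
helper are illustrative assumptions, not the exact code in lock_contention.bpf.c:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_core_read.h>

    /* CO-RE "flavor" types: the ___new/___old suffix is stripped by libbpf
     * when matching against kernel BTF, so both resolve to struct rq. */
    struct rq___new {
    	raw_spinlock_t __lock;	/* kernels >= 5.14 */
    } __attribute__((preserve_access_index));

    struct rq___old {
    	raw_spinlock_t lock;	/* kernels < 5.14 */
    } __attribute__((preserve_access_index));

    /* Hypothetical helper showing the offsetof()-based pattern. */
    static __u64 rq_lock_offset(void)
    {
    	/* Asking for the field offset explicitly makes the compiler emit a
    	 * CO-RE relocation for the offset itself, rather than deriving it
    	 * from the address of a relocated field. */
    	if (bpf_core_field_exists(struct rq___new, __lock))
    		return offsetof(struct rq___new, __lock);
    	return offsetof(struct rq___old, lock);
    }

The per-CPU lock address is then just the runqueue pointer plus that offset, which is
exactly what the hunk above computes with lock_addr = (__u64)(void *)rq + lock_off.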