Re: [PATCH] selftests/bpf: simplify cgroup_hierarchical_stats selftest

Hi Hao,

Thanks for taking a look!

On Mon, Aug 29, 2022 at 1:08 PM Hao Luo <haoluo@xxxxxxxxxx> wrote:
>
> On Fri, Aug 26, 2022 at 4:06 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> >
> > The cgroup_hierarchical_stats selftest is complicated. It has to be,
> > because it tests an entire workflow of recording, aggregating, and
> > dumping cgroup stats. However, some of the complexity is unnecessary.
> > The test now enables the memory controller in a cgroup hierarchy, invokes
> > reclaim, measure reclaim time, THEN uses that reclaim time to test the
> > stats collection and aggregation. We don't need to use such a
> > complicated stat, as the context in which the stat is collected is
> > orthogonal.
> >
> > Simplify the test by using a simple stat instead of reclaim time, the
> > total number of times a process has ever entered a cgroup. This makes
> > the test simpler and removes the dependency on the memory controller and
> > the memory reclaim interface.
> >
> > Signed-off-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> > ---
>
> Yosry, please tag the patch with the repo it should be applied on:
> bpf-next or bpf.
>

Will do for v2.
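
For reference, I'm planning on a subject prefix along these lines
(assuming bpf-next is the right tree for this):

    [PATCH bpf-next v2] selftests/bpf: simplify cgroup_hierarchical_stats selftest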

> >
> > When the test failed on Alexei's setup because the memory controller was
> > not enabled I realized this is an unnecessary dependency for the test,
> > which inspired this patch :) I am not sure if this prompt a Fixes tag as
> > the test wasn't broken.
> >
> > ---
> >  .../prog_tests/cgroup_hierarchical_stats.c    | 157 ++++++---------
> >  .../bpf/progs/cgroup_hierarchical_stats.c     | 181 ++++++------------
> >  2 files changed, 118 insertions(+), 220 deletions(-)
> >
> [...]
> > diff --git a/tools/testing/selftests/bpf/progs/cgroup_hierarchical_stats.c b/tools/testing/selftests/bpf/progs/cgroup_hierarchical_stats.c
> > index 8ab4253a1592..c74362854948 100644
> > --- a/tools/testing/selftests/bpf/progs/cgroup_hierarchical_stats.c
> > +++ b/tools/testing/selftests/bpf/progs/cgroup_hierarchical_stats.c
> > @@ -1,7 +1,5 @@
> >  // SPDX-License-Identifier: GPL-2.0-only
> >  /*
> > - * Functions to manage eBPF programs attached to cgroup subsystems
> > - *
>
> Please also add comments here explaining what the programs in this file do.
>

Will do.
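
Something along these lines, as a first draft:

    /*
     * Functions to test hierarchical collection of cgroup stats: record a
     * simple per-cgroup stat (the number of times a process has entered
     * the cgroup), aggregate it up the hierarchy, and dump the aggregated
     * stats.
     */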

> >   * Copyright 2022 Google LLC.
> >   */
> [...]
> >
> > -SEC("tp_btf/mm_vmscan_memcg_reclaim_begin")
> > -int BPF_PROG(vmscan_start, int order, gfp_t gfp_flags)
> > +SEC("fentry/cgroup_attach_task")
>
> Can we select an attachpoint that is more stable? It seems
> 'cgroup_attach_task' is an internal helper function in cgroup, and its
> signature can change. I'd prefer using those commonly used tracepoints
> and EXPORT'ed functions. IMHO their interfaces are more stable.
>

Will try to find a more stable attach point. Thanks!
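
One candidate I'll look into is the cgroup_attach_task tracepoint from
include/trace/events/cgroup.h, which should be more stable than the
internal helper. Roughly (untested, and the map/prog names here are just
placeholders):

    /* count how many times a process attached to each cgroup */
    struct {
            __uint(type, BPF_MAP_TYPE_HASH);
            __uint(max_entries, 128);
            __type(key, __u64);     /* cgroup id */
            __type(value, __u64);   /* attach count */
    } attach_counts SEC(".maps");

    SEC("tp_btf/cgroup_attach_task")
    int BPF_PROG(counter, struct cgroup *dst_cgrp, const char *path,
                 struct task_struct *task, bool threadgroup)
    {
            __u64 cg_id = dst_cgrp->kn->id;
            __u64 *count, init = 1;

            count = bpf_map_lookup_elem(&attach_counts, &cg_id);
            if (count)
                    __sync_fetch_and_add(count, 1);
            else
                    bpf_map_update_elem(&attach_counts, &cg_id, &init,
                                        BPF_NOEXIST);
            return 0;
    }

Does that sound reasonable to you?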

> > +int BPF_PROG(counter, struct cgroup *dst_cgrp, struct task_struct *leader,
> > +            bool threadgroup)
> >  {
> > -       struct task_struct *task = bpf_get_current_task_btf();
> > -       __u64 *start_time_ptr;
> > -
> > -       start_time_ptr = bpf_task_storage_get(&vmscan_start_time, task, 0,
> > -                                             BPF_LOCAL_STORAGE_GET_F_CREATE);
> > -       if (start_time_ptr)
> > -               *start_time_ptr = bpf_ktime_get_ns();
> > -       return 0;
> > -}
> [...]
> >  }
> > --
> > 2.37.2.672.g94769d06f0-goog
> >


