Re: [PATCH 0/4 v2] cgroup: separate rstat trees

Hello JP.

On Thu, Feb 27, 2025 at 01:55:39PM -0800, inwardvessel <inwardvessel@xxxxxxxxx> wrote:
> From: JP Kobryn <inwardvessel@xxxxxxxxx>
> 
> The current design of rstat takes the approach that if one subsystem is
> to be flushed, all other subsystems with pending updates should also be
> flushed. It seems that over time, the stat-keeping of some subsystems
> has grown in size to the extent that they are noticeably slowing down
> others. This has been most observable in situations where the memory
> controller is enabled. One big area where the issue comes up is system
> telemetry, where programs periodically sample cpu stats. It would be a
> benefit for programs like this if the overhead of having to flush memory
> stats (and others) could be eliminated. It would save cpu cycles for
> existing cpu-based telemetry programs and improve scalability in terms
> of sampling frequency and volume of hosts.
 
> This series changes the approach of "flush all subsystems" to "flush
> only the requested subsystem".
...
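
For other readers: I understand the change as going from "flushing a
cgroup flushes all subsystems" to "flush only the given subsystem's
tree". Roughly this shape (a sketch of the idea; the "after" signature
is my guess at the interface, not necessarily the function from the
patch):

	/* before: one call flushes every subsystem with pending updates */
	void cgroup_rstat_flush(struct cgroup *cgrp);

	/* after (sketch): the caller names the subsystem it cares about,
	 * so e.g. reading cpu.stat no longer flushes memory stats */
	void css_rstat_flush(struct cgroup_subsys_state *css);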

> before:
> sizeof(struct cgroup_rstat_cpu) =~ 176 bytes /* can vary based on config */
> 
> nr_cgroups * sizeof(struct cgroup_rstat_cpu)
> nr_cgroups * 176 bytes
> 
> after:
...
> nr_cgroups * (176 + 16 * 2)
> nr_cgroups * 208 bytes
 
I.e. ~32B/cgroup/cpu of additional memory.
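
Spelling that out: the "16 * 2" presumably accounts for one extra
per-subsystem tree node per split-out subsystem, per cgroup, per CPU.
On a 64-bit kernel, 16 bytes is exactly two pointers, so such a node
could plausibly look like this (a hypothetical sketch, not the struct
from the patch):

	/* hypothetical per-subsystem updated-tree node (64-bit kernel) */
	struct css_rstat_cpu {
		struct cgroup_subsys_state *updated_children; /* 8 bytes */
		struct cgroup_subsys_state *updated_next;     /* 8 bytes */
	};
	/* per cgroup per CPU: 176 + 2 * 16 = 208 bytes */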

> With regard to validation, there is a measurable benefit when reading
> stats with this series. A test program was made to loop 1M times while
> reading all four of the files cgroup.stat, cpu.stat, io.stat,
> memory.stat of a given parent cgroup each iteration. This test program
> has been run in the experiments that follow.
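
The reader loop, as I understand the description, would be something
like this minimal sketch (the cgroup path is a placeholder):

	#include <fcntl.h>
	#include <unistd.h>

	static const char *files[] = {
		"/sys/fs/cgroup/test/cgroup.stat", /* placeholder parent */
		"/sys/fs/cgroup/test/cpu.stat",
		"/sys/fs/cgroup/test/io.stat",
		"/sys/fs/cgroup/test/memory.stat",
	};

	int main(void)
	{
		char buf[1 << 16];

		for (int i = 0; i < 1000000; i++) {
			for (int f = 0; f < 4; f++) {
				int fd = open(files[f], O_RDONLY);

				if (fd < 0)
					return 1;
				/* each read triggers an rstat flush */
				while (read(fd, buf, sizeof(buf)) > 0)
					;
				close(fd);
			}
		}
		return 0;
	}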

Thanks for looking into this and running experiments on the behavior of
split rstat trees.

> The first experiment consisted of a parent cgroup with memory.swap.max=0
> and memory.max=1G. On a 52-cpu machine, 26 child cgroups were created
> and within each child cgroup a process was spawned to frequently update
> the memory cgroup stats by creating and then reading a file of size 1T
> (encouraging reclaim). The test program was run alongside these 26 tasks
> in parallel. The results showed a benefit in both time elapsed and perf
> data of the test program.
> 
> time before:
> real    0m44.612s
> user    0m0.567s
> sys     0m43.887s
> 
> perf before:
> 27.02% mem_cgroup_css_rstat_flush
>  6.35% __blkcg_rstat_flush
>  0.06% cgroup_base_stat_cputime_show
> 
> time after:
> real    0m27.125s
> user    0m0.544s
> sys     0m26.491s

So this shows that flushing rstat trees one by one (as the test program
reads *.stat) is quicker than flushing them all at once (plus the idle
reads of *.stat).
Interesting, I would not have bet on that at first, but it is a
convincing argument in favor of the separate-trees approach.

> perf after:
> 6.03% mem_cgroup_css_rstat_flush
> 0.37% blkcg_print_stat
> 0.11% cgroup_base_stat_cputime_show

I can see why the series reduces time spent in
mem_cgroup_flush_stats(), but what does the lower proportion of
mem_cgroup_css_rstat_flush() indicate?


> Another experiment was setup on the same host using a parent cgroup with
> two child cgroups. The same swap and memory max were used as the
> previous experiment. In the two child cgroups, kernel builds were done
> in parallel, each using "-j 20". The perf comparison of the test program
> was very similar to the values in the previous experiment. The time
> comparison is shown below.
> 
> before:
> real    1m2.077s
> user    0m0.784s
> sys     1m0.895s

Is this 1M loops of the stats-reading program, as before? I.e. if this
is meant to be analogous to the 0m44.612s above, why isn't it the same?
(I'm thinking of more frequent updates in the latter test.)

> after:
> real    0m32.216s
> user    0m0.709s
> sys     0m31.256s

What was the impact on the kernel build workloads
(cgroup_rstat_updated)?

(Perhaps the saved 30s of CPU work, if it effectively moved from
readers to writers, would be spread too thin across the two 20-way
parallel kernel builds to be visible, right?)

...
> For the final experiment, perf events were recorded during a kernel
> build with the same host and cgroup setup. The builds took place in the
> child node. Control and experimental sides both showed similar cycle
> counts for cgroup_rstat_updated(), which appeared insignificant
> compared to the other events recorded with the workload.

What is the change between control and experiment? Running in the root
cgroup vs. nested? Or running the kernel build without *.stat readers
vs. with them?
(This clarification would likely answer my question above.)


Michal
