Hello,

We're seeing CPU load issues with cgroup stats retrieval. I made
a public gist with all the details, including the repro code (which
unfortunately requires heavily loaded hardware) and some flamegraphs:

* https://gist.github.com/bobrik/5ba58fb75a48620a1965026ad30a0a13

I'll repeat the gist of that gist here. Our repro has the following
output after a warm-up run:

completed: 5.17s [manual / mem-stat + cpu-stat]
completed: 5.59s [manual / cpu-stat + mem-stat]
completed: 0.52s [manual / mem-stat]
completed: 0.04s [manual / cpu-stat]

The first two lines do effectively the following:

for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/memory.stat /sys/fs/cgroup/system.slice/cpu.stat > /dev/null; done

The latter two are the same thing, but done via two separate loops:

for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/cpu.stat > /dev/null; done
for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/memory.stat > /dev/null; done

As you might've noticed from the output, splitting the loop into two
makes the code run 10x faster. This isn't great, because most monitoring
software likes to read all stats for one service before moving on to the
next one, which maps to the slow and expensive way of doing this (there's
a small sketch of that pattern at the end of this email).

We're running Linux v6.1 (the output above is from v6.1.25) with no
patches that touch the cgroup or mm subsystems, so you can assume a
vanilla kernel.

From the flamegraph it just looks like rstat flushing takes longer. I
used the following flags on an AMD EPYC 7642 system (our usual pick,
cpu-clock, was blaming spinlock irqrestore, which was questionable):

perf record -e cycles -g --call-graph fp -F 999 -- /tmp/repro

Naturally, two questions arise:

* Is this expected (I'd guess not, but it's good to be sure)?
* What can we do to make this better?

I am happy to try out patches or to do some tracing to help understand
this better.
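
For context, here is a minimal sketch of the per-service read pattern I
mean above. It is not our actual collector, just an illustration, and the
*.service glob under system.slice is only an example:

# Hypothetical collector loop: read every stat file for one service
# cgroup before moving on to the next one. This interleaves memory.stat
# and cpu.stat reads, i.e. the slow pattern from the numbers above.
for cg in /sys/fs/cgroup/system.slice/*.service; do
    cat "$cg/memory.stat" "$cg/cpu.stat" > /dev/null
done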
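
For anyone who wants to render a similar flamegraph from the resulting
perf.data, the usual FlameGraph pipeline should do; the path to Brendan
Gregg's FlameGraph scripts below is just an example, adjust to taste:

# Fold the sampled stacks and render them as an SVG flamegraph.
perf script | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl > repro.svg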