On Mon, 8 Jul 2024, Dan Schatzberg wrote:

> On Sat, Jul 06, 2024 at 01:55:11PM -0700, David Rientjes wrote:
> > Rather than hacky scripts that collect things like vmstat, memory.stat,
> > buddyinfo, etc, at regular intervals, it would be preferable to hand off
> > something more complete.  Idea is an open source tool that can be run in
> > the background to collect metrics for the system, NUMA nodes, and memcg
> > hierarchies, as well as potentially from subsystems in the kernel like
> > delay accounting.  IOW, I want to be able to say "install ${tool} and send
> > over the log file."
> >
> > Are there any open source tools that do a good job of this today that I
> > can latch onto?  If not, sounds like I'll be writing one from scratch.
> > Let me know if there's interest in this as well.
> >
> > Thanks!
>
> Hi David,
>
> At Meta we have built and deployed below [1] for this purpose.  It's a
> tool similar to `top` or others, but it can record system state
> periodically and allow for replaying it.  We run this on our production
> fleet, periodically recording system state to the local disk.  When we
> need to debug a machine at a point in the past, we can log in and
> replay the state.  This uses a TUI (see the link for a demo) to make
> navigating the data more natural.
>
> I'm aware of a few other organizations who have also deployed below,
> but they tend to run it more in the manner you suggest - have it record
> data, then use the snapshot command to export the state (e.g. as if
> it were a log file) so that it can be viewed off-host.  Some
> organizations eschew the TUI altogether and export the data to
> Prometheus/Grafana.
>
> I'll caution, though, that having the data is one thing; being able to
> interpret it is entirely different.  While we try to put the most
> useful and easily understood metrics front and center in the TUI,
> debugging an issue like you describe would probably require some
> domain expertise.
>
> [1] https://github.com/facebookincubator/below

Thanks Dan, this is fantastic!  I've been playing with it locally.

This does indeed appear to meet the exact needs of what I was referring
to above; I'm excited that this already exists.

A few questions for you:

 - Do you know of anybody who has deployed this in their guest when
   running on a public cloud?

 - Is there a motivation to add this to well-known distros so it is
   "just there" and can run out of the box?  There's some configuration
   and setup that it requires.

 - How receptive are the maintainers to adding new data points, things
   like additional fields from vmstat, adding in /proc/pagetypeinfo,
   etc.?

 - Any plans to support cgroup v1? :)  Would that be nacked outright?
   Some customers still run this in their guest.

 - For the "/usr/bin/below record --retain-for-s 604800 --compress"
   support, is there an appetite for separating this out into its own
   non-systemd-managed process?  IOW, the ability to tell the customer
   "go run 'mini-below' and send over the data" that *just* does the
   record operation and doesn't require installing/configuring anything?

This could be potentially very exciting.  Happy to take this discussion
off-list as well: if anybody else from this thread (or not yet on this
thread) is interested, please let me know so I can include you.

Thanks!
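
P.S. For anyone following along, the record-and-export flow I have in
mind looks roughly like the sketch below.  The record invocation is the
one quoted above; the replay/snapshot flags are my assumption from a
quick read of the README, so check `below replay --help` and
`below snapshot --help` for the exact options:

    # record continuously in the background, keeping ~7 days of
    # compressed samples on local disk
    /usr/bin/below record --retain-for-s 604800 --compress

    # replay the recorded state interactively at a point in the past
    below replay -t "10m ago"

    # export a window of recorded data so it can be viewed off-host
    # (time-range flags assumed; see `below snapshot --help`)
    below snapshot ...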