On Wed, Sep 13, 2023 at 12:38 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>
> Stats flushing for memcg currently follows the following rules:
> - Always flush the entire memcg hierarchy (i.e. flush the root).
> - Only one flusher is allowed at a time. If someone else tries to
>   flush concurrently, they skip and return immediately.
> - A periodic flusher flushes all the stats every 2 seconds.
>
> The reason this approach is followed is that all flushes are
> serialized by a global rstat spinlock. On the memcg side, flushing is
> invoked from userspace reads as well as in-kernel flushers (e.g.
> reclaim, refault, etc.). This approach aims to avoid serializing all
> flushers on the global lock, which can cause a significant
> performance hit under high concurrency.
>
> This approach has the following problems:
> - Occasionally a userspace read of the stats of a non-root cgroup
>   will be too expensive as it has to flush the entire hierarchy [1].

This is a real-world workload exhibiting the issue, which is good.

> - Sometimes stats accuracy is compromised if there is an ongoing
>   flush, and we skip and return before the subtree of interest is
>   actually flushed. This is more visible when reading stats from
>   userspace, but can also affect in-kernel flushers.

Please provide similar data/justification for the above. In addition:

1. How much delay/staleness in the stats have you observed on a real
   world workload?
2. What is acceptable staleness in the stats for your use-case?
3. What is your use-case?
4. Does your use-case care about the staleness of all the stats in
   memory.stat or only some specific stats?
5. If some specific stats in memory.stat, does it make sense to
   decouple them from rstat and just pay the price up front to
   maintain them accurately?

Most importantly, please please please be concise in your responses. I
know I am going back on some of the previous agreements, but this whole
locking back and forth has made me question the original motivation.

thanks,
Shakeel
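
[For readers skimming the thread: below is a minimal, illustrative
userspace C sketch of the "single flusher, others skip" pattern quoted
above. All names are hypothetical and this is not the actual
mm/memcontrol.c implementation; it only shows why a caller that skips
can return with its subtree's stats still stale.]

/*
 * Minimal sketch of the single-flusher scheme described above.
 * Illustrative only; names do not match the kernel sources.
 */
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag flush_ongoing = ATOMIC_FLAG_INIT;

/* Stands in for flushing the entire hierarchy from the root. */
static void flush_whole_hierarchy(void)
{
	printf("flushing all stats from the root\n");
}

static void maybe_flush_stats(void)
{
	/*
	 * If another flusher already holds the flag, skip and return
	 * immediately. The ongoing flush may not have reached the
	 * subtree this caller cares about, so the caller can observe
	 * stale stats -- the accuracy problem discussed above.
	 */
	if (atomic_flag_test_and_set(&flush_ongoing))
		return;

	flush_whole_hierarchy();
	atomic_flag_clear(&flush_ongoing);
}

int main(void)
{
	maybe_flush_stats();
	return 0;
}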