On Wed, Aug 14, 2024 at 04:03:13PM GMT, Nhat Pham wrote:
> On Wed, Aug 14, 2024 at 9:32 AM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
> >
> > Cc'ing Nhat
> >
> > On Wed, Aug 14, 2024 at 02:57:38PM GMT, Jesper Dangaard Brouer wrote:
> > > I suspect the next whack-a-mole will be the rstat flush for the slab
> > > code that kswapd also activates via shrink_slab, which via
> > > shrinker->count_objects() invokes count_shadow_nodes().
> > >
> >
> > Actually, count_shadow_nodes() is already using the ratelimited
> > version. However, zswap_shrinker_count() is still using the sync
> > version. Nhat is modifying this code at the moment, and we can ask if
> > we really need the most accurate values for MEMCG_ZSWAP_B and
> > MEMCG_ZSWAPPED for the zswap writeback heuristic.
>
> You are referring to this, correct:
>
> mem_cgroup_flush_stats(memcg);
> nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
>
> It's already a bit less than accurate - as you pointed out in another
> discussion, it takes into account the objects and sizes of the entire
> subtree, rather than just the ones charged to the current (memcg, node)
> combo. Feel free to optimize this away!
>
> In fact, I should probably replace this with another (atomic?) counter
> in the zswap_lruvec_state struct, which tracks the post-compression
> size. That way, we'll have a better estimate of the compression factor
> (total post-compression size / (length of LRU * page size)), and
> perhaps avoid the whole stat flushing path altogether...

That sounds like a much better solution than relying on rstat for
accurate stats.
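
In the meantime, if the writeback heuristic can tolerate slightly stale
MEMCG_ZSWAP_B / MEMCG_ZSWAPPED values, the minimal change would
presumably be to switch zswap_shrinker_count() to the same ratelimited
flush that count_shadow_nodes() already uses. A rough, untested sketch
against mm/zswap.c:

	-	mem_cgroup_flush_stats(memcg);
	+	/* Stale-but-cheap: skip the flush if one happened recently. */
	+	mem_cgroup_flush_stats_ratelimited(memcg);
	 	nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
	 	nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);

IIUC, mem_cgroup_flush_stats_ratelimited() skips the flush unless the
periodic flusher has fallen behind, which is what keeps
count_shadow_nodes() off the sync path today.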
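
And a rough sketch of the counter idea, just to make it concrete (the
nr_stored_bytes field and the update sites below are made up for
illustration; the real patch may well look different):

	/* Hypothetical sketch only: nr_stored_bytes is not an existing field. */
	struct zswap_lruvec_state {
		atomic_long_t nr_zswap_protected;
		/* bytes of compressed data on this lruvec's zswap LRU */
		atomic_long_t nr_stored_bytes;
	};

	/*
	 * Updated wherever entries enter/leave the LRU, e.g.:
	 *   atomic_long_add(entry->length, &state->nr_stored_bytes);
	 *   atomic_long_sub(entry->length, &state->nr_stored_bytes);
	 */

Then zswap_shrinker_count() could drop the flush entirely:

	struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg,
						  NODE_DATA(sc->nid));
	unsigned long nr_backing, nr_freeable;

	nr_freeable = list_lru_shrink_count(&zswap_list_lru, sc);
	nr_backing = atomic_long_read(
			&lruvec->zswap_lruvec_state.nr_stored_bytes)
			>> PAGE_SHIFT;
	/* compression factor ~= nr_backing / nr_freeable (in pages) */

That would also fix the subtree-vs-lruvec mismatch you mentioned, since
the counter is per (memcg, node) by construction.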