On Sat, Oct 26, 2024 at 6:20 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
>
> From: Barry Song <v-songbaohua@xxxxxxxx>
>
> When the proportion of folios from the zero map is small, missing their
> accounting may not significantly impact profiling. However, it's easy
> to construct a scenario where this becomes an issue, for example:
> allocating 1 GB of memory, writing zeros from userspace, followed by
> MADV_PAGEOUT, and then swapping it back in. In this case, the swap-out
> and swap-in counts seem to vanish into a black hole, potentially
> causing semantic ambiguity.

I agree. It also makes developing around this area more challenging.
I'm working on the swap abstraction, and sometimes I can't tell whether
I screwed up somewhere or whether a proportion of these allocated
entries simply went to this optimization...

Thanks for taking a stab at fixing this, Barry!

>
> We have two ways to address this:
>
> 1. Add a separate counter specifically for the zero map.
> 2. Continue using the current accounting, treating the zero map like
>    a normal backend. (This aligns with the current behavior of zRAM
>    when supporting same-page fills at the device level.)

Hmm, my understanding of the pswpout/pswpin counters is that they only
apply to I/O done directly to the backend device, no? That's why we
have a separate set of counters for zswap, and do not count zswap
stores/loads towards pswp(in|out).

For users who have swap files on physical disks, the performance
difference between reading directly from the swapfile and going through
these optimizations could be really large. I think it makes sense to
have a separate set of counters for zero-mapped pages (ideally, both at
the host level and at the cgroup level?).
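
To illustrate what I mean by option 1, here is a very rough sketch
(the SWPOUT_ZERO/SWPIN_ZERO names, the helper names, and the exact hook
points are placeholders I made up, not taken from your patch): bump a
dedicated pair of counters in the zeromap write/read paths in
mm/page_io.c, mirroring how PSWPOUT/PSWPIN and the zswap counters are
bumped today:

#include <linux/mm.h>
#include <linux/vmstat.h>
#include <linux/memcontrol.h>

/*
 * Hypothetical helpers for the zeromap swap-out/swap-in paths.
 * SWPOUT_ZERO/SWPIN_ZERO would be new vm_event_item entries (plus the
 * matching name strings in mm/vmstat.c).
 */
static void count_zeromap_swpout(struct folio *folio)
{
        long nr = folio_nr_pages(folio);

        /* host-level counter, the zeromap analogue of PSWPOUT */
        count_vm_events(SWPOUT_ZERO, nr);
        /* per-cgroup counter so cgroups can see it as well */
        count_memcg_folio_events(folio, SWPOUT_ZERO, nr);
}

static void count_zeromap_swpin(struct folio *folio)
{
        long nr = folio_nr_pages(folio);

        count_vm_events(SWPIN_ZERO, nr);
        count_memcg_folio_events(folio, SWPIN_ZERO, nr);
}

The cgroup half would presumably also want matching entries exposed in
memory.stat, analogous to how zswpin/zswpout are reported today, but I
haven't thought that part through.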