On Wed, Mar 01, 2023 at 12:54:56PM +0900, Sergey Senozhatsky wrote:
> On (23/02/28 14:20), Minchan Kim wrote:
> > On Sun, Feb 26, 2023 at 12:55:45PM +0900, Sergey Senozhatsky wrote:
> > > On (23/02/23 15:51), Minchan Kim wrote:
> > > > On Thu, Feb 23, 2023 at 12:04:50PM +0900, Sergey Senozhatsky wrote:
> > > > > Extend zsmalloc zs_pool_stats with a new member that
> > > > > holds the number of objects pool compaction moved
> > > > > between pool pages.
> > > >
> > > > I totally understand this new stat would be very useful for your
> > > > development, but I'm not sure it's really useful for workload tuning
> > > > or monitoring.
> > > >
> > > > Unless we have a strong use case, I'd like to avoid a new stat.
> > >
> > > The way I see it, it *can* give some interesting additional data on
> > > periodic compaction (the one that is not triggered by the shrinker):
> > > if the number of moved objects is relatively high but the number of
> > > compacted (freed) pages is relatively low, then the system has
> > > fragmentation in small size classes (which tend to have many objects
> > > per zspage but not too many pages per zspage), and in this case the
> > > interval between periodic compactions can probably be increased.
> > > What do you think?
> >
> > In that case, how could we get only the data triggered by periodic
> > manual compaction?
>
> Something very simple like
>
> read zram mm_stat
> trigger compaction
> read zram mm_stat
>
> can work in most cases, I guess. There can be memory pressure
> and shrinkers can compact the pool concurrently, in which case
> mm_stat will include the shrinker impact, but that's probably not
> a problem. If the system is under memory pressure then user space
> in general does not have to do compaction, since the kernel will
> handle it.

Agreed.

> Just an idea. It feels like "pages compacted" on its own tells very
> little, but I don't insist on exporting that new stat.

I don't mind adding the simple metric, but I'd like to add it only once
we have a real use case, along with a handful of comments on how it is
used in the real world.

Thanks.
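
[Editor's illustration, not the actual patch: a rough sketch of the kind of
zs_pool_stats extension being discussed. The objs_moved field name and its
comment are assumptions made up for this example; only pages_compacted is
an existing member.]

/* Hypothetical extension of zs_pool_stats, for illustration only. */
struct zs_pool_stats {
	/* How many pages were migrated (freed) by compaction */
	atomic_long_t pages_compacted;
	/* Hypothetical: objects moved between zspages by compaction */
	atomic_long_t objs_moved;
};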
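[Editor's illustration: a minimal user-space sketch of the "read mm_stat,
trigger compaction, read mm_stat" sequence described above. It uses the
standard zram sysfs attributes (/sys/block/zram0/mm_stat and
/sys/block/zram0/compact); the pages_compacted column index follows
Documentation/admin-guide/blockdev/zram.rst and may differ on older
kernels, so treat this as a sketch rather than a reference tool.]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read the Nth (0-based) numeric column from /sys/block/zram0/mm_stat. */
static long long mm_stat_column(int column)
{
	char buf[256];
	long long val = -1;
	int i = 0;
	FILE *f = fopen("/sys/block/zram0/mm_stat", "r");

	if (!f || !fgets(buf, sizeof(buf), f)) {
		perror("mm_stat");
		exit(1);
	}
	fclose(f);

	for (char *tok = strtok(buf, " \n"); tok; tok = strtok(NULL, " \n"), i++) {
		if (i == column) {
			val = atoll(tok);
			break;
		}
	}
	return val;
}

/* Trigger manual compaction by writing to the zram 'compact' attribute. */
static void trigger_compaction(void)
{
	FILE *f = fopen("/sys/block/zram0/compact", "w");

	if (!f || fputs("1\n", f) == EOF) {
		perror("compact");
		exit(1);
	}
	fclose(f);
}

int main(void)
{
	/* pages_compacted is the 7th mm_stat column (0-based index 6). */
	const int PAGES_COMPACTED = 6;
	long long before, after;

	before = mm_stat_column(PAGES_COMPACTED);
	trigger_compaction();
	after = mm_stat_column(PAGES_COMPACTED);

	printf("pages compacted by this run (plus any concurrent shrinker work): %lld\n",
	       after - before);
	return 0;
}

[As noted above, a shrinker-driven compaction running concurrently would be
folded into the same delta; the argument in the thread is that this is
acceptable because user space need not compact under memory pressure.]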