Hi,

Existing zsmalloc page fullness grouping leads to suboptimal page
selection for both zs_malloc() and zs_compact(). This patchset reworks
zsmalloc fullness grouping/classification.

Additionally, it implements a new compaction algorithm that is expected
to use fewer CPU cycles (as it potentially does fewer memcpy-s in
zs_object_copy()).

Test (synthetic) results can be seen in patch 0003.

v4:
-- fixed classes stats loop bug (Yosry)
-- fixed spelling errors (Andrew)
-- dropped some unnecessary hunks from the patches

v3:
-- reworked compaction algorithm implementation (Minchan)
-- kept existing stats and fullness enums (Minchan, Yosry)
-- dropped the patch with new zsmalloc compaction stats (Minchan)
-- report class stats per inuse-ratio group

Sergey Senozhatsky (4):
  zsmalloc: remove insert_zspage() ->inuse optimization
  zsmalloc: fine-grained inuse ratio based fullness grouping
  zsmalloc: rework compaction algorithm
  zsmalloc: show per fullness group class stats

 mm/zsmalloc.c | 358 ++++++++++++++++++++++++--------------------
 1 file changed, 173 insertions(+), 185 deletions(-)

-- 
2.40.0.rc0.216.gc4246ad0f0-goog
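
As an illustrative aside for readers of the cover letter (this is not
the patchset's code): "fine-grained inuse ratio based fullness grouping"
can be pictured as binning a zspage by the integer part of its usage
percentage, rather than into a few coarse buckets such as the old
ZS_ALMOST_EMPTY/ZS_ALMOST_FULL groups. The sketch below is a minimal
standalone userspace program; the identifiers (inuse_ratio_group,
NR_GROUPS, GROUP_EMPTY, GROUP_FULL) and the per-10% bucket granularity
are assumptions of the illustration, not names from mm/zsmalloc.c.

/* fullness_sketch.c -- illustrative only, hypothetical names */
#include <stdio.h>

#define GROUP_EMPTY	0
#define GROUP_FULL	11
#define NR_GROUPS	12	/* empty + ten ratio buckets + full */

/* Bin a zspage by its used-objects ratio, one group per ~10%. */
static int inuse_ratio_group(unsigned int inuse,
			     unsigned int objs_per_zspage)
{
	if (inuse == 0)
		return GROUP_EMPTY;
	if (inuse == objs_per_zspage)
		return GROUP_FULL;
	/* groups 1..10: integer part of the usage percentage / 10 */
	return 100 * inuse / objs_per_zspage / 10 + 1;
}

int main(void)
{
	unsigned int objs_per_zspage = 32;

	for (unsigned int inuse = 0; inuse <= objs_per_zspage; inuse += 8)
		printf("inuse %2u/%u -> group %d\n", inuse,
		       objs_per_zspage,
		       inuse_ratio_group(inuse, objs_per_zspage));
	return 0;
}

With groups this fine, a compaction pass can preferentially drain pages
holding the fewest live objects, which is plausibly how the reworked
algorithm ends up doing fewer memcpy-s in zs_object_copy(); see the
patches themselves for the actual implementation.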