[PATCHv3 0/4] zsmalloc: fine-grained fullness and new compaction algorithm

Hi,

The existing zsmalloc page fullness grouping leads to suboptimal page
selection for both zs_malloc() and zs_compact(). This patchset
reworks zsmalloc's fullness grouping/classification.
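
To illustrate the general idea behind fine-grained grouping, here is a
minimal, self-contained sketch; the bucket count, function name and
numbers below are made up for illustration and are not taken from the
patches themselves:

#include <stdio.h>

/*
 * Illustrative sketch only: classify a zspage by the ratio of objects
 * in use to its object capacity, instead of a handful of coarse
 * fullness states, so that allocation and compaction can pick pages
 * by how full they actually are.
 */
#define FULLNESS_BUCKETS	10

static int zspage_fullness_bucket(unsigned int inuse,
				  unsigned int objs_per_zspage)
{
	if (inuse == 0)
		return 0;				/* empty */
	if (inuse == objs_per_zspage)
		return FULLNESS_BUCKETS - 1;		/* full */

	/* intermediate ratios map to buckets 1 .. FULLNESS_BUCKETS - 2 */
	return 1 + inuse * (FULLNESS_BUCKETS - 2) / objs_per_zspage;
}

int main(void)
{
	unsigned int objs_per_zspage = 32;

	for (unsigned int inuse = 0; inuse <= objs_per_zspage; inuse += 8)
		printf("inuse %2u/%u -> bucket %d\n", inuse, objs_per_zspage,
		       zspage_fullness_bucket(inuse, objs_per_zspage));

	return 0;
}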

Additionally, it implements a new compaction algorithm that is
expected to use fewer CPU cycles, as it potentially performs fewer
memcpy() calls in zs_object_copy().
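
As a rough illustration of why source selection matters for the copy
cost (a hypothetical sketch, not the algorithm from this series):
evacuating a source zspage takes roughly one zs_object_copy(), and
hence one or more memcpy() calls, per live object, so emptier sources
are cheaper to drain:

#include <stdio.h>

/*
 * Hypothetical sketch: the work needed to free a source zspage via
 * compaction grows with its number of live objects, so scanning
 * candidate sources from the emptiest fullness bucket upwards copies
 * the fewest objects per zspage freed.
 */
struct zspage_stub {
	const char *bucket;
	unsigned int inuse;	/* live objects that must be copied out */
};

int main(void)
{
	/* candidate source zspages, ordered from emptiest bucket up */
	struct zspage_stub candidates[] = {
		{ "inuse ratio ~10%", 3  },
		{ "inuse ratio ~50%", 16 },
		{ "inuse ratio ~90%", 29 },
	};

	for (unsigned int i = 0;
	     i < sizeof(candidates) / sizeof(candidates[0]); i++)
		printf("source from %-16s -> ~%2u object copies to free it\n",
		       candidates[i].bucket, candidates[i].inuse);

	return 0;
}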

Test (synthetic) results can be seen in patch 0003.

v3:
-- reworked compaction algorithm implementation (Minchan)
-- keep existing stats and fullness enums (Minchan, Yosry)
-- dropped the patch with new zsmalloc compaction stats (Minchan)
-- report per-class stats for each inuse ratio group

Sergey Senozhatsky (4):
  zsmalloc: remove insert_zspage() ->inuse optimization
  zsmalloc: fine-grained inuse ratio based fullness grouping
  zsmalloc: rework compaction algorithm
  zsmalloc: show per fullness group class stats

 mm/zsmalloc.c | 362 ++++++++++++++++++++++++--------------------------
 1 file changed, 175 insertions(+), 187 deletions(-)

-- 
2.40.0.rc0.216.gc4246ad0f0-goog




