zsmalloc has used bit_spin_lock to minimize space overhead, since it is a zspage-granularity lock. However, it makes zsmalloc unusable under PREEMPT_RT and adds too much complexity. This patchset replaces the bit_spin_lock with a per-pool rwlock. It also removes the unnecessary zspage isolation logic from size classes, which was the other source of excessive complexity in zsmalloc. The last patch changes get_cpu_var to local_lock to make the code work under PREEMPT_RT.

Minchan Kim (7):
  zsmalloc: introduce some helper functions
  zsmalloc: rename zs_stat_type to class_stat_type
  zsmalloc: decouple class actions from zspage works
  zsmalloc: introduce obj_allocated
  zsmalloc: move huge compressed obj from page to zspage
  zsmalloc: remove zspage isolation for migration
  zsmalloc: replace per zpage lock with pool->migrate_lock

Sebastian Andrzej Siewior (1):
  zsmalloc: replace get_cpu_var with local_lock

 mm/zsmalloc.c | 528 ++++++++++++++++++--------------------------------
 1 file changed, 188 insertions(+), 340 deletions(-)

-- 
2.34.0.rc1.387.gb447b232ab-goog