On Thu, Oct 19, 2017 at 01:03:23PM -0700, Neha Agarwal wrote:
> deferred_split_shrinker is NUMA-aware. Make it memcg-aware when
> CONFIG_MEMCG is enabled, to prevent shrinking memory of memcgs that are
> not under memory pressure. This change isolates memory pressure across
> memcgs from the deferred_split_shrinker's perspective by not prematurely
> splitting huge pages of a memcg that is not under memory pressure.
>
> Note that a pte-mapped compound huge page's charge is not moved to the
> dst memcg on task migration; see mem_cgroup_move_charge_pte_range() for
> more information. Thus mem_cgroup_move_account() does not get called on
> pte-mapped compound huge pages, and we do not need to transfer such
> pages from the source memcg's split_queue to the destination memcg's
> split_queue.
>
> Tested: Ran two copies of a microbenchmark with partially unmapped
> THPs in two separate memory cgroups. When the first memory cgroup is
> put under memory pressure, its own THPs split; the other memcg's THPs
> remain intact.
>
> The current implementation is not NUMA-aware if MEMCG is compiled in.
> If it is important to have this shrinker both NUMA- and MEMCG-aware, I
> can work on that. Some feedback on this front would be useful.

I think this should be done. It's a strange compromise -- memcg vs.
NUMA -- and I think solving it will also help a lot with the ifdefs.
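Roughly, I'd expect something like the sketch below. It is only an
illustration, not a real patch: the per-node "struct deferred_split"
wrapper, the "deferred_split" field in pglist_data and the
memcg_split_queue() helper are made up here and would have to be
introduced by the patch. Only sc->nid, sc->memcg and the shrinker flags
are existing API.

	/*
	 * Sketch only: pick the deferred split queue from the
	 * shrink_control, so the shrinker can be registered as both
	 * NUMA- and memcg-aware.
	 */
	struct deferred_split {
		spinlock_t split_queue_lock;
		struct list_head split_queue;
		unsigned long split_queue_len;
	};

	static struct deferred_split *split_queue_for(struct shrink_control *sc)
	{
		if (sc->memcg)		/* reclaim targeted at one memcg */
			return memcg_split_queue(sc->memcg, sc->nid);
		return &NODE_DATA(sc->nid)->deferred_split;	/* global reclaim */
	}

	static unsigned long deferred_split_count(struct shrinker *shrink,
						  struct shrink_control *sc)
	{
		return READ_ONCE(split_queue_for(sc)->split_queue_len);
	}

	static struct shrinker deferred_split_shrinker = {
		.count_objects	= deferred_split_count,
		.scan_objects	= deferred_split_scan,
		.seeks		= DEFAULT_SEEKS,
		.flags		= SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE,
	};

deferred_split_scan() would pick its queue the same way, so most of the
CONFIG_MEMCG ifdefs should collapse into split_queue_for().

-- 
 Kirill A. Shutemov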