The function memcg_check_events() is called to trigger possible event
notifications or soft limit updates when the page event "clock" moves
sufficiently. This tracking is not needed when neither the soft limit
nor (v1) event notifications are configured, and it can catch up with
the clock at any time once thresholds are configured. Guard this
functionality behind an unlikely static branch (soft limit and events
are presumably unused more often than used). This yields a slight but
insignificant performance gain in a page-fault specific benchmark;
overall, no performance impact is expected. The goal is to partition
the charging code according to the functionality provided to the user.

Suggested-by: Michal Hocko <mhocko@xxxxxxxx>
Signed-off-by: Michal Koutný <mkoutny@xxxxxxxx>
---
 mm/memcontrol.c | 8 ++++++++
 1 file changed, 8 insertions(+)

On Fri, Jan 14, 2022 at 10:09:35AM +0100, Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx> wrote:
> So avoiding these two also avoids memcg_check_events()?

I've made the matter explicit with the surrounding patch.

[ The performance "gain" is negligible (differences of pft [1] are
dominated by non-root memcg classification):

                          nocg, nopatch           cg, nopatch             nocg, patch             cg, patch
Hmean    faults/sec-2   273366.6312 (   0.00%)  243573.3767 * -10.90%*  273901.9709 *   0.20%*  247702.4104 *  -9.39%*
CoeffVar faults/sec-2        3.8771 (   0.00%)       3.8396 (   0.97%)       3.1400 (  19.01%)       4.1188 (  -6.24%)

                          cg, nopatch             cg, patch
Hmean    faults/sec-2   243573.3767 (   0.00%)  247702.4104 *   1.70%*
CoeffVar faults/sec-2        3.8396 (   0.00%)       4.1188 (  -7.27%)

On less targeted benchmarks it's well below noise. ]

I think it would make sense to insert this patch into your series and
subsequently reject enabling these features on PREEMPT_RT -- provided
this patch makes sense to others too -- the justification is primarily
the functionality split useful for this PREEMPT_RT effort.

> Are there plans to remove v1 or is this part of "we must not break
> userland"?

It's part of that mantra, so v1 can't simply be removed. OTOH, my
sense is that this change also fits under not extending v1 (to avoid
doubling effort on everything).

Michal

[1] https://github.com/gormanm/mmtests/blob/6bcb8b301a48386e0cc63a21f7642048a3ceaed5/configs/config-pagealloc-performance#L6

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4a7b3ebf8e48..7f64ce33d137 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -106,6 +106,8 @@ static bool do_memsw_account(void)
 #define THRESHOLDS_EVENTS_TARGET 128
 #define SOFTLIMIT_EVENTS_TARGET 1024
 
+DEFINE_STATIC_KEY_FALSE(memcg_v1_events_enabled_key);
+
 /*
  * Cgroups above their limits are maintained in a RB-Tree, independent of
  * their hierarchy representation
@@ -852,6 +854,9 @@ static bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg,
  */
 static void memcg_check_events(struct mem_cgroup *memcg, int nid)
 {
+	if (!static_branch_unlikely(&memcg_v1_events_enabled_key))
+		return;
+
 	/* threshold event is triggered in finer grain than soft limit */
 	if (unlikely(mem_cgroup_event_ratelimit(memcg,
 						MEM_CGROUP_TARGET_THRESH))) {
@@ -3757,6 +3762,7 @@ static ssize_t mem_cgroup_write(struct kernfs_open_file *of,
 		break;
 	case RES_SOFT_LIMIT:
 		memcg->soft_limit = nr_pages;
+		static_branch_enable(&memcg_v1_events_enabled_key);
 		ret = 0;
 		break;
 	}
@@ -4831,6 +4837,8 @@ static ssize_t memcg_write_event_control(struct kernfs_open_file *of,
 	list_add(&event->list, &memcg->event_list);
 	spin_unlock_irq(&memcg->event_list_lock);
 
+	static_branch_enable(&memcg_v1_events_enabled_key);
+
 	fdput(cfile);
 	fdput(efile);
 
-- 
2.34.1
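
[ For reference, a minimal sketch of the static-key pattern the patch
relies on, reduced to its essentials and independent of memcg; the
names slow_path_enabled_key, maybe_do_slow_work() and
on_feature_configured() are invented for illustration:

#include <linux/jump_label.h>

/* Key starts disabled: the guarded test compiles down to a no-op jump. */
DEFINE_STATIC_KEY_FALSE(slow_path_enabled_key);

/* Hot path: while the key is disabled, the branch body is skipped
 * without loading and testing a flag from memory. */
static void maybe_do_slow_work(void)
{
	if (!static_branch_unlikely(&slow_path_enabled_key))
		return;

	/* rarely-configured bookkeeping would go here */
}

/* Configuration path: enabling the key run-time patches every guarded
 * call site so it takes the slow path from now on. */
static void on_feature_configured(void)
{
	static_branch_enable(&slow_path_enabled_key);
}

Note that, as in the patch above, the key is only ever enabled: once a
soft limit or a v1 event is configured anywhere, memcg_check_events()
stays active for the remaining uptime. ]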