On 22.01.2019 23:09, Yang Shi wrote:
> In the current implementation, both kswapd and direct reclaim have to
> iterate all mem cgroups. It was not a problem before offline mem cgroups
> could be iterated. But, currently, with iterating offline mem cgroups,
> it could be very time consuming. In our workloads, we saw over 400K mem
> cgroups accumulated in some cases; only a few hundred are online memcgs.
> Although kswapd could help out to reduce the number of memcgs, direct
> reclaim still gets hit with iterating a number of offline memcgs in some
> cases. We experienced responsiveness problems due to this occasionally.
>
> Here we just break the iteration once it reclaims enough pages, as memcg
> direct reclaim does. This may hurt the fairness among memcgs, since
> direct reclaim may always reclaim from the same memcgs. But it sounds
> OK, since direct reclaim just tries to reclaim SWAP_CLUSTER_MAX pages,
> and memcgs can be protected by min/low.

In case we stop after SWAP_CLUSTER_MAX pages are reclaimed, the following
situation is possible: the memcgs closest to root_mem_cgroup will become
empty, and you will have to iterate over the empty memcg hierarchy for a
long time just to find a non-empty memcg.

I'd suggest we should not lose fairness. We may introduce a
mem_cgroup::last_reclaim_child parameter to save the child (or its id)
where the last reclaim was interrupted.
Then the next reclaim should start from this child:

	memcg = mem_cgroup_iter(root, find_child(root->last_reclaim_child), &reclaim);
	do {
		...
		if ((!global_reclaim(sc) || !current_is_kswapd()) &&
		    sc->nr_reclaimed >= sc->nr_to_reclaim) {
			root->last_reclaim_child = memcg->id;
			mem_cgroup_iter_break(root, memcg);
			break;
		}

Kirill

> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxx>
> Signed-off-by: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
> ---
>  mm/vmscan.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a714c4f..ced5a16 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2764,16 +2764,15 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>  				sc->nr_reclaimed - reclaimed);
>
>  			/*
> -			 * Direct reclaim and kswapd have to scan all memory
> -			 * cgroups to fulfill the overall scan target for the
> -			 * node.
> +			 * Kswapd have to scan all memory cgroups to fulfill
> +			 * the overall scan target for the node.
>  			 *
>  			 * Limit reclaim, on the other hand, only cares about
>  			 * nr_to_reclaim pages to be reclaimed and it will
>  			 * retry with decreasing priority if one round over the
>  			 * whole hierarchy is not sufficient.
>  			 */
> -			if (!global_reclaim(sc) &&
> +			if ((!global_reclaim(sc) || !current_is_kswapd()) &&
>  			    sc->nr_reclaimed >= sc->nr_to_reclaim) {
>  				mem_cgroup_iter_break(root, memcg);
>  				break;
>