On Thu, Jan 15, 2015 at 03:48:38PM +0100, Michal Hocko wrote:
> On Thu 15-01-15 16:25:16, Vladimir Davydov wrote:
> > 	memcg = mem_cgroup_iter(root, NULL, &reclaim);
> > 	do {
> > 		[...]
> > 		if (memcg && is_classzone)
> > 			shrink_slab(sc->gfp_mask, zone_to_nid(zone),
> > 				    memcg, sc->nr_scanned - scanned,
> > 				    lru_pages);
> > 
> > 		/*
> > 		 * Direct reclaim and kswapd have to scan all memory
> > 		 * cgroups to fulfill the overall scan target for the
> > 		 * zone.
> > 		 *
> > 		 * Limit reclaim, on the other hand, only cares about
> > 		 * nr_to_reclaim pages to be reclaimed and it will
> > 		 * retry with decreasing priority if one round over the
> > 		 * whole hierarchy is not sufficient.
> > 		 */
> > 		if (!global_reclaim(sc) &&
> > 		    sc->nr_reclaimed >= sc->nr_to_reclaim) {
> > 			mem_cgroup_iter_break(root, memcg);
> > 			break;
> > 		}
> > 		memcg = mem_cgroup_iter(root, memcg, &reclaim);
> > 	} while (memcg);
> > 
> > If we can ignore reclaimed slab pages here (?), let's drop this patch.
> 
> I see what you are trying to achieve but can this lead to a serious
> over-reclaim?

I think it can, but only if we shrink an inode with lots of pages
attached to its address space (those pages also count toward
reclaim_state). In that case we over-reclaim anyway, though. I agree
this is a high risk for a vague benefit, so let's drop the patch until
we see this problem in real life.

Thanks,
Vladimir
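
As a side note on the reclaim_state point above, here is a minimal
userspace toy model (not kernel code: the struct names only mirror
scan_control and reclaim_state, and the helper, field layout and
numbers are invented for illustration) of why evicting a single inode
with a large page cache can overshoot nr_to_reclaim in one round,
regardless of where the break check sits:

    /*
     * Toy model of the accounting discussed above: if shrinking an
     * inode also frees the page cache pages attached to its address
     * space, those pages get counted via reclaim_state as well, so
     * nr_reclaimed can jump far past nr_to_reclaim in one iteration.
     * The structures here are simplified stand-ins, not the kernel's.
     */
    #include <stdio.h>

    struct scan_control { unsigned long nr_to_reclaim, nr_reclaimed; };
    struct reclaim_state { unsigned long reclaimed_slab; };

    /* Pretend the inode cache shrinker evicted one inode whose address
     * space still held `pages` page cache pages: the slab object itself
     * is tiny, the pages freed along with it dominate the count. */
    static void shrink_slab_model(struct reclaim_state *rs,
                                  unsigned long pages)
    {
            rs->reclaimed_slab += 1 + pages;  /* 1 slab object + pages */
    }

    int main(void)
    {
            struct scan_control sc = { .nr_to_reclaim = 32 };
            struct reclaim_state rs = { 0 };

            /* One round of the memcg loop: slab shrinking hits a
             * streaming inode with 10000 pages in its page cache. */
            shrink_slab_model(&rs, 10000);

            /* Fold reclaim_state back into the scan_control, roughly
             * the way global reclaim credits reclaimed_slab to
             * sc->nr_reclaimed. */
            sc.nr_reclaimed += rs.reclaimed_slab;
            rs.reclaimed_slab = 0;

            printf("nr_to_reclaim=%lu nr_reclaimed=%lu overshoot=%lu\n",
                   sc.nr_to_reclaim, sc.nr_reclaimed,
                   sc.nr_reclaimed - sc.nr_to_reclaim);
            return 0;
    }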