On Tue, Jul 31, 2012 at 9:07 AM, Rik van Riel <riel@xxxxxxxxxx> wrote:
> On 07/31/2012 11:59 AM, Michal Hocko wrote:
>
>>> @@ -1899,6 +1907,11 @@ static void shrink_zone(struct zone *zone, struct
>>> scan_control *sc)
>>>                 }
>>>                 memcg = mem_cgroup_iter(root, memcg, &reclaim);
>>>         } while (memcg);
>>> +
>>> +       if (!over_softlimit) {
>>
>> Is this ever false? At least the root cgroup is always above the limit.
>> Shouldn't we rather compare reclaimed pages?
>
> Uh oh.
>
> That could also result in us always reclaiming from the root cgroup
> first...

That is not true as far as I can tell. The mem_cgroup_reclaim_cookie
remembers the last scanned memcg under the given priority in
iter->position, and the next round simply starts at iter->position + 1.
That cookie is shared between reclaim threads, so the starting point
varies depending on how many threads have entered reclaim. That said, it
is true that with a single reclaiming thread we always start from root
and break when reaching the end of the list.

> Is that really what we want?

I don't see my patch changing that part. The only difference is that I
might end up scanning the same memcg list with the same priority twice.

> Having said that, in April I discussed an algorithm of LRU list
> weighting with Ying and others that should work. Ying's patches
> look like a good basis to implement that on top of...

Yes.

--Ying

> --
> All rights reversed
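
For reference, the shared-cookie behaviour described above can be modelled
in a few lines of userspace C. This is only a sketch under assumptions, not
kernel code: iter_position, memcg_iter_next() and reclaim_pass() are
made-up names standing in for iter->position and the mem_cgroup_iter()
round-robin, which the kernel actually keeps per zone and per priority.

/*
 * Userspace model of the shared reclaim cookie discussed above.
 * The memcg hierarchy is flattened to indices 0..NR_MEMCGS-1
 * (0 standing in for root), and iter_position stands in for the
 * cached iter->position.
 */
#include <stdio.h>

#define NR_MEMCGS 4

/* Shared by all reclaimers, like the cached iter->position. */
static int iter_position = -1;          /* -1: nothing scanned yet */

/* Advance the shared cookie and return the next memcg to scan. */
static int memcg_iter_next(void)
{
        iter_position = (iter_position + 1) % NR_MEMCGS;
        return iter_position;
}

/*
 * One reclaim pass.  nr_to_scan < NR_MEMCGS models breaking out
 * early because enough pages were reclaimed.
 */
static void reclaim_pass(const char *who, int nr_to_scan)
{
        int i;

        for (i = 0; i < nr_to_scan; i++)
                printf("%s scans memcg %d\n", who, memcg_iter_next());
}

int main(void)
{
        /* Two partial passes: the second continues after the cookie. */
        reclaim_pass("pass 1", 2);              /* scans 0, 1 */
        reclaim_pass("pass 2", 2);              /* scans 2, 3 */

        /*
         * A full-list pass wraps the cookie, so a lone reclaimer that
         * always walks the whole list does restart at root each time,
         * as noted in the mail.
         */
        reclaim_pass("lone pass", NR_MEMCGS);   /* scans 0, 1, 2, 3 */
        return 0;
}

Compiled and run, the two partial passes scan memcgs 0-1 and 2-3
respectively, while the full-list pass wraps back to 0, matching the
single-reclaimer case described above.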