On Thu, Apr 05, 2018 at 07:59:20PM +0100, Roman Gushchin wrote:
> If memcg's usage is equal to the memory.low value, avoid reclaiming
> from this cgroup while there is a surplus of reclaimable memory.
>
> This sounds more logical and also matches memory.high and memory.max
> behavior: both are inclusive.

I was trying to figure out why we did it this way in the first place
and found this patch:

commit 4e54dede38b45052a941bcf709f7d29f2e18174d
Author: Michal Hocko <mhocko@xxxxxxx>
Date:   Fri Feb 27 15:51:46 2015 -0800

    memcg: fix low limit calculation

    A memcg is considered low limited even when the current usage is equal
    to the low limit.  This leads to interesting side effects e.g.
    groups/hierarchies with no memory accounted are considered protected
    and so the reclaim will emit MEMCG_LOW event when encountering them.

    Another and much bigger issue was reported by Joonsoo Kim.  He has hit
    a NULL ptr dereference with the legacy cgroup API which even doesn't
    have low limit exposed.  The limit is 0 by default but the initial
    check fails for memcg with 0 consumption and parent_mem_cgroup() would
    return NULL if use_hierarchy is 0 and so page_counter_read would try
    to dereference NULL.

    I suppose that the current implementation is just an overlook because
    the documentation in Documentation/cgroups/unified-hierarchy.txt says:

      "The memory.low boundary on the other hand is a top-down allocated
      reserve.  A cgroup enjoys reclaim protection when it and all its
      ancestors are below their low boundaries"

    Fix the usage and the low limit comparision in mem_cgroup_low
    accordingly.

> @@ -5709,7 +5709,7 @@ bool mem_cgroup_low(struct mem_cgroup *root, struct mem_cgroup *memcg)
>  		elow = min(elow, parent_elow * low_usage / siblings_low_usage);
>  exit:
>  	memcg->memory.elow = elow;
> -	return usage < elow;
> +	return usage <= elow;

So I think this needs to be

	usage && usage <= elow

to not emit MEMCG_LOW events in case usage == elow == 0.
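
As a side note, here is a minimal userspace sketch (not kernel code; the
function name and the plain unsigned long parameters are made up for
illustration) of why the extra "usage &&" guard matters: with a bare
"usage <= elow", an empty cgroup where usage == elow == 0 would be
reported as protected and reclaim would emit MEMCG_LOW for it.

#include <stdbool.h>
#include <stdio.h>

/*
 * Models only the final comparison of mem_cgroup_low(); in the kernel,
 * usage comes from page_counter_read() and elow is the effective low
 * limit computed just above the return statement in the quoted hunk.
 */
static bool memcg_low_protected(unsigned long usage, unsigned long elow)
{
	/* Inclusive boundary, but never true for an empty cgroup. */
	return usage && usage <= elow;
}

int main(void)
{
	/* usage == elow == 0: empty cgroup, must not count as protected. */
	printf("%d\n", memcg_low_protected(0, 0));	/* prints 0 */
	/* usage == elow != 0: protected, matching the inclusive semantics. */
	printf("%d\n", memcg_low_protected(100, 100));	/* prints 1 */
	/* usage > elow: not protected, eligible for reclaim. */
	printf("%d\n", memcg_low_protected(200, 100));	/* prints 0 */
	return 0;
}

With the plain "usage <= elow" check, the first case would return true
and every empty cgroup encountered by reclaim would generate a spurious
MEMCG_LOW event, which is exactly the side effect commit 4e54dede was
fixing.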