On Thu, Nov 12, 2015 at 06:41:33PM -0500, Johannes Weiner wrote:
> Let the networking stack know when a memcg is under reclaim pressure
> so that it can clamp its transmit windows accordingly.
>
> Whenever the reclaim efficiency of a cgroup's LRU lists drops low
> enough for a MEDIUM or HIGH vmpressure event to occur, assert a
> pressure state in the socket and tcp memory code that tells it to curb
> consumption growth from sockets associated with said control group.
>
> vmpressure events are naturally edge triggered, so for hysteresis
> assert socket pressure for a second to allow for subsequent vmpressure
> events to occur before letting the socket code return to normal.

AFAICS, in contrast to v1, you no longer modify vmpressure behavior,
which means socket_pressure will only be set when a cgroup hits its
high/hard limit. On a tightly packed system, this is unlikely IMO -
cgroups will mostly experience pressure due to memory shortage at the
global level and/or their low limit configuration, in which case no
vmpressure events will be triggered and therefore the tcp window won't
be clamped accordingly.

Maybe we could use a per-memcg slab shrinker to detect memory pressure?
This looks like abusing the shrinkers API though.

Thanks,
Vladimir