On 11/16/2012 09:07 AM, Kamezawa Hiroyuki wrote:
> (2012/11/15 22:47), Glauber Costa wrote:
>> On 11/15/2012 01:41 PM, Kamezawa Hiroyuki wrote:
>>> (2012/11/15 11:54), Glauber Costa wrote:
>>>> The idea is to synchronously do it, leaving it up to the shrinking
>>>> facilities in vmscan.c and/or others. Not actively retrying shrinking
>>>> may leave the caches alive for more time, but it will remove the ugly
>>>> wakeups. One would argue that if the caches have free objects but are
>>>> not being shrunk, it is because we don't need that memory yet.
>>>>
>>>> Signed-off-by: Glauber Costa <glommer@xxxxxxxxxxxxx>
>>>> CC: Michal Hocko <mhocko@xxxxxxx>
>>>> CC: Kamezawa Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
>>>> CC: Johannes Weiner <hannes@xxxxxxxxxxx>
>>>> CC: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>>>
>>> I agree with this patch, but can we have a way to see the amount of
>>> unaccounted zombie cache usage, for debugging?
>>>
>>> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
>>>
>> Any particular interface in mind?
>>
>
> Hmm, it's a debug interface, and having a cgroup file may be bad...
> If it can be seen in bytes or some such, /proc/vmstat?
>
> out_of_track_slabs xxxxxxx. hm?
>

Since this is a debug interface, I think it is also useful to have an
indication of which caches are still in place. The cache itself is the
best indication we have of the specific workload that may be keeping it
in memory.

I first thought debugfs could help us probe useful information out of
it, but given all the abuse people have inflicted on debugfs... maybe we
could have a file in the root memcg with that information for all
removed memcgs?

If we do that, we can go further and list the memcgs that are pending
due to memsw as well.

memory.dangling_memcgs ?
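
To make the idea more concrete, here is a rough, untested sketch of what
the read side of such a file could look like. Everything below is made up
for illustration: the dangling_memcgs list, dangling_lock, and the
dead_list member would all have to be maintained when a memcg is
destroyed, and memcg->kmem assumes the kmem res_counter from this series.

/*
 * Untested sketch, not a patch.  Removed-but-still-pinned memcgs are
 * assumed to be chained on a global list at destruction time.
 */
static LIST_HEAD(dangling_memcgs);
static DEFINE_MUTEX(dangling_lock);

static int mem_cgroup_dangling_read(struct cgroup *cont, struct cftype *cft,
				    struct seq_file *m)
{
	struct mem_cgroup *memcg;

	mutex_lock(&dangling_lock);
	list_for_each_entry(memcg, &dangling_memcgs, dead_list) {
		/* show what is pinning it: kmem caches and/or memsw charges */
		seq_printf(m, "%p: kmem %llu bytes, memsw %llu bytes\n",
			   memcg,
			   res_counter_read_u64(&memcg->kmem, RES_USAGE),
			   res_counter_read_u64(&memcg->memsw, RES_USAGE));
		/*
		 * We could also walk the memcg's kmem caches here and print
		 * their names, so the pinning workload can be identified.
		 */
	}
	mutex_unlock(&dangling_lock);

	return 0;
}

This would then be wired up as the read handler of a root-only
memory.dangling_memcgs file.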