On Wed, 27 Apr 2011 10:33:43 -0700 Ying Han <yinghan@xxxxxxxxxx> wrote:

> On Wed, Apr 27, 2011 at 12:51 AM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> > I changed the logic a little and added a filter for skipping nodes.
> > On a large NUMA system, tasks may run under cpuset or mempolicy, and
> > memory usage can be unbalanced across nodes. So, I think a filter is
> > required.
>
> Thank you.
>
> >
> > ==
> > Now, a memory cgroup's direct reclaim frees memory from the current node.
> > But this has some problems. Usually, when a set of threads works
> > cooperatively, they tend to run on the same node. So, if they hit the
> > memcg limit, reclaim takes memory from the node they run on, which may
> > be their active working set.
> >
> > For example, assume a 2-node system with Node 0 and Node 1,
> > and a memcg with a 1G limit. After some work, file cache remains and
> > the usages are:
> >   Node 0:   1M
> >   Node 1: 998M
> >
> > Now run an application on Node 0: it will eat its own working set
> > before the unnecessary file cache is freed.
> >
> > This patch adds round-robin node selection for NUMA and applies equal
> > pressure to each node. When cpuset's memory spread feature is used,
> > this works very well.
> >
> >
> > From: Ying Han <yinghan@xxxxxxxxxx>
> > Signed-off-by: Ying Han <yinghan@xxxxxxxxxx>
> > Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> >
> > Changelog v1->v2:
> >  - fixed comments.
> >  - added logic to avoid scanning unused nodes.
> >
> > ---
> >  include/linux/memcontrol.h |    1
> >  mm/memcontrol.c            |   98 ++++++++++++++++++++++++++++++++++++++++++---
> >  mm/vmscan.c                |    9 +++-
> >  3 files changed, 101 insertions(+), 7 deletions(-)
> >
> > Index: memcg/include/linux/memcontrol.h
> > ===================================================================
> > --- memcg.orig/include/linux/memcontrol.h
> > +++ memcg/include/linux/memcontrol.h
> > @@ -108,6 +108,7 @@ extern void mem_cgroup_end_migration(str
> >   */
> >  int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg);
> >  int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg);
> > +int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
> >  unsigned long mem_cgroup_zone_nr_pages(struct mem_cgroup *memcg,
> >                                         struct zone *zone,
> >                                         enum lru_list lru);
> > Index: memcg/mm/memcontrol.c
> > ===================================================================
> > --- memcg.orig/mm/memcontrol.c
> > +++ memcg/mm/memcontrol.c
> > @@ -237,6 +237,11 @@ struct mem_cgroup {
> >      * reclaimed from.
> >      */
> >     int last_scanned_child;
> > +   int last_scanned_node;
> > +#if MAX_NUMNODES > 1
> > +   nodemask_t      scan_nodes;
> > +   unsigned long   next_scan_node_update;
> > +#endif
> >     /*
> >      * Should the accounting and control be hierarchical, per subtree?
> >      */
> > @@ -650,18 +655,27 @@ static void mem_cgroup_soft_scan(struct
> >     this_cpu_add(mem->stat->events[MEM_CGROUP_EVENTS_SOFT_SCAN], val);
> >  }
> >
> > +static unsigned long
> > +mem_cgroup_get_zonestat_node(struct mem_cgroup *mem, int nid, enum lru_list idx)
> > +{
> > +   struct mem_cgroup_per_zone *mz;
> > +   u64 total = 0;
> > +   int zid;
> > +
> > +   for (zid = 0; zid < MAX_NR_ZONES; zid++) {
> > +           mz = mem_cgroup_zoneinfo(mem, nid, zid);
> > +           total += MEM_CGROUP_ZSTAT(mz, idx);
> > +   }
> > +   return total;
> > +}
> >  static unsigned long mem_cgroup_get_local_zonestat(struct mem_cgroup *mem,
> >                                     enum lru_list idx)
> >  {
> > -   int nid, zid;
> > -   struct mem_cgroup_per_zone *mz;
> > +   int nid;
> >     u64 total = 0;
> >
> >     for_each_online_node(nid)
> > -           for (zid = 0; zid < MAX_NR_ZONES; zid++) {
> > -                   mz = mem_cgroup_zoneinfo(mem, nid, zid);
> > -                   total += MEM_CGROUP_ZSTAT(mz, idx);
> > -           }
> > +           total += mem_cgroup_get_zonestat_node(mem, nid, idx);
> >     return total;
> >  }
> >
> > @@ -1471,6 +1485,77 @@ mem_cgroup_select_victim(struct mem_cgro
> >     return ret;
> >  }
> >
> > +#if MAX_NUMNODES > 1
> > +
> > +/*
> > + * Updating the nodemask on every call is too expensive. Even if the mask
> > + * is empty or stale here, we can still start from some node and traverse
> > + * all nodes based on the zonelist. So, update the mask loosely, once
> > + * every 10 seconds.
> > + */
> > +static void mem_cgroup_may_update_nodemask(struct mem_cgroup *mem)
> > +{
> > +   int nid;
> > +
> > +   if (time_after(mem->next_scan_node_update, jiffies))
> > +           return;
> > +
> > +   mem->next_scan_node_update = jiffies + 10*HZ;
> > +   /* build a nodemask of the nodes this memcg uses memory from */
> > +   mem->scan_nodes = node_states[N_HIGH_MEMORY];
> > +
> > +   for_each_node_mask(nid, node_states[N_HIGH_MEMORY]) {
> > +
> > +           if (mem_cgroup_get_zonestat_node(mem, nid, LRU_INACTIVE_FILE) ||
> > +               mem_cgroup_get_zonestat_node(mem, nid, LRU_ACTIVE_FILE))
> > +                   continue;
> > +
> > +           if (total_swap_pages &&
> > +               (mem_cgroup_get_zonestat_node(mem, nid, LRU_INACTIVE_ANON) ||
> > +                mem_cgroup_get_zonestat_node(mem, nid, LRU_ACTIVE_ANON)))
> > +                   continue;
> > +           node_clear(nid, mem->scan_nodes);
> > +   }
> > +
> > +}
> > +
> > +/*
> > + * Select a node to start reclaim from. Because all we need is to reduce
> > + * the usage counter, starting from anywhere is OK. Reclaiming from the
> > + * current node has pros and cons.
> > + *
> > + * Freeing memory from the current node means freeing memory from a node
> > + * which we will use or have used, so it may hurt the LRU. And if several
> > + * threads hit their limits at once, they will all contend on one node.
> > + * But freeing from a remote node costs more during reclaim because of
> > + * memory latency.
> > + *
> > + * For now, we use round-robin. A better algorithm is welcome.
> > + */
> > +int mem_cgroup_select_victim_node(struct mem_cgroup *mem)
> > +{
> > +   int node;
> > +
> > +   mem_cgroup_may_update_nodemask(mem);
> > +   node = mem->last_scanned_node;
> > +
> > +   node = next_node(node, mem->scan_nodes);
> > +   if (node == MAX_NUMNODES) {
> > +           node = first_node(mem->scan_nodes);
> > +           if (unlikely(node == MAX_NUMNODES))
> > +                   node = numa_node_id();
>
> I'm not sure about this logic: is it possible that we reclaim from a node
> whose pages are all "unreclaimable" (based on the
> mem_cgroup_may_update_nodemask check)?
> If I missed anything here, it would be helpful to add a comment.

What I'm afraid of here is that when a user runs a very small memcg, all
pages on the LRU may be isolated, all usage may sit in memcg's per-cpu
caches, or, because of task migration between memcgs, the memcg may hit
its limit before having any pages on the LRU at all... I think there are
possible corner cases which can cause a hang.

OK, I will add a comment.

Thanks,
-Kame
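
For readers following the thread, below is a minimal userspace sketch of
the mechanism in the patch: the loosely throttled scan_nodes refresh plus
the round-robin pick with the empty-mask fallback Ying Han asked about.
This is not the kernel code. MAX_NUMNODES, the plain-unsigned bitmask,
the time()-based throttle, and every *_sim helper are stand-ins for the
kernel's nodemask_t, jiffies, next_node()/first_node(), and
numa_node_id(); the per-node usage array stands in for the LRU counts
that mem_cgroup_may_update_nodemask() actually checks.

#include <stdio.h>
#include <time.h>

#define MAX_NUMNODES 4

struct memcg_sim {
	int last_scanned_node;
	unsigned int scan_nodes; /* bit i set => node i has reclaimable pages */
	time_t next_update;      /* stand-in for next_scan_node_update (jiffies) */
};

/* like next_node(): first set bit above 'node', or MAX_NUMNODES if none */
static int next_node_sim(int node, unsigned int mask)
{
	for (node = node + 1; node < MAX_NUMNODES; node++)
		if (mask & (1u << node))
			return node;
	return MAX_NUMNODES;
}

/* like first_node(): first set bit, or MAX_NUMNODES if the mask is empty */
static int first_node_sim(unsigned int mask)
{
	return next_node_sim(-1, mask);
}

/* stand-in for numa_node_id(): pretend the caller always runs on node 0 */
static int numa_node_id_sim(void)
{
	return 0;
}

/* loose, throttled refresh of scan_nodes, mirroring the 10-second rule */
static void may_update_nodemask_sim(struct memcg_sim *mem,
				    const unsigned long usage[MAX_NUMNODES])
{
	time_t now = time(NULL);

	if (mem->next_update > now)  /* refreshed recently; a stale mask is fine */
		return;
	mem->next_update = now + 10; /* like jiffies + 10*HZ */

	mem->scan_nodes = 0;
	for (int nid = 0; nid < MAX_NUMNODES; nid++)
		if (usage[nid])      /* the patch checks file and anon LRU counts */
			mem->scan_nodes |= 1u << nid;
}

static int select_victim_node_sim(struct memcg_sim *mem,
				  const unsigned long usage[MAX_NUMNODES])
{
	int node;

	may_update_nodemask_sim(mem, usage);
	node = next_node_sim(mem->last_scanned_node, mem->scan_nodes);
	if (node == MAX_NUMNODES) {
		/* wrap around to the first node that has usage */
		node = first_node_sim(mem->scan_nodes);
		/*
		 * Empty mask: the fallback discussed above. We may pick a
		 * node with nothing reclaimable, but we never return an
		 * out-of-range node id.
		 */
		if (node == MAX_NUMNODES)
			node = numa_node_id_sim();
	}
	mem->last_scanned_node = node;
	return node;
}

int main(void)
{
	/* only nodes 1 and 3 hold this memcg's pages: expect 1,3,1,3,1 */
	unsigned long usage[MAX_NUMNODES] = { 0, 500, 0, 200 };
	struct memcg_sim mem = { .last_scanned_node = -1 };

	for (int i = 0; i < 5; i++)
		printf("victim node: %d\n", select_victim_node_sim(&mem, usage));

	/* a memcg with no pages anywhere falls back to the local node */
	unsigned long empty[MAX_NUMNODES] = { 0 };
	struct memcg_sim idle = { .last_scanned_node = -1 };
	printf("fallback: %d\n", select_victim_node_sim(&idle, empty));
	return 0;
}

Built with a C99 compiler, this should print victim nodes 1,3,1,3,1 and
then 0 for the fallback case, showing how round-robin skips nodes the
memcg has no pages on while the empty-mask path still yields a valid node.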