On Mon, 25 Apr 2011 21:59:06 -0700 Ying Han <yinghan@xxxxxxxxxx> wrote:

> On Mon, Apr 25, 2011 at 2:36 AM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> > The following patch will change the logic. This is the core.
> > ==
> > This is the main loop of per-memcg background reclaim, implemented in
> > balance_mem_cgroup_pgdat().
> >
> > The function performs a priority loop similar to global reclaim. During each
> > iteration it frees memory from a selected victim node.
> > After reclaiming enough pages or scanning enough pages, it returns and picks
> > the next work in round-robin order.
> >
> > changelog v8b..v7
> > 1. reworked to use a workqueue rather than threads.
> > 2. changed the shrink_mem_cgroup algorithm to fit the workqueue. In short,
> >    avoid long-running work, allow quick round-robin, and avoid unnecessary
> >    writepage. When a thread dirties pages continuously, writing them back
> >    via the flusher is far faster than writeback from background reclaim.
> >    This detail will be fixed when dirty_ratio is implemented. The logic
> >    around this will be revisited in a following patch.
> >
> > Signed-off-by: Ying Han <yinghan@xxxxxxxxxx>
> > Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> > ---
> >  include/linux/memcontrol.h |   11 ++++
> >  mm/memcontrol.c            |   44 ++++++++++++++---
> >  mm/vmscan.c                |  115 +++++++++++++++++++++++++++++++++++++++++++++
> >  3 files changed, 162 insertions(+), 8 deletions(-)
> >
> > Index: memcg/include/linux/memcontrol.h
> > ===================================================================
> > --- memcg.orig/include/linux/memcontrol.h
> > +++ memcg/include/linux/memcontrol.h
> > @@ -89,6 +89,8 @@ extern int mem_cgroup_last_scanned_node(
> >  extern int mem_cgroup_select_victim_node(struct mem_cgroup *mem,
> > 					const nodemask_t *nodes);
> >
> > +unsigned long shrink_mem_cgroup(struct mem_cgroup *mem);
> > +
> >  static inline
> >  int mm_match_cgroup(const struct mm_struct *mm, const struct mem_cgroup *cgroup)
> >  {
> > @@ -112,6 +114,9 @@ extern void mem_cgroup_end_migration(str
> >   */
> >  int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg);
> >  int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg);
> > +unsigned int mem_cgroup_swappiness(struct mem_cgroup *memcg);
> > +unsigned long mem_cgroup_zone_reclaimable_pages(struct mem_cgroup *memcg,
> > +				int nid, int zone_idx);
> >  unsigned long mem_cgroup_zone_nr_pages(struct mem_cgroup *memcg,
> > 					struct zone *zone,
> > 					enum lru_list lru);
> > @@ -310,6 +315,12 @@ mem_cgroup_inactive_file_is_low(struct m
> >  }
> >
> >  static inline unsigned long
> > +mem_cgroup_zone_reclaimable_pages(struct mem_cgroup *memcg, int nid, int zone_idx)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline unsigned long
> >  mem_cgroup_zone_nr_pages(struct mem_cgroup *memcg, struct zone *zone,
> > 			enum lru_list lru)
> >  {
> > Index: memcg/mm/memcontrol.c
> > ===================================================================
> > --- memcg.orig/mm/memcontrol.c
> > +++ memcg/mm/memcontrol.c
> > @@ -1166,6 +1166,23 @@ int mem_cgroup_inactive_file_is_low(stru
> >  	return (active > inactive);
> >  }
> >
> > +unsigned long mem_cgroup_zone_reclaimable_pages(struct mem_cgroup *memcg,
> > +						int nid, int zone_idx)
> > +{
> > +	int nr;
> > +	struct mem_cgroup_per_zone *mz =
> > +		mem_cgroup_zoneinfo(memcg, nid, zone_idx);
> > +
> > +	nr = MEM_CGROUP_ZSTAT(mz, NR_ACTIVE_FILE) +
> > +	      MEM_CGROUP_ZSTAT(mz, NR_INACTIVE_FILE);
> > +
> > +	if (nr_swap_pages > 0)
> > +		nr += MEM_CGROUP_ZSTAT(mz, NR_ACTIVE_ANON) +
> > +		      MEM_CGROUP_ZSTAT(mz, NR_INACTIVE_ANON);
> > +
> > +	return nr;
> > +}
> > +
> >  unsigned long mem_cgroup_zone_nr_pages(struct mem_cgroup *memcg,
> > 					struct zone *zone,
> > 					enum lru_list lru)
> > @@ -1286,7 +1303,7 @@ static unsigned long mem_cgroup_margin(s
> >  	return margin >> PAGE_SHIFT;
> >  }
> >
> > -static unsigned int get_swappiness(struct mem_cgroup *memcg)
> > +unsigned int mem_cgroup_swappiness(struct mem_cgroup *memcg)
> >  {
> >  	struct cgroup *cgrp = memcg->css.cgroup;
> >
> > @@ -1595,14 +1612,15 @@ static int mem_cgroup_hierarchical_recla
> >  		/* we use swappiness of local cgroup */
> >  		if (check_soft) {
> >  			ret = mem_cgroup_shrink_node_zone(victim, gfp_mask,
> > -				noswap, get_swappiness(victim), zone,
> > +				noswap, mem_cgroup_swappiness(victim), zone,
> >  				&nr_scanned);
> >  			*total_scanned += nr_scanned;
> >  			mem_cgroup_soft_steal(victim, ret);
> >  			mem_cgroup_soft_scan(victim, nr_scanned);
> >  		} else
> >  			ret = try_to_free_mem_cgroup_pages(victim, gfp_mask,
> > -						noswap, get_swappiness(victim));
> > +						noswap,
> > +						mem_cgroup_swappiness(victim));
> >  		css_put(&victim->css);
> >  		/*
> >  		 * At shrinking usage, we can't check we should stop here or
> > @@ -1628,15 +1646,25 @@ static int mem_cgroup_hierarchical_recla
> >  int
> >  mem_cgroup_select_victim_node(struct mem_cgroup *mem, const nodemask_t *nodes)
> >  {
> > -	int next_nid;
> > +	int next_nid, i;
> >  	int last_scanned;
> >
> >  	last_scanned = mem->last_scanned_node;
> > -	next_nid = next_node(last_scanned, *nodes);
> > +	next_nid = last_scanned;
> > +rescan:
> > +	next_nid = next_node(next_nid, *nodes);
> >
> >  	if (next_nid == MAX_NUMNODES)
> >  		next_nid = first_node(*nodes);
> >
> > +	/* If no page on this node, skip */
> > +	for (i = 0; i < MAX_NR_ZONES; i++)
> > +		if (mem_cgroup_zone_reclaimable_pages(mem, next_nid, i))
> > +			break;
> > +
> > +	if (next_nid != last_scanned && (i == MAX_NR_ZONES))
> > +		goto rescan;
> > +
> >  	mem->last_scanned_node = next_nid;
> >
> >  	return next_nid;
> > @@ -3649,7 +3677,7 @@ try_to_free:
> >  			goto out;
> >  		}
> >  		progress = try_to_free_mem_cgroup_pages(mem, GFP_KERNEL,
> > -						false, get_swappiness(mem));
> > +					false, mem_cgroup_swappiness(mem));
> >  		if (!progress) {
> >  			nr_retries--;
> >  			/* maybe some writeback is necessary */
> > @@ -4073,7 +4101,7 @@ static u64 mem_cgroup_swappiness_read(st
> >  {
> >  	struct mem_cgroup *memcg = mem_cgroup_from_cont(cgrp);
> >
> > -	return get_swappiness(memcg);
> > +	return mem_cgroup_swappiness(memcg);
> >  }
> >
> >  static int mem_cgroup_swappiness_write(struct cgroup *cgrp, struct cftype *cft,
> > @@ -4849,7 +4877,7 @@ mem_cgroup_create(struct cgroup_subsys *
> >  	INIT_LIST_HEAD(&mem->oom_notify);
> >
> >  	if (parent)
> > -		mem->swappiness = get_swappiness(parent);
> > +		mem->swappiness = mem_cgroup_swappiness(parent);
> >  	atomic_set(&mem->refcnt, 1);
> >  	mem->move_charge_at_immigrate = 0;
> >  	mutex_init(&mem->thresholds_lock);
> > Index: memcg/mm/vmscan.c
> > ===================================================================
> > --- memcg.orig/mm/vmscan.c
> > +++ memcg/mm/vmscan.c
> > @@ -42,6 +42,7 @@
> >  #include <linux/delayacct.h>
> >  #include <linux/sysctl.h>
> >  #include <linux/oom.h>
> > +#include <linux/res_counter.h>
> >
> >  #include <asm/tlbflush.h>
> >  #include <asm/div64.h>
> > @@ -2308,6 +2309,120 @@ static bool sleeping_prematurely(pg_data
> >  		return !all_zones_ok;
> >  }
> >
> > +#ifdef CONFIG_CGROUP_MEM_RES_CTLR
> > +/*
> > + * The function is used for per-memcg LRU. It scans all the zones of the
> > + * node and returns the nr_scanned and nr_reclaimed.
> > + */
> > +/*
> > + * Limit of scanning per iteration. For round-robin.
> > + */
> > +#define MEMCG_BGSCAN_LIMIT	(2048)
> > +
> > +static void
> > +shrink_memcg_node(int nid, int priority, struct scan_control *sc)
> > +{
> > +	unsigned long total_scanned = 0;
> > +	struct mem_cgroup *mem_cont = sc->mem_cgroup;
> > +	int i;
> > +
> > +	/*
> > +	 * This dma->highmem order is consistent with global reclaim.
> > +	 * We do this because the page allocator works in the opposite
> > +	 * direction although memcg user pages are mostly allocated at
> > +	 * highmem.
> > +	 */
> > +	for (i = 0;
> > +	     (i < NODE_DATA(nid)->nr_zones) &&
> > +	     (total_scanned < MEMCG_BGSCAN_LIMIT);
> > +	     i++) {
> > +		struct zone *zone = NODE_DATA(nid)->node_zones + i;
> > +		struct zone_reclaim_stat *zrs;
> > +		unsigned long scan, rotate;
> > +
> > +		if (!populated_zone(zone))
> > +			continue;
> > +		scan = mem_cgroup_zone_reclaimable_pages(mem_cont, nid, i);
> > +		if (!scan)
> > +			continue;
> > +		/* If recent memory reclaim on this zone doesn't get good */
> > +		zrs = get_reclaim_stat(zone, sc);
> > +		scan = zrs->recent_scanned[0] + zrs->recent_scanned[1];
> > +		rotate = zrs->recent_rotated[0] + zrs->recent_rotated[1];
> > +
> > +		if (rotate > scan/2)
> > +			sc->may_writepage = 1;
> > +
> > +		sc->nr_scanned = 0;
> > +		shrink_zone(priority, zone, sc);
> > +		total_scanned += sc->nr_scanned;
> > +		sc->may_writepage = 0;
> > +	}
> > +	sc->nr_scanned = total_scanned;
> > +}

> I see MEMCG_BGSCAN_LIMIT is a macro newly defined compared to the previous
> post. So now the number of pages to scan is capped at 2k for each memcg;
> does it make a difference for big vs. small cgroups?
>
Now, no difference. One reason is that low_watermark - high_watermark is
limited to 4MB at most. It should be a static 4MB in many cases, and 2048
pages corresponds to scanning 8MB, i.e. twice low_wmark - high_wmark.
Another reason is that I haven't had enough time to consider tuning this.
With MEMCG_BGSCAN_LIMIT, the round-robin can be simply fair, and I think
it's a good starting point.

If the memory eater is slow enough (because its threads need to do some work
on the allocated memory), this shrink_mem_cgroup() works fine and helps avoid
hitting the limit. Here, the amount of dirty pages is troublesome. The penalty
for a cpu-eating (hard-to-reclaim) cgroup is given by 'delay' (see patch 7).
This patch's congestion_wait is too bad and will be replaced by 'delay' in
patch 7. In short, if memcg scanning does not seem successful, the memcg gets
an HZ/10 delay before its next work item.
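To make the sizing above concrete, here is a minimal stand-alone sketch of
that arithmetic. It is illustration only, not kernel code; the 4KB page size
and the 4MB low_wmark - high_wmark gap are assumptions taken from the numbers
in this thread.

/*
 * Stand-alone illustration of the MEMCG_BGSCAN_LIMIT sizing discussed above.
 * Assumptions (not read from the kernel): 4KB pages, 4MB watermark gap.
 */
#include <stdio.h>

int main(void)
{
	const unsigned long page_size = 4096;		/* assumed 4KB pages */
	const unsigned long bgscan_limit = 2048;	/* MEMCG_BGSCAN_LIMIT */
	const unsigned long wmark_gap = 4UL << 20;	/* low_wmark - high_wmark, capped at 4MB */
	unsigned long budget = bgscan_limit * page_size;

	printf("per-iteration scan budget: %lu MB\n", budget >> 20);	/* -> 8 MB */
	printf("budget / watermark gap:    %lux\n", budget / wmark_gap);	/* -> 2x */
	return 0;
}

With those assumptions the per-iteration budget comes out to 8MB, i.e. twice
the watermark gap, which is where the 2048-page cap comes from.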
Once we have dirty_ratio plus IO-less dirty throttling, I think we'll see much
better fairness in this watermark-reclaim round-robin.

Thanks,
-Kame