Re: [PATCH V4 06/10] Per-memcg background reclaim.

On Thu, Apr 14, 2011 at 6:11 PM, KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
On Thu, 14 Apr 2011 15:54:25 -0700
Ying Han <yinghan@xxxxxxxxxx> wrote:

> This is the main loop of per-memcg background reclaim, implemented in the
> function balance_mem_cgroup_pgdat().
>
> The function performs a priority loop similar to global reclaim. During each
> iteration it invokes balance_pgdat_node() for every node on the system, which
> is another new function that performs background reclaim per node. After
> reclaiming each node, it checks mem_cgroup_watermark_ok() and breaks out of
> the priority loop if that returns true.
>
> changelog v4..v3:
> 1. split select_victim_node and zone_unreclaimable into separate patches.
> 2. remove the logic that tries to do zone balancing.
>
> changelog v3..v2:
> 1. change mz->all_unreclaimable to be boolean.
> 2. define ZONE_RECLAIMABLE_RATE macro shared by zone and per-memcg reclaim.
> 3. some more clean-up.
>
> changelog v2..v1:
> 1. move the per-memcg per-zone clear_unreclaimable into the uncharge stage.
> 2. share kswapd_run/kswapd_stop between per-memcg and global background
> reclaim.
> 3. name the per-memcg kswapd thread "memcg-id" (css->id), while the global
> kswapd keeps its existing name.
> 4. fix a race in kswapd_stop where the per-memcg per-zone info could be
> accessed after being freed.
> 5. add fairness to the zonelist walk by having the memcg remember the last
> zone reclaimed from.
>
> Signed-off-by: Ying Han <yinghan@xxxxxxxxxx>
> ---
>  mm/vmscan.c |  161 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 161 insertions(+), 0 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 4deb9c8..b8345d2 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -47,6 +47,8 @@
>
>  #include <linux/swapops.h>
>
> +#include <linux/res_counter.h>
> +
>  #include "internal.h"
>
>  #define CREATE_TRACE_POINTS
> @@ -111,6 +113,8 @@ struct scan_control {
>        * are scanned.
>        */
>       nodemask_t      *nodemask;
> +
> +     int priority;
>  };
>
>  #define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))
> @@ -2632,11 +2636,168 @@ static void kswapd_try_to_sleep(struct kswapd *kswapd_p, int order,
>       finish_wait(wait_h, &wait);
>  }
>
> +#ifdef CONFIG_CGROUP_MEM_RES_CTLR
> +/*
> + * The function is used for per-memcg LRU. It scans all the zones of the
> + * node and reports nr_scanned and nr_reclaimed back through @sc.
> + */
> +static void balance_pgdat_node(pg_data_t *pgdat, int order,
> +                                     struct scan_control *sc)
> +{
> +     int i;
> +     unsigned long total_scanned = 0;
> +     struct mem_cgroup *mem_cont = sc->mem_cgroup;
> +     int priority = sc->priority;
> +
> +     /*
> +      * Now scan the zones in the dma->highmem direction, covering
> +      * every zone of the node.
> +      *
> +      * We do this because the page allocator works in the opposite
> +      * direction.  This prevents the page allocator from allocating
> +      * pages behind kswapd's direction of progress, which would
> +      * cause too much scanning of the lower zones.
> +      */

I guess this comment is a cut-and-paste from global kswapd. That reasoning
holds when alloc_page() stalls... hmm, I'd like to think about whether the
dma->highmem direction is right in this case.

As you know, memcg works against the user's memory, and user memory should
mostly be in the highmem zone. Memcg-kswapd is not for memory shortage, but
for voluntary page dropping by the _user_.

If this memcg-kswapd drops pages from the lower zones first, ah, ok, that's
good for the system, because memcg's pages should be in the higher zones
anyway if we have free memory.

So I think the rationale for dma->highmem here is different from global
kswapd's.
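
If we flipped it, a minimal sketch (the same loop as in the patch, just
with the walk reversed; only to illustrate the direction question, not a
counter-patch) would be:

	for (i = pgdat->nr_zones - 1; i >= 0; i--) {
		struct zone *zone = pgdat->node_zones + i;

		/* drop user (highmem) pages before touching lower zones */
		if (!populated_zone(zone))
			continue;

		sc->nr_scanned = 0;
		shrink_zone(priority, zone, sc);
		total_scanned += sc->nr_scanned;
	}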

> +     for (i = 0; i < pgdat->nr_zones; i++) {
> +             struct zone *zone = pgdat->node_zones + i;
> +
> +             if (!populated_zone(zone))
> +                     continue;
> +
> +             sc->nr_scanned = 0;
> +             shrink_zone(priority, zone, sc);
> +             total_scanned += sc->nr_scanned;
> +
> +             /*
> +              * If we've done a decent amount of scanning and
> +              * the reclaim ratio is low, start doing writepage
> +              * even in laptop mode
> +              */
> +             if (total_scanned > SWAP_CLUSTER_MAX * 2 &&
> +                 total_scanned > sc->nr_reclaimed + sc->nr_reclaimed / 2) {
> +                     sc->may_writepage = 1;
> +             }
> +     }
> +
> +     sc->nr_scanned = total_scanned;
> +     return;
> +}
> +
> +/*
> + * Per cgroup background reclaim.
> + * TODO: Drop the order argument since memcg always does order-0 reclaim.
> + */
> +static unsigned long balance_mem_cgroup_pgdat(struct mem_cgroup *mem_cont,
> +                                           int order)
> +{
> +     int i, nid;
> +     int start_node;
> +     int priority;
> +     bool wmark_ok;
> +     int loop;
> +     pg_data_t *pgdat;
> +     nodemask_t do_nodes;
> +     unsigned long total_scanned;
> +     struct scan_control sc = {
> +             .gfp_mask = GFP_KERNEL,
> +             .may_unmap = 1,
> +             .may_swap = 1,
> +             .nr_to_reclaim = ULONG_MAX,
> +             .swappiness = vm_swappiness,
> +             .order = order,
> +             .mem_cgroup = mem_cont,
> +     };
> +
> +loop_again:
> +     do_nodes = NODE_MASK_NONE;
> +     sc.may_writepage = !laptop_mode;

I think may_writepage should start from '0' always. We're not sure the
system is under memory shortage... we just want to release memory
voluntarily. Writepage will add huge costs, I guess.

For example,
       sc.may_writepage = !!loop
may be better for memcg.
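
A sketch of that, assuming a hypothetical retry counter "pass" (the
patch's existing "loop" variable counts the node walk inside a priority
iteration, so it can't be reused as-is):

	int pass = 0;
loop_again:
	do_nodes = NODE_MASK_NONE;
	/* first pass stays cheap; only retries may write pages back */
	sc.may_writepage = !!pass;
	...
	if (!wmark_ok) {
		pass++;
		goto loop_again;
	}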

BTW, you set nr_to_reclaim to ULONG_MAX here and don't modify it later.

I think you should add some logic to set it to the right value.

For example, before calling shrink_zone():

sc->nr_to_reclaim = min(SWAP_CLUSTER_MAX, memcg_usage_in_this_zone() / 100);  /* 1% of this zone */

if we love 'fair pressure for each zone'.
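
Slotted into balance_pgdat_node()'s zone loop, that would look roughly
like this (memcg_usage_in_this_zone() is only a placeholder; such a
helper doesn't exist yet and would need to be added, e.g. by summing
zone_nr_lru_pages() over the evictable LRUs):

	/* cap the per-zone target at 1% of this memcg's usage in the zone */
	sc->nr_to_reclaim = min(SWAP_CLUSTER_MAX,
				memcg_usage_in_this_zone(zone, sc) / 100);
	sc->nr_scanned = 0;
	shrink_zone(priority, zone, sc);
	total_scanned += sc->nr_scanned;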

> +     sc.nr_reclaimed = 0;
> +     total_scanned = 0;
> +
> +     for (priority = DEF_PRIORITY; priority >= 0; priority--) {
> +             sc.priority = priority;
> +             wmark_ok = false;
> +             loop = 0;
> +
> +             /* The swap token gets in the way of swapout... */
> +             if (!priority)
> +                     disable_swap_token();
> +
> +             if (priority == DEF_PRIORITY)
> +                     do_nodes = node_states[N_ONLINE];
> +
> +             while (1) {
> +                     nid = mem_cgroup_select_victim_node(mem_cont,
> +                                                     &do_nodes);
> +
> +                     /*
> +                      * Indicate we have cycled the nodelist once.
> +                      * TODO: we might add MAX_RECLAIM_LOOP to prevent
> +                      * kswapd from burning cpu cycles.
> +                      */
> +                     if (loop == 0) {
> +                             start_node = nid;
> +                             loop++;
> +                     } else if (nid == start_node)
> +                             break;
> +
> +                     pgdat = NODE_DATA(nid);
> +                     balance_pgdat_node(pgdat, order, &sc);
> +                     total_scanned += sc.nr_scanned;
> +
> +                     /*
> +                      * Check whether this node still has at least
> +                      * one reclaimable zone; if not, clear it from
> +                      * do_nodes below.
> +                      */
> +                     for (i = pgdat->nr_zones - 1; i >= 0; i--) {
> +                             struct zone *zone = pgdat->node_zones + i;
> +
> +                             if (!populated_zone(zone))
> +                                     continue;

How about checking whether the memcg has pages on this node?

Well, I might be able to add the following logic:

unsigned long scan = 0;
enum lru_list l;

for_each_evictable_lru(l)
	scan += zone_nr_lru_pages(zone, &sc, l);

if (!populated_zone(zone) || !scan)
	continue;

 
> +                     }
> +                     if (i < 0)
> +                             node_clear(nid, do_nodes);
> +
> +                     if (mem_cgroup_watermark_ok(mem_cont,
> +                                                     CHARGE_WMARK_HIGH)) {
> +                             wmark_ok = true;
> +                             goto out;
> +                     }
> +
> +                     if (nodes_empty(do_nodes)) {
> +                             wmark_ok = true;
> +                             goto out;
> +                     }
> +             }
> +
> +             /* All the nodes are unreclaimable, kswapd is done */
> +             if (nodes_empty(do_nodes)) {
> +                     wmark_ok = true;
> +                     goto out;
> +             }

Can this happen?

Hmm. This looks like a duplicate. I was thinking of the "break" case, but the
nodes_empty() check inside the while loop should have already captured it.
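
With the duplicate dropped, the tail of the priority loop would just be
(a sketch of the simplification being discussed, not a new patch):

	/* the nodes_empty() case is already handled inside the while loop */
	if (total_scanned && priority < DEF_PRIORITY - 2)
		congestion_wait(WRITE, HZ/10);

	if (sc.nr_reclaimed >= SWAP_CLUSTER_MAX)
		break;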

--Ying 


> +
> +             if (total_scanned && priority < DEF_PRIORITY - 2)
> +                     congestion_wait(WRITE, HZ/10);
> +
> +             if (sc.nr_reclaimed >= SWAP_CLUSTER_MAX)
> +                     break;
> +     }
> +out:
> +     if (!wmark_ok) {
> +             cond_resched();
> +
> +             try_to_freeze();
> +
> +             goto loop_again;
> +     }
> +
> +     return sc.nr_reclaimed;
> +}
> +#else
>  static unsigned long balance_mem_cgroup_pgdat(struct mem_cgroup *mem_cont,
>                                                       int order)
>  {
>       return 0;
>  }
> +#endif
>


Thanks,
-Kame


