On Sun, Apr 17, 2011 at 7:22 PM, Minchan Kim <minchan.kim@xxxxxxxxx> wrote:
Correct comment style.

On Sat, Apr 16, 2011 at 8:23 AM, Ying Han <yinghan@xxxxxxxxxx> wrote:
> This adds the mechanism for background reclaim in which we remember the
> last scanned node and always start from the next one each time.
> The simple round-robin fashion provides fairness between nodes for
> each memcg.
>
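For context, a minimal sketch of the round-robin selection described above (illustrative only; the actual mm/memcontrol.c hunk is trimmed in the quote below, so the wrap-around and empty-mask handling here are assumptions, not the patch's exact code):

/*
 * Illustrative sketch: pick the node after mem->last_scanned_node from
 * @nodes, wrapping back to the first node once next_node() runs off the
 * end.  Because last_scanned_node is initialized to MAX_NUMNODES (see
 * the v5 changelog below), the very first call wraps and starts from
 * the first allowed node.
 */
int mem_cgroup_select_victim_node(struct mem_cgroup *mem,
                                  const nodemask_t *nodes)
{
        int next_nid;

        next_nid = next_node(mem->last_scanned_node, *nodes);
        if (next_nid == MAX_NUMNODES)
                next_nid = first_node(*nodes);
        /* Fall back to the local node if @nodes turned out to be empty. */
        if (next_nid == MAX_NUMNODES)
                next_nid = numa_node_id();

        mem->last_scanned_node = next_nid;
        return next_nid;
}
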
> changelog v5..v4:
> 1. initialize the last_scanned_node to MAX_NUMNODES.
>
> changelog v4..v3:
> 1. split off from the per-memcg background reclaim patch.
>
> Signed-off-by: Ying Han <yinghan@xxxxxxxxxx>
> ---
> include/linux/memcontrol.h | 3 +++
> mm/memcontrol.c | 35 +++++++++++++++++++++++++++++++++++
> 2 files changed, 38 insertions(+), 0 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index f7ffd1f..d4ff7f2 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -88,6 +88,9 @@ extern int mem_cgroup_init_kswapd(struct mem_cgroup *mem,
> struct kswapd *kswapd_p);
> extern void mem_cgroup_clear_kswapd(struct mem_cgroup *mem);
> extern wait_queue_head_t *mem_cgroup_kswapd_wait(struct mem_cgroup *mem);
> +extern int mem_cgroup_last_scanned_node(struct mem_cgroup *mem);
> +extern int mem_cgroup_select_victim_node(struct mem_cgroup *mem,
> + const nodemask_t *nodes);
>
> static inline
> int mm_match_cgroup(const struct mm_struct *mm, const struct mem_cgroup *cgroup)
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 8761a6f..b92dc13 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -279,6 +279,11 @@ struct mem_cgroup {
> u64 high_wmark_distance;
> u64 low_wmark_distance;
>
> + /* While doing per cgroup background reclaim, we cache the
Thanks. Will change in the next post.
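(For reference, kernel style for multi-line comments puts the opening /* on a line of its own, so the reworked comment would presumably look like:

/*
 * While doing per cgroup background reclaim, we cache the
 * ...
 */

with the rest of the original comment text unchanged.)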
--Ying