On Tue, Apr 19, 2011 at 6:15 PM, KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
On Mon, 18 Apr 2011 20:57:45 -0700
Ying Han <yinghan@xxxxxxxxxx> wrote:
> This add the API which exports per-memcg kswapd thread pid. The kswapd
> thread is named as "memcg_" + css_id, and the pid can be used to put
> kswapd thread into cpu cgroup later.
>
> $ mkdir /dev/cgroup/memory/A
> $ cat /dev/cgroup/memory/A/memory.kswapd_pid
> memcg_null 0
>
> $ echo 500m >/dev/cgroup/memory/A/memory.limit_in_bytes
> $ echo 50m >/dev/cgroup/memory/A/memory.high_wmark_distance
> $ ps -ef | grep memcg
> root 6727 2 0 14:32 ? 00:00:00 [memcg_3]
> root 6729 6044 0 14:32 ttyS0 00:00:00 grep memcg
>
> $ cat memory.kswapd_pid
> memcg_3 6727
>
> changelog v6..v5
> 1. Remove the legacy spinlock left over from the previous post.
>
> changelog v5..v4
> 1. Initialize the memcg-kswapd pid to -1 instead of 0.
> 2. Remove the kswapds_spinlock.
>
> changelog v4..v3
> 1. Add the API based on KAMEZAWA's request on patch v3.
>
> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> Signed-off-by: Ying Han <yinghan@xxxxxxxxxx>
I'm very sorry, but please drop this. There is a discussion that we
should use a thread pool rather than one thread per memcg. If so, we
would need to remove this interface, and we would see a regression.
I think we will eventually need some control knobs such as
priority/share in the thread pool (so I want to use the cpu cgroup);
if not, there will be unfair utilization of cpu/threads. But for now,
it seems too early to add this.
This patch is nicely self-contained, and I have no problem dropping it
for now. I won't include it in my next post.
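For reference, the intended use was roughly the following sketch: read the
pid from memory.kswapd_pid and attach it to a cpu cgroup. The cgroup paths
and share value below are illustrative, not part of the patch.

```shell
# The file format is "<thread name> <pid>", e.g. "memcg_3 6727".
# Parse out the pid field (a sample line is used here; on a real system
# you would read /dev/cgroup/memory/A/memory.kswapd_pid instead).
line="memcg_3 6727"
pid=$(echo "$line" | awk '{print $2}')
echo "kswapd pid: $pid"

# The thread could then be placed in a cpu cgroup, e.g.:
#   mkdir -p /dev/cgroup/cpu/kswapd_A
#   echo 512 > /dev/cgroup/cpu/kswapd_A/cpu.shares
#   echo "$pid" > /dev/cgroup/cpu/kswapd_A/tasks
```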
--Ying
> ---
> mm/memcontrol.c | 31 +++++++++++++++++++++++++++++++
> 1 files changed, 31 insertions(+), 0 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index d5b284c..0b108b9 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -4533,6 +4533,33 @@ static int mem_cgroup_wmark_read(struct cgroup *cgrp,
> return 0;
> }
>
> +static int mem_cgroup_kswapd_pid_read(struct cgroup *cgrp,
> + struct cftype *cft, struct cgroup_map_cb *cb)
> +{
> + struct mem_cgroup *mem = mem_cgroup_from_cont(cgrp);
> + struct task_struct *kswapd_thr = NULL;
> + struct kswapd *kswapd_p = NULL;
> + wait_queue_head_t *wait;
> + char name[TASK_COMM_LEN];
> + pid_t pid = -1;
> +
> + sprintf(name, "memcg_null");
> +
> + wait = mem_cgroup_kswapd_wait(mem);
> + if (wait) {
> + kswapd_p = container_of(wait, struct kswapd, kswapd_wait);
> + kswapd_thr = kswapd_p->kswapd_task;
> + if (kswapd_thr) {
> + get_task_comm(name, kswapd_thr);
> + pid = kswapd_thr->pid;
> + }
> + }
> +
> + cb->fill(cb, name, pid);
> +
> + return 0;
> +}
> +
> static int mem_cgroup_oom_control_read(struct cgroup *cgrp,
> struct cftype *cft, struct cgroup_map_cb *cb)
> {
> @@ -4650,6 +4677,10 @@ static struct cftype mem_cgroup_files[] = {
> .name = "reclaim_wmarks",
> .read_map = mem_cgroup_wmark_read,
> },
> + {
> + .name = "kswapd_pid",
> + .read_map = mem_cgroup_kswapd_pid_read,
> + },
> };
>
> #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
> --
> 1.7.3.1
>
>