On Mon, May 23, 2011 at 5:11 PM, KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> On Mon, 23 May 2011 16:36:20 -0700
> Ying Han <yinghan@xxxxxxxxxx> wrote:
>
>> On Fri, May 20, 2011 at 4:56 PM, Hiroyuki Kamezawa
>> <kamezawa.hiroyuki@xxxxxxxxx> wrote:
>> > 2011/5/21 Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>:
>> >> On Fri, 20 May 2011 12:46:36 +0900
>> >> KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
>> >>
>> >>> This patch adds logic to keep a usage margin to the limit in an asynchronous way.
>> >>> When the usage goes over some threshold (determined automatically), asynchronous
>> >>> memory reclaim runs and shrinks memory to limit - MEMCG_ASYNC_STOP_MARGIN.
>> >>>
>> >>> By this, there will be no difference in the total amount of CPU used to
>> >>> scan the LRU
>> >>
>> >> This is not true if "don't writepage at all (revisit this when
>> >> dirty_ratio comes.)" is true.  Skipping over dirty pages can cause
>> >> larger amounts of CPU consumption.
>> >>
>> >>> but we'll have a chance to make use of the wait time of applications
>> >>> for freeing memory. For example, when an application reads a file or socket,
>> >>> it needs to wait for the newly allocated memory to be filled. Async reclaim can
>> >>> make use of that time and gives a chance to reduce latency by background work.
>> >>>
>> >>> This patch only includes the required hooks to trigger async reclaim and the
>> >>> user interfaces. Core logic will be in the following patches.
>> >>>
>> >>>
>> >>> ...
>> >>>
>> >>>  /*
>> >>> + * For example, with transparent hugepages, a memory reclaim scan at hitting
>> >>> + * the limit can take very long to reclaim HPAGE_SIZE of memory. This increases
>> >>> + * the latency of page faults and may cause fallback. At usual page allocation,
>> >>> + * we'll see some (shorter) latency, too. To reduce latency, it's worthwhile
>> >>> + * to free memory in the background to keep a margin to the limit. This consumes
>> >>> + * cpu but we'll have a chance to make use of the wait time of applications
>> >>> + * (read disk etc..) by asynchronous reclaim.
>> >>> + *
>> >>> + * This async reclaim tries to reclaim HPAGE_SIZE * 2 of pages when the margin
>> >>> + * to the limit is smaller than HPAGE_SIZE * 2. This will be enabled
>> >>> + * automatically when the limit is set and it's greater than the threshold.
>> >>> + */
>> >>> +#if HPAGE_SIZE != PAGE_SIZE
>> >>> +#define MEMCG_ASYNC_LIMIT_THRESH      (HPAGE_SIZE * 64)
>> >>> +#define MEMCG_ASYNC_MARGIN            (HPAGE_SIZE * 4)
>> >>> +#else /* make the margin as 4M bytes */
>> >>> +#define MEMCG_ASYNC_LIMIT_THRESH      (128 * 1024 * 1024)
>> >>> +#define MEMCG_ASYNC_MARGIN            (8 * 1024 * 1024)
>> >>> +#endif
>> >>
>> >> Document them, please.  How are they used, what are their units.
>> >>
>> >
>> > will do.
>> >
>> >
>> >>> +static void mem_cgroup_may_async_reclaim(struct mem_cgroup *mem);
>> >>> +
>> >>> +/*
>> >>>  * The memory controller data structure. The memory controller controls both
>> >>>  * page cache and RSS per cgroup. We would eventually like to provide
>> >>>  * statistics based on the statistics developed by Rik Van Riel for clock-pro,
>> >>> @@ -278,6 +303,12 @@ struct mem_cgroup {
>> >>>      */
>> >>>     unsigned long   move_charge_at_immigrate;
>> >>>     /*
>> >>> +    * Checks for async reclaim.
>> >>> +    */
>> >>> +    unsigned long   async_flags;
>> >>> +#define AUTO_ASYNC_ENABLED  (0)
>> >>> +#define USE_AUTO_ASYNC      (1)
>> >>
>> >> These are really confusing.  I looked at the implementation and at the
>> >> documentation file and I'm still scratching my head.  I can't work out
>> >> why they exist.
>> >> With the amount of effort I put into it ;)
>> >>
>> >> Also, AUTO_ASYNC_ENABLED and USE_AUTO_ASYNC have practically the same
>> >> meaning, which doesn't help things.
>> >>
>> > Ah, yes it's confusing.
>>
>> Sorry, I was confused by the memory.async_control interface. I assume
>> that is the knob to turn on/off the bg reclaim on a per-memcg basis. But
>> when I tried to turn it off, it did not seem to work:
>>
>> $ cat /proc/7248/cgroup
>> 3:memory:/A
>>
>> $ cat /dev/cgroup/memory/A/memory.async_control
>> 0
>>
>
> If enabled and a kworker runs, this shows "3", for now.
> I'll make this simpler in the next post.
>
>> Then I can see the kworkers start running when memcg A is under
>> memory pressure. There were no other memcgs configured under root.
>
>
> What kworkers? For example, many kworkers run for ext4 on my host.
> If kworker/u:x works, it may be for memcg (on my host).

I am fairly sure they are kworkers from memcg. They start running
right after my test starts and then stop when I kill that test.

$ cat /dev/cgroup/memory/A/memory.limit_in_bytes
2147483648

$ cat /dev/cgroup/memory/A/memory.async_control
0

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  393 root      20   0     0    0    0 S   54  0.0   1:30.36 kworker/7:1
  391 root      20   0     0    0    0 S   51  0.0   1:42.35 kworker/5:1
  390 root      20   0     0    0    0 S   43  0.0   1:45.55 kworker/4:1
   11 root      20   0     0    0    0 S   40  0.0   1:36.98 kworker/1:0
   14 root      20   0     0    0    0 S   36  0.0   1:47.04 kworker/0:1
  389 root      20   0     0    0    0 S   24  0.0   0:47.35 kworker/3:1
20071 root      20   0 20.0g  497m 497m D   12  1.5   0:04.99 memtoy
  392 root      20   0     0    0    0 S   10  0.0   1:26.43 kworker/6:1

--Ying

> Ok, I'll add statistics in v3.
>
> Thanks,
> -Kame
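
For readers trying to follow the two flags and the two byte-valued thresholds
discussed in the quoted patch hunks, here is a minimal sketch of how
mem_cgroup_may_async_reclaim() could tie them together. This is not the code
from the patch: the core reclaim logic is deferred to later patches in the
series, so the sketch assumes the existing res_counter_margin() helper and a
hypothetical queue_mem_cgroup_async_reclaim() worker entry point.

    /*
     * Sketch only -- not the patch's implementation.
     * AUTO_ASYNC_ENABLED and USE_AUTO_ASYNC are bit numbers in
     * mem->async_flags; MEMCG_ASYNC_LIMIT_THRESH and MEMCG_ASYNC_MARGIN
     * are byte values.
     */
    static void mem_cgroup_may_async_reclaim(struct mem_cgroup *mem)
    {
            /* Bit 0: set when the limit is large enough
             * (limit > MEMCG_ASYNC_LIMIT_THRESH) for background
             * reclaim to pay off.
             */
            if (!test_bit(AUTO_ASYNC_ENABLED, &mem->async_flags))
                    return;

            /* Only act once the free margin to the limit has shrunk
             * below MEMCG_ASYNC_MARGIN bytes.
             */
            if (res_counter_margin(&mem->res) >= MEMCG_ASYNC_MARGIN)
                    return;

            /* Bit 1: a background worker is already queued or running;
             * avoid queueing the work item twice.
             */
            if (!test_and_set_bit(USE_AUTO_ASYNC, &mem->async_flags))
                    queue_mem_cgroup_async_reclaim(mem);    /* hypothetical */
    }

Read this way, Kamezawa's remark that memory.async_control shows "3" while a
kworker runs also makes sense: with bit 0 (AUTO_ASYNC_ENABLED) and bit 1
(USE_AUTO_ASYNC) both set, a raw dump of async_flags reads as 3.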