Re: [PATCH 6/8] memcg asynchronous memory reclaim interface


 



On Mon, 23 May 2011 16:36:20 -0700
Ying Han <yinghan@xxxxxxxxxx> wrote:

> On Fri, May 20, 2011 at 4:56 PM, Hiroyuki Kamezawa
> <kamezawa.hiroyuki@xxxxxxxxx> wrote:
> > 2011/5/21 Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>:
> >> On Fri, 20 May 2011 12:46:36 +0900
> >> KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> >>
> >>> This patch adds logic to keep a usage margin to the limit in an asynchronous way.
> >>> When the usage goes over some threshold (determined automatically), asynchronous
> >>> memory reclaim runs and shrinks memory to limit - MEMCG_ASYNC_STOP_MARGIN.
> >>>
> >>> By this, there will be no difference in the total amount of cpu used to
> >>> scan the LRU
> >>
> >> This is not true if "don't writepage at all (revisit this when
> >> dirty_ratio comes.)" is true.  Skipping over dirty pages can cause
> >> larger amounts of CPU consumption.
> >>
> >>> but we'll have a chance to make use of the wait time of applications
> >>> for freeing memory. For example, when an application reads a file or socket,
> >>> it needs to wait to fill the newly allocated memory. Async reclaim can make use
> >>> of that time and give a chance to reduce latency by background work.
> >>>
> >>> This patch only includes the required hooks to trigger async reclaim and the user
> >>> interfaces. The core logic will be in the following patches.
> >>>
> >>>
> >>> ...
> >>>
> >>>  /*
> >>> + * For example, with transparent hugepages, the memory reclaim scan at hitting the
> >>> + * limit can take very long to reclaim HPAGE_SIZE of memory. This increases
> >>> + * page fault latency and may cause fallback. At usual page allocation,
> >>> + * we'll see some (shorter) latency, too. To reduce latency, it helps
> >>> + * to free memory in the background to keep a margin to the limit. This consumes
> >>> + * cpu, but we'll have a chance to make use of the wait time of applications
> >>> + * (read disk etc..) by asynchronous reclaim.
> >>> + *
> >>> + * This async reclaim tries to reclaim HPAGE_SIZE * 2 of pages when margin
> >>> + * to the limit is smaller than HPAGE_SIZE * 2. This will be enabled
> >>> + * automatically when the limit is set and it's greater than the threshold.
> >>> + */
> >>> +#if HPAGE_SIZE != PAGE_SIZE
> >>> +#define MEMCG_ASYNC_LIMIT_THRESH      (HPAGE_SIZE * 64)
> >>> +#define MEMCG_ASYNC_MARGIN            (HPAGE_SIZE * 4)
> >>> +#else /* make the margin 8M bytes */
> >>> +#define MEMCG_ASYNC_LIMIT_THRESH      (128 * 1024 * 1024)
> >>> +#define MEMCG_ASYNC_MARGIN            (8 * 1024 * 1024)
> >>> +#endif
> >>
> >> Document them, please.  How are they used, what are their units?
> >>
> >
> > will do.
> >
> >
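
To answer the unit question right here: both values are in bytes. Roughly, the
intended check looks like the sketch below (the function name and argument
types are only for illustration, not the exact code in this patch):

	/*
	 * Sketch only.  Async reclaim is enabled when the limit is set and
	 * is larger than MEMCG_ASYNC_LIMIT_THRESH; once enabled, it is
	 * kicked whenever the margin (limit - usage) drops below
	 * MEMCG_ASYNC_MARGIN.  Both constants are byte values.
	 */
	static bool mem_cgroup_should_async_reclaim(u64 limit, u64 usage)
	{
		if (limit < MEMCG_ASYNC_LIMIT_THRESH)
			return false;	/* limit too small, keep it disabled */
		return limit - usage < MEMCG_ASYNC_MARGIN;	/* margin too thin */
	}
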
> >>> +static void mem_cgroup_may_async_reclaim(struct mem_cgroup *mem);
> >>> +
> >>> +/*
> >>>  * The memory controller data structure. The memory controller controls both
> >>>  * page cache and RSS per cgroup. We would eventually like to provide
> >>>  * statistics based on the statistics developed by Rik Van Riel for clock-pro,
> >>> @@ -278,6 +303,12 @@ struct mem_cgroup {
> >>>       */
> >>>      unsigned long   move_charge_at_immigrate;
> >>>      /*
> >>> +     * Checks for async reclaim.
> >>> +     */
> >>> +    unsigned long   async_flags;
> >>> +#define AUTO_ASYNC_ENABLED   (0)
> >>> +#define USE_AUTO_ASYNC       (1)
> >>
> >> These are really confusing.  I looked at the implementation and at the
> >> documentation file and I'm still scratching my head.  I can't work out
> >> why they exist.  With the amount of effort I put into it ;)
> >>
> >> Also, AUTO_ASYNC_ENABLED and USE_AUTO_ASYNC have practically the same
> >> meaning, which doesn't help things.
> >>
> > Ah, yes it's confusing.
> 
> Sorry, I was confused by the memory.async_control interface. I assume
> that is the knob to turn on/off the bg reclaim on a per-memcg basis. But
> when I tried to turn it off, it did not seem to work well:
> 
> $ cat /proc/7248/cgroup
> 3:memory:/A
> 
> $ cat /dev/cgroup/memory/A/memory.async_control
> 0
> 

If async reclaim is enabled and the kworker runs, this shows "3", for now.
I'll make this simpler in the next post.
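
In other words, the "3" is just both bits of async_flags set; the snippet
below only illustrates the bit encoding (it is not the actual show routine,
and the comments are my reading of the two bits):

	/* bit 0: async reclaim enabled by the user, bit 1: kworker in use */
	unsigned long async_flags = (1 << AUTO_ASYNC_ENABLED) |
				    (1 << USE_AUTO_ASYNC);	/* reads as "3" */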

> Then I can see the kworkers start running when memcg A is under
> memory pressure. There were no other memcgs configured under root.


Which kworkers? For example, many kworkers run for ext4 on my host.
If kworker/u:x is working, it may be doing memcg reclaim (on my host).

Ok, I'll add statistics in v3.
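
Only as a rough idea of what such per-memcg counters might look like (the
struct and field names below are tentative, nothing like this is in the
current series):

	/* tentative async reclaim statistics, names not final */
	struct memcg_async_stat {
		unsigned long nr_async_runs;	/* how often the worker was kicked */
		unsigned long nr_async_scanned;	/* pages scanned by async reclaim */
		unsigned long nr_async_freed;	/* pages reclaimed by async reclaim */
	};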

Thanks,
-Kame


