Re: memcg writeback (was Re: [Lsf-pc] [LSF/MM TOPIC] memcg topics.)

(removed lsf-pc@xxxxxxxxxxxxxxxxxxxxxxxxxx because this really isn't a
program committee matter)

On Wed, Feb 8, 2012 at 1:31 AM, Wu Fengguang <fengguang.wu@xxxxxxxxx> wrote:
> On Tue, Feb 07, 2012 at 11:55:05PM -0800, Greg Thelen wrote:
>> On Fri, Feb 3, 2012 at 1:40 AM, Wu Fengguang <fengguang.wu@xxxxxxxxx> wrote:
>> > If moving dirty pages out of the memcg to the 20% global dirty pages
>> > pool on page reclaim, the above OOM can be avoided. It does change the
>> > meaning of memory.limit_in_bytes in that the memcg tasks can now
>> > actually consume more pages (up to the shared global 20% dirty limit).
>>
>> This seems like an easy change, but unfortunately the global 20% pool
>> has some shortcomings for my needs:
>>
>> 1. the global 20% pool is not moderated.  One cgroup can dominate it
>>     and deny service to other cgroups.
>
> It is moderated by balance_dirty_pages() -- in terms of dirty ratelimit.
> And you have the freedom to control the bandwidth allocation with some
> async write I/O controller.
>
> Even though there is no direct control of dirty pages, we can roughly
> get it as the side effect of rate control. Given
>
>        ratelimit_cgroup_A = 2 * ratelimit_cgroup_B
>
> There will naturally be more dirty pages from cgroup A for the flusher
> to work on, and the dirty pages will be roughly balanced around
>
>        nr_dirty_cgroup_A = 2 * nr_dirty_cgroup_B
>
> when writeout bandwidths for their dirty pages are equal.
>
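
To check that I follow the steady-state argument, here is a
back-of-the-envelope sketch (made-up numbers, plain userspace C, not
kernel code): assuming the flusher gets around to each cgroup's dirty
data after roughly the same delay, nr_dirty ~= dirty_rate * delay, so
the dirty page ratio tracks the ratelimit ratio.

    /* Illustration only: the ratelimits and delay below are invented. */
    #include <stdio.h>

    int main(void)
    {
        double ratelimit_a = 100.0;   /* MB/s allowed for cgroup A */
        double ratelimit_b = 50.0;    /* MB/s allowed for cgroup B */
        double writeback_delay = 5.0; /* seconds until the flusher writes it */

        /* Steady state: dirty data ~= dirtying rate * time until writeout. */
        double nr_dirty_a = ratelimit_a * writeback_delay; /* ~500 MB */
        double nr_dirty_b = ratelimit_b * writeback_delay; /* ~250 MB */

        printf("dirty A:B = %.0f:%.0f MB (ratelimit ratio %.1f)\n",
               nr_dirty_a, nr_dirty_b, ratelimit_a / ratelimit_b);
        return 0;
    }
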
>> 2. the global 20% pool is free, unaccounted memory.  Ideally cgroups only
>>     use the amount of memory specified in their memory.limit_in_bytes.  The
>>     goal is to sell portions of a system.  A global resource like the 20% pool
>>     is an undesirable system-wide tax that's shared by jobs that may not even
>>     perform buffered writes.
>
> Right, it is the shortcoming.
>
>> 3. Setting aside 20% extra memory for system wide dirty buffers is a lot of
>>     memory.  This becomes a larger issue when the global dirty_ratio is
>>     higher than 20%.
>
> Yeah, the global pool scheme does mean that you'd better allocate at
> most 80% of memory to individual memory cgroups, otherwise it's possible
> for a tiny memcg doing dd writes to push dirty pages to the global LRU
> and *squeeze* the size of other memcgs.
>
> However I guess it should be mitigated by the fact that
>
> - we typically already reserve some space for the root memcg
>
> - a 20% dirty ratio is mostly overkill for large memory systems.
>  It's often enough to hold 10-30s worth of dirty data, which is
>  1-3GB for one 100MB/s disk. This is the reason vm.dirty_bytes was
>  introduced: some users want a <1% dirty ratio.
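
(Just to spell out the arithmetic behind those numbers -- a trivial
userspace sketch, not kernel code; the resulting byte count is what
one would write into /proc/sys/vm/dirty_bytes.)

    #include <stdio.h>

    int main(void)
    {
        unsigned long long disk_bw = 100ULL << 20; /* ~100 MB/s of writeback */
        unsigned int secs;

        /* 10-30s of dirty data at ~100MB/s works out to roughly 1-3GB. */
        for (secs = 10; secs <= 30; secs += 10)
            printf("%2us @ 100MB/s -> vm.dirty_bytes ~= %llu\n",
                   secs, disk_bw * secs);
        return 0;
    }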

Have you encountered situations where it's desirable to have more than
a 20% dirty ratio?  I imagine that if the dirty working set is larger
than 20% of memory, increasing the dirty ratio would avoid writing out
pages that are about to be rewritten.

Leaking dirty memory to a root global dirty pool is concerning.  I
suspect that under some conditions such pages may remain in root
indefinitely as clean pages after writeback.  I admit this may not be
the common case, but such leaks into root can allow low priority jobs
to consume the entire machine, denying service to higher priority
jobs.
