On Wed, Feb 8, 2012 at 1:31 AM, Wu Fengguang <fengguang.wu@xxxxxxxxx> wrote:
> On Tue, Feb 07, 2012 at 11:55:05PM -0800, Greg Thelen wrote:
>> On Fri, Feb 3, 2012 at 1:40 AM, Wu Fengguang <fengguang.wu@xxxxxxxxx> wrote:
>> > If dirty pages are moved out of the memcg into the 20% global dirty
>> > pool on page reclaim, the above OOM can be avoided. It does change the
>> > meaning of memory.limit_in_bytes in that the memcg tasks can now
>> > actually consume more pages (up to the shared global 20% dirty limit).
>>
>> This seems like an easy change, but unfortunately the global 20% pool
>> has some shortcomings for my needs:
>>
>> 1. The global 20% pool is not moderated. One cgroup can dominate it
>> and deny service to other cgroups.
>
> It is moderated by balance_dirty_pages() -- in terms of dirty ratelimit.
> And you have the freedom to control the bandwidth allocation with some
> async write I/O controller.
>
> Even though there is no direct control of dirty pages, we can roughly
> get it as a side effect of rate control. Given
>
>         ratelimit_cgroup_A = 2 * ratelimit_cgroup_B
>
> there will naturally be more dirty pages from cgroup A for the flusher
> to work on, and the dirty pages will be roughly balanced around
>
>         nr_dirty_cgroup_A = 2 * nr_dirty_cgroup_B
>
> when the writeout bandwidths for their dirty pages are equal.
>
>> 2. The global 20% pool is free, unaccounted memory. Ideally cgroups only
>> use the amount of memory specified in their memory.limit_in_bytes. The
>> goal is to sell portions of a system. A global resource like the 20% pool
>> is an undesirable system-wide tax shared by jobs that may not even
>> perform buffered writes.
>
> Right, that is the shortcoming.
>
>> 3. Setting aside 20% extra memory for system-wide dirty buffers is a lot
>> of memory. This becomes a larger issue when the global dirty_ratio is
>> higher than 20%.
>
> Yeah, the global pool scheme does mean that you'd better allocate at
> most 80% of memory to individual memory cgroups; otherwise it's possible
> for a tiny memcg doing dd writes to push dirty pages to the global LRU
> and *squeeze* the size of other memcgs.
>
> However I guess it should be mitigated by the fact that
>
> - we typically already reserve some space for the root memcg

Can you give more details on that? AFAIK, we don't treat the root cgroup
differently from other sub-cgroups, except that the root cgroup has no
limit.

In general, I don't like the idea of a shared pool in root for all the
dirty pages. Imagine a system where nothing runs under root and every
application runs within a sub-cgroup. It is easy to track and limit each
cgroup's memory usage, but not the pages being moved to root. We have
been experiencing difficulties tracking pages re-parented to root, and
this will make it even harder.

--Ying

> - 20% dirty ratio is mostly overkill for large memory systems.
>   It's often enough for them to hold 10-30s worth of dirty data, which
>   is 1-3GB for one 100MB/s disk. This is the reason vm.dirty_bytes was
>   introduced: someone wants to use a <1% dirty ratio.
>
> Thanks,
> Fengguang
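
[Editor's note: a minimal sketch, not part of the original thread, of the
proportionality argument Fengguang quotes above. It assumes the flusher
splits its writeback effort in proportion to each cgroup's share of dirty
pages and that aggregate dirtying equals aggregate writeback (the balanced
state enforced by balance_dirty_pages()); all names and numbers below are
illustrative, not kernel code. Starting from a deliberately unbalanced
state, the dirty counts settle near nr_dirty_A = 2 * nr_dirty_B when
ratelimit_A = 2 * ratelimit_B.]

#include <stdio.h>

int main(void)
{
	double ratelimit[2] = { 200.0, 100.0 };   /* dirtying rate, pages/tick: A = 2 * B */
	double nr_dirty[2]  = { 1000.0, 5000.0 }; /* deliberately unbalanced starting point */
	double flusher_bw   = 300.0;              /* total writeback rate = total dirtying rate */

	for (int tick = 0; tick < 1000; tick++) {
		double total = nr_dirty[0] + nr_dirty[1];

		for (int i = 0; i < 2; i++) {
			/* flusher effort spent on a cgroup ~ its share of dirty pages */
			double wb = total > 0.0 ? flusher_bw * nr_dirty[i] / total : 0.0;

			nr_dirty[i] += ratelimit[i] - wb;
		}
	}

	/* prints roughly: nr_dirty_A=4000 nr_dirty_B=2000 ratio=2.00 */
	printf("nr_dirty_A=%.0f nr_dirty_B=%.0f ratio=%.2f\n",
	       nr_dirty[0], nr_dirty[1], nr_dirty[0] / nr_dirty[1]);
	return 0;
}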