Sorry, but I disagree with this change.

mem_cgroup_soft_limit_check() is used for checking how much the current
usage exceeds soft_limit_in_bytes and for updating the softlimit tree
asynchronously, instead of doing it on every charge/uncharge. What if you
change soft_limit_in_bytes, but the numbers of charges and uncharges are
well balanced afterwards? The softlimit tree would not be updated for a
long time. And IIUC, the same applies to your threshold feature, right?

I think it would be better to:

- discard this change.
- in 4/4, rename mem_cgroup_soft_limit_check to mem_cgroup_event_check,
  and instead of adding a new STAT counter, do something like:

	if (mem_cgroup_event_check(mem)) {
		mem_cgroup_update_tree(mem, page);
		mem_cgroup_threshold(mem);
	}

Ah, yes. The current code doesn't call mem_cgroup_soft_limit_check() for
the root cgroup in the charge path, as you said in
http://marc.info/?l=linux-mm&m=126021128400687&w=2.
I think you can change that part as you want. I can change my patch
(http://marc.info/?l=linux-mm&m=126023467303178&w=2, it has not been sent
to Andrew yet anyway) to check mem_cgroup_is_root() in
mem_cgroup_update_tree().

Thanks,
Daisuke Nishimura.

On Sat, 12 Dec 2009 00:59:18 +0200
"Kirill A. Shutemov" <kirill@xxxxxxxxxxxxx> wrote:
> Instead of incrementing a counter on each page in/out and comparing it
> with a constant, we set the counter to the constant, decrement it on
> each page in/out and compare it with zero. We want to make the
> comparison as fast as possible. On many RISC systems (and probably not
> only RISC) comparing with zero is cheaper than comparing with a
> constant, since not every constant can be an immediate operand of a
> compare instruction.
>
> Also, I've renamed MEM_CGROUP_STAT_EVENTS to MEM_CGROUP_STAT_SOFTLIMIT,
> since it's really not a generic counter.
>
> Signed-off-by: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
> ---
>  mm/memcontrol.c |   19 ++++++++++++++-----
>  1 files changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 0ff65ed..c6081cc 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -69,8 +69,9 @@ enum mem_cgroup_stat_index {
>  	MEM_CGROUP_STAT_MAPPED_FILE,	/* # of pages charged as file rss */
>  	MEM_CGROUP_STAT_PGPGIN_COUNT,	/* # of pages paged in */
>  	MEM_CGROUP_STAT_PGPGOUT_COUNT,	/* # of pages paged out */
> -	MEM_CGROUP_STAT_EVENTS,	/* sum of pagein + pageout for internal use */
>  	MEM_CGROUP_STAT_SWAPOUT, /* # of pages, swapped out */
> +	MEM_CGROUP_STAT_SOFTLIMIT, /* decrements on each page in/out.
> +				      used by soft limit implementation */
>
>  	MEM_CGROUP_STAT_NSTATS,
>  };
> @@ -90,6 +91,13 @@ __mem_cgroup_stat_reset_safe(struct mem_cgroup_stat_cpu *stat,
>  	stat->count[idx] = 0;
>  }
>
> +static inline void
> +__mem_cgroup_stat_set(struct mem_cgroup_stat_cpu *stat,
> +		enum mem_cgroup_stat_index idx, s64 val)
> +{
> +	stat->count[idx] = val;
> +}
> +
>  static inline s64
>  __mem_cgroup_stat_read_local(struct mem_cgroup_stat_cpu *stat,
>  		enum mem_cgroup_stat_index idx)
> @@ -374,9 +382,10 @@ static bool mem_cgroup_soft_limit_check(struct mem_cgroup *mem)
>
>  	cpu = get_cpu();
>  	cpustat = &mem->stat.cpustat[cpu];
> -	val = __mem_cgroup_stat_read_local(cpustat, MEM_CGROUP_STAT_EVENTS);
> -	if (unlikely(val > SOFTLIMIT_EVENTS_THRESH)) {
> -		__mem_cgroup_stat_reset_safe(cpustat, MEM_CGROUP_STAT_EVENTS);
> +	val = __mem_cgroup_stat_read_local(cpustat, MEM_CGROUP_STAT_SOFTLIMIT);
> +	if (unlikely(val < 0)) {
> +		__mem_cgroup_stat_set(cpustat, MEM_CGROUP_STAT_SOFTLIMIT,
> +				SOFTLIMIT_EVENTS_THRESH);
>  		ret = true;
>  	}
>  	put_cpu();
> @@ -509,7 +518,7 @@ static void mem_cgroup_charge_statistics(struct mem_cgroup *mem,
>  	else
>  		__mem_cgroup_stat_add_safe(cpustat,
>  				MEM_CGROUP_STAT_PGPGOUT_COUNT, 1);
> -	__mem_cgroup_stat_add_safe(cpustat, MEM_CGROUP_STAT_EVENTS, 1);
> +	__mem_cgroup_stat_add_safe(cpustat, MEM_CGROUP_STAT_SOFTLIMIT, -1);
>  	put_cpu();
>  }
>
> --
> 1.6.5.3