On Sun, 27 Dec 2009 04:09:01 +0200, "Kirill A. Shutemov" <kirill@xxxxxxxxxxxxx> wrote:
> Instead of incrementing a counter on each page in/out and comparing it
> with a constant, we set the counter to the constant, decrement it on
> each page in/out and compare it with zero. We want to make the
> comparison as fast as possible. On many RISC systems (and probably not
> only RISC) comparing with zero is more efficient than comparing with a
> constant, since not every constant can be an immediate operand of a
> compare instruction.
>
> Also, I've renamed MEM_CGROUP_STAT_EVENTS to MEM_CGROUP_STAT_SOFTLIMIT,
> since it is really not a generic counter.
>
> Signed-off-by: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
> ---
>  mm/memcontrol.c |   19 ++++++++++++++-----
>  1 files changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 1d71cb4..36eb7af 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -69,8 +69,9 @@ enum mem_cgroup_stat_index {
>  	MEM_CGROUP_STAT_FILE_MAPPED,	/* # of pages charged as file rss */
>  	MEM_CGROUP_STAT_PGPGIN_COUNT,	/* # of pages paged in */
>  	MEM_CGROUP_STAT_PGPGOUT_COUNT,	/* # of pages paged out */
> -	MEM_CGROUP_STAT_EVENTS,	/* sum of pagein + pageout for internal use */
>  	MEM_CGROUP_STAT_SWAPOUT,	/* # of pages, swapped out */
> +	MEM_CGROUP_STAT_SOFTLIMIT,	/* decrements on each page in/out.
> +					   used by soft limit implementation */
>
>  	MEM_CGROUP_STAT_NSTATS,
>  };
> @@ -90,6 +91,13 @@ __mem_cgroup_stat_reset_safe(struct mem_cgroup_stat_cpu *stat,
>  	stat->count[idx] = 0;
>  }
>
> +static inline void
> +__mem_cgroup_stat_set(struct mem_cgroup_stat_cpu *stat,
> +				enum mem_cgroup_stat_index idx, s64 val)
> +{
> +	stat->count[idx] = val;
> +}
> +
I think it would be better to name it __mem_cgroup_stat_set_safe.
And could you remove the definition of __mem_cgroup_stat_reset?

Thanks,
Daisuke Nishimura.

>  static inline s64
>  __mem_cgroup_stat_read_local(struct mem_cgroup_stat_cpu *stat,
>  				enum mem_cgroup_stat_index idx)
> @@ -380,9 +388,10 @@ static bool mem_cgroup_soft_limit_check(struct mem_cgroup *mem)
>
>  	cpu = get_cpu();
>  	cpustat = &mem->stat.cpustat[cpu];
> -	val = __mem_cgroup_stat_read_local(cpustat, MEM_CGROUP_STAT_EVENTS);
> -	if (unlikely(val > SOFTLIMIT_EVENTS_THRESH)) {
> -		__mem_cgroup_stat_reset_safe(cpustat, MEM_CGROUP_STAT_EVENTS);
> +	val = __mem_cgroup_stat_read_local(cpustat, MEM_CGROUP_STAT_SOFTLIMIT);
> +	if (unlikely(val < 0)) {
> +		__mem_cgroup_stat_set(cpustat, MEM_CGROUP_STAT_SOFTLIMIT,
> +				SOFTLIMIT_EVENTS_THRESH);
>  		ret = true;
>  	}
>  	put_cpu();
> @@ -515,7 +524,7 @@ static void mem_cgroup_charge_statistics(struct mem_cgroup *mem,
>  	else
>  		__mem_cgroup_stat_add_safe(cpustat,
>  				MEM_CGROUP_STAT_PGPGOUT_COUNT, 1);
> -	__mem_cgroup_stat_add_safe(cpustat, MEM_CGROUP_STAT_EVENTS, 1);
> +	__mem_cgroup_stat_add_safe(cpustat, MEM_CGROUP_STAT_SOFTLIMIT, -1);
>  	put_cpu();
>  }
>
> --
> 1.6.5.7
_______________________________________________
Containers mailing list
Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/containers
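
[Editor's note: for readers skimming the thread, below is a minimal
userspace sketch of the pattern the patch switches to: the counter is
preloaded with the threshold and counted down, so the hot-path check is
a compare against zero rather than against an arbitrary constant.  The
names here (struct event_counter, counter_event, EVENTS_THRESH) are
illustrative only and are not the memcg per-CPU statistics code.]

#include <stdbool.h>
#include <stdio.h>

#define EVENTS_THRESH 1000	/* stand-in for SOFTLIMIT_EVENTS_THRESH */

/* Illustrative counter; not the kernel's struct mem_cgroup_stat_cpu. */
struct event_counter {
	long long count;
};

/*
 * Count-down variant: the counter starts at the threshold, is
 * decremented on every event, and the hot path only tests the sign of
 * the result.  A compare with zero is cheaper to encode on many
 * architectures than a compare with a constant that may not fit in an
 * immediate operand.
 */
static bool counter_event(struct event_counter *c)
{
	if (--c->count < 0) {
		c->count = EVENTS_THRESH;	/* re-arm for the next window */
		return true;			/* threshold crossed */
	}
	return false;
}

int main(void)
{
	struct event_counter c = { .count = EVENTS_THRESH };
	long long fired = 0;

	for (long long i = 0; i < 5000; i++)
		if (counter_event(&c))
			fired++;

	printf("threshold crossed %lld times\n", fired);
	return 0;
}

[In the patch itself the same idea is split across the existing per-CPU
helpers: mem_cgroup_charge_statistics() adds -1 to
MEM_CGROUP_STAT_SOFTLIMIT, and mem_cgroup_soft_limit_check() tests
val < 0 and re-arms the counter with SOFTLIMIT_EVENTS_THRESH via
__mem_cgroup_stat_set().]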