Re: [PATCH 2/2] memcg: share event counter rather than duplicate

On Fri, Feb 12, 2010 at 10:19 AM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> On Fri, 12 Feb 2010 10:07:25 +0200
> "Kirill A. Shutemov" <kirill@xxxxxxxxxxxxx> wrote:
>
>> On Fri, Feb 12, 2010 at 8:48 AM, KAMEZAWA Hiroyuki
>> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
>> > Memcg has 2 event counters which count "the same" events; only their
>> > usages differ from each other. This patch reduces them to one counter.
>> >
>> > This patch's logic uses an "only increment, no reset" new_counter and a mask
>> > for each check. The softlimit check was done per 1000 events, so a similar
>> > check can be done by !(new_counter & 0x3ff), i.e. once per 1024 events. The
>> > threshold check was done per 100 events, so a similar check can be done by
>> > !(new_counter & 0x7f), i.e. once per 128 events.
>>
>> IIUC, with this change we have to check the counter after each update,
>> since we check for an exact value.
>
> Yes.
>> So we have to move checks to mem_cgroup_charge_statistics() or
>> call them after each statistics charging. I'm not sure how it affects
>> performance.
>>
>
> My patch 1/2 does it.
>
> But hmm, move-task does counter updates in an asynchronous manner, so there
> is a bug. I'll add a check in the next version.
>
> Maybe calling update_tree and threshold_check at the end of move_task is
> better. Does the thresholds user take care of the batched-move manner of
> task_move? Should we check one by one?

No. mem_cgroup_threshold() at mem_cgroup_move_task() is enough.

But... is task moving a critical path? If not, it's probably cleaner to check
everything in mem_cgroup_charge_statistics().
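For anyone skimming the thread, the "only increment, no reset" mask trick the
patch relies on can be sketched in plain C like this. This is a userspace toy,
not the kernel code; the per-cpu counter is replaced by a plain variable and
the names just mirror the patch:

```c
/* Toy model of the patch's event-counter scheme: one counter that only
 * increments, with cheap mask checks at two different periods. */
#include <stdbool.h>

#define SOFTLIMIT_EVENTS_THRESH  (0x3ff) /* fires once per 1024 events */
#define THRESHOLDS_EVENTS_THRESH (0x7f)  /* fires once per 128 events */

static long events; /* stands in for count[MEM_CGROUP_EVENTS] */

static bool soft_limit_check(void)
{
	/* true only when the low 10 bits are all zero */
	return !(events & SOFTLIMIT_EVENTS_THRESH);
}

static bool threshold_check(void)
{
	/* true only when the low 7 bits are all zero */
	return !(events & THRESHOLDS_EVENTS_THRESH);
}
```

This also illustrates Kirill's point above: since the check matches an exact
bit pattern rather than "counter went negative", it must run after every
increment or an event can be missed entirely.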

> (Maybe there will be another trouble when we handle hugepages...)

Yes, hugepages support requires more testing.

> Thanks,
> -Kame
>
>
>> > Cc: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
>> > Cc: Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx>
>> > Cc: Daisuke Nishimura <nishimura@xxxxxxxxxxxxxxxxx>
>> > Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
>> > ---
>> >  mm/memcontrol.c |   36 ++++++++++++------------------------
>> >  1 file changed, 12 insertions(+), 24 deletions(-)
>> >
>> > Index: mmotm-2.6.33-Feb10/mm/memcontrol.c
>> > ===================================================================
>> > --- mmotm-2.6.33-Feb10.orig/mm/memcontrol.c
>> > +++ mmotm-2.6.33-Feb10/mm/memcontrol.c
>> > @@ -63,8 +63,8 @@ static int really_do_swap_account __init
>> >  #define do_swap_account                (0)
>> >  #endif
>> >
>> > -#define SOFTLIMIT_EVENTS_THRESH (1000)
>> > -#define THRESHOLDS_EVENTS_THRESH (100)
>> > +#define SOFTLIMIT_EVENTS_THRESH (0x3ff) /* once in 1024 */
>> > +#define THRESHOLDS_EVENTS_THRESH (0x7f) /* once in 128 */
>> >
>> >  /*
>> >  * Statistics for memory cgroup.
>> > @@ -79,10 +79,7 @@ enum mem_cgroup_stat_index {
>> >        MEM_CGROUP_STAT_PGPGIN_COUNT,   /* # of pages paged in */
>> >        MEM_CGROUP_STAT_PGPGOUT_COUNT,  /* # of pages paged out */
>> >        MEM_CGROUP_STAT_SWAPOUT, /* # of pages, swapped out */
>> > -       MEM_CGROUP_STAT_SOFTLIMIT, /* decrements on each page in/out.
>> > -                                       used by soft limit implementation */
>> > -       MEM_CGROUP_STAT_THRESHOLDS, /* decrements on each page in/out.
>> > -                                       used by threshold implementation */
>> > +       MEM_CGROUP_EVENTS,      /* incremented by 1 at pagein/pageout */
>> >
>> >        MEM_CGROUP_STAT_NSTATS,
>> >  };
>> > @@ -394,16 +391,12 @@ mem_cgroup_remove_exceeded(struct mem_cg
>> >
>> >  static bool mem_cgroup_soft_limit_check(struct mem_cgroup *mem)
>> >  {
>> > -       bool ret = false;
>> >        s64 val;
>> >
>> > -       val = this_cpu_read(mem->stat->count[MEM_CGROUP_STAT_SOFTLIMIT]);
>> > -       if (unlikely(val < 0)) {
>> > -               this_cpu_write(mem->stat->count[MEM_CGROUP_STAT_SOFTLIMIT],
>> > -                               SOFTLIMIT_EVENTS_THRESH);
>> > -               ret = true;
>> > -       }
>> > -       return ret;
>> > +       val = this_cpu_read(mem->stat->count[MEM_CGROUP_EVENTS]);
>> > +       if (unlikely(!(val & SOFTLIMIT_EVENTS_THRESH)))
>> > +               return true;
>> > +       return false;
>> >  }
>> >
>> >  static void mem_cgroup_update_tree(struct mem_cgroup *mem, struct page *page)
>> > @@ -542,8 +535,7 @@ static void mem_cgroup_charge_statistics
>> >                __this_cpu_inc(mem->stat->count[MEM_CGROUP_STAT_PGPGIN_COUNT]);
>> >        else
>> >                __this_cpu_inc(mem->stat->count[MEM_CGROUP_STAT_PGPGOUT_COUNT]);
>> > -       __this_cpu_dec(mem->stat->count[MEM_CGROUP_STAT_SOFTLIMIT]);
>> > -       __this_cpu_dec(mem->stat->count[MEM_CGROUP_STAT_THRESHOLDS]);
>> > +       __this_cpu_inc(mem->stat->count[MEM_CGROUP_EVENTS]);
>> >
>> >        preempt_enable();
>> >  }
>> > @@ -3211,16 +3203,12 @@ static int mem_cgroup_swappiness_write(s
>> >
>> >  static bool mem_cgroup_threshold_check(struct mem_cgroup *mem)
>> >  {
>> > -       bool ret = false;
>> >        s64 val;
>> >
>> > -       val = this_cpu_read(mem->stat->count[MEM_CGROUP_STAT_THRESHOLDS]);
>> > -       if (unlikely(val < 0)) {
>> > -               this_cpu_write(mem->stat->count[MEM_CGROUP_STAT_THRESHOLDS],
>> > -                               THRESHOLDS_EVENTS_THRESH);
>> > -               ret = true;
>> > -       }
>> > -       return ret;
>> > +       val = this_cpu_read(mem->stat->count[MEM_CGROUP_EVENTS]);
>> > +       if (unlikely(!(val & THRESHOLDS_EVENTS_THRESH)))
>> > +               return true;
>> > +       return false;
>> >  }
>> >
>> >  static void __mem_cgroup_threshold(struct mem_cgroup *memcg, bool swap)
>> >
>> >
>>
>
>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxxx  For more info on Linux MM,
see: http://www.linux-mm.org/ .