Re: [PATCH V2] memcg: add mlock statistic in memory.stat

On Wed, Apr 18, 2012 at 4:33 PM, Andrew Morton
<akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Wed, 18 Apr 2012 11:21:55 -0700
> Ying Han <yinghan@xxxxxxxxxx> wrote:
>
>> We already have the nr_mlock stat system-wide, in both meminfo and vmstat.
>> This patch adds an mlock field to the per-memcg memory stats. The stat
>> enhances the metrics exported by memcg, since the unevictable LRU contains
>> more than just mlock()'d pages (SHM_LOCK'd pages, for example).
>>
>> Why count mlock'd pages at all, when they are unevictable and we cannot do
>> much about them anyway?
>>
>> That is true. The mlock stat I am proposing is mainly useful for system
>> admins and kernel developers to understand the system workload. The same
>> information would be helpful to include in the OOM log as well. Many times
>> in the past we have needed to read the mlock stat from a per-container
>> meminfo for various reasons. After all, we already have the ability to read
>> mlock from meminfo, and this patch fills in the same info for memcg.
>>
>>
>> ...
>>
>>  static inline int is_mlocked_vma(struct vm_area_struct *vma, struct page *page)
>>  {
>> +     bool locked;
>> +     unsigned long flags;
>> +
>>       VM_BUG_ON(PageLRU(page));
>>
>>       if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
>>               return 0;
>>
>> +     mem_cgroup_begin_update_page_stat(page, &locked, &flags);
>>       if (!TestSetPageMlocked(page)) {
>>               inc_zone_page_state(page, NR_MLOCK);
>> +             mem_cgroup_inc_page_stat(page, MEMCG_NR_MLOCK);
>>               count_vm_event(UNEVICTABLE_PGMLOCKED);
>>       }
>> +     mem_cgroup_end_update_page_stat(page, &locked, &flags);
>> +
>>       return 1;
>>  }
>
> Unrelated to this patch: is_mlocked_vma() is misnamed.  A function with
> that name should be a bool-returning test which has no side-effects.

That is true. Maybe a separate patch to fix that up :)

>
>>
>> ...
>>
>>  static void __free_pages_ok(struct page *page, unsigned int order)
>>  {
>>       unsigned long flags;
>> -     int wasMlocked = __TestClearPageMlocked(page);
>> +     bool locked;
>>
>>       if (!free_pages_prepare(page, order))
>>               return;
>>
>>       local_irq_save(flags);
>> -     if (unlikely(wasMlocked))
>> +     mem_cgroup_begin_update_page_stat(page, &locked, &flags);
>
> hm, what's going on here.  The page now has a zero refcount and is to
> be returned to the buddy.  But mem_cgroup_begin_update_page_stat()
> assumes that the page still belongs to a memcg.  I'd have thought that
> any page_cgroup backreferences would have been torn down by now?

True, I missed that in the first place. This will trigger a GPF easily
if the memcg is destroyed after its charge drops to 0.

The problem is the time window between mem_cgroup_uncharge_page() and
free_hot_cold_page(), the latter of which calls
__TestClearPageMlocked(page).

I am wondering whether we can move the __TestClearPageMlocked(page)
earlier, before mem_cgroup_uncharge_page(). Is there a particular
reason why the Mlocked bit has to be cleared at the last moment?
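A rough sketch of that reordering, for concreteness (kernel-context pseudocode, not compile-tested; the helper name and exact stat bookkeeping are assumptions, not part of the patch):

```c
/*
 * Hypothetical helper: latch and clear PG_mlocked while the
 * page->page_cgroup binding is still live, i.e. before the final
 * uncharge, so the memcg stat can be updated safely.
 * __free_pages_ok()/free_hot_cold_page() would then no longer
 * touch the bit themselves.
 */
static void uncharge_clear_mlock(struct page *page)	/* made-up name */
{
	if (__TestClearPageMlocked(page)) {
		dec_zone_page_state(page, NR_MLOCK);
		mem_cgroup_dec_page_stat(page, MEMCG_NR_MLOCK);
	}
}
```

Whether it is safe for PG_mlocked to already be clear during the remaining window before the page reaches the buddy allocator is exactly the open question above.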

--Ying
>
>> +     if (unlikely(__TestClearPageMlocked(page)))
>>               free_page_mlock(page);
>
> And if the page _is_ still accessible via cgroup lookup, the use of the
> nonatomic RMW is dangerous.
>
>>       __count_vm_events(PGFREE, 1 << order);
>>       free_one_page(page_zone(page), page, order,
>>                                       get_pageblock_migratetype(page));
>> +     mem_cgroup_end_update_page_stat(page, &locked, &flags);
>>       local_irq_restore(flags);
>>  }
>>
>> @@ -1250,7 +1256,7 @@ void free_hot_cold_page(struct page *page, int cold)
>
> The same comments apply in free_hot_cold_page().
>
>>       struct per_cpu_pages *pcp;
>>       unsigned long flags;
>>       int migratetype;
>> -     int wasMlocked = __TestClearPageMlocked(page);
>> +     bool locked;
>>
>>       if (!free_pages_prepare(page, 0))
>>               return;
>>
>> ...
>>
>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .