>> This means they guarantee that even if they are preempted, the vm
>> counter won't be modified incorrectly, because the counter is
>> page-related (e.g., a new anon page added) and they exclusively hold
>> the pte lock.
>
> But there are multiple pte locks for numerous pages. Another process
> could modify the counter because the pte lock for a different page was
> available, which would cause counter corruption.
>
>> So, as you conclude in the other mail, __mod_zone_page_state()
>> couldn't be used in mlocked_vma_newpage(), then what qualifies other
>> call sites for using it, in the same situation?

Thanks, now everything is clear. I've renewed the patch; would you
please review it? Thanks!
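For anyone following along, here is a minimal sketch of the pattern the
patch relies on (illustration only, not part of the patch; "ptl" and
"page" are assumed stand-ins for the caller's pte lock and new page):

	spin_lock(ptl);		/* taking a spinlock disables preemption */
	/*
	 * The irq-unsafe __mod_zone_page_state() is fine here: NR_MLOCK
	 * is never modified from interrupt context, and with preemption
	 * disabled the per-cpu counter update cannot be interleaved by
	 * another task on this CPU.
	 */
	__mod_zone_page_state(page_zone(page), NR_MLOCK,
			      hpage_nr_pages(page));
	spin_unlock(ptl);	/* preemption may resume */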
---8<---
mm: use the light version __mod_zone_page_state in mlocked_vma_newpage()

mlocked_vma_newpage() is called with the pte lock held (a spinlock),
which implies that preemption is disabled, and the vm stat counter is
not modified from interrupt context, so we need not use the irq-safe
mod_zone_page_state() here; the light-weight __mod_zone_page_state()
is OK.

This patch also documents __mod_zone_page_state() and some of its
callsites. The comment above __mod_zone_page_state() is from Hugh
Dickins, and acked by Christoph. Most credits go to Hugh and Christoph
for clarifying the usage of __mod_zone_page_state().

Suggested-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
Signed-off-by: Jianyu Zhan <nasa4836@xxxxxxxxx>
---
 mm/internal.h | 7 ++++++-
 mm/rmap.c     | 9 +++++++++
 mm/vmstat.c   | 4 +++-
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 07b6736..53d439e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -196,7 +196,12 @@ static inline int mlocked_vma_newpage(struct vm_area_struct *vma,
 		return 0;
 
 	if (!TestSetPageMlocked(page)) {
-		mod_zone_page_state(page_zone(page), NR_MLOCK,
+		/*
+		 * We use the irq-unsafe __mod_zone_page_state() because this
+		 * counter is not modified from interrupt context, and the pte
+		 * lock is held (a spinlock), which implies preemption disabled.
+		 */
+		__mod_zone_page_state(page_zone(page), NR_MLOCK,
 				    hpage_nr_pages(page));
 		count_vm_event(UNEVICTABLE_PGMLOCKED);
 	}
diff --git a/mm/rmap.c b/mm/rmap.c
index 9c3e773..2fa4375 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -986,6 +986,11 @@ void do_page_add_anon_rmap(struct page *page,
 {
 	int first = atomic_inc_and_test(&page->_mapcount);
 	if (first) {
+		/*
+		 * We use the irq-unsafe __{inc|mod}_zone_page_state because
+		 * these counters are not modified in interrupt context, and
+		 * the pte lock (a spinlock) is held: preemption is disabled.
+		 */
 		if (PageTransHuge(page))
 			__inc_zone_page_state(page,
 					      NR_ANON_TRANSPARENT_HUGEPAGES);
@@ -1077,6 +1082,10 @@ void page_remove_rmap(struct page *page)
 	/*
 	 * Hugepages are not counted in NR_ANON_PAGES nor NR_FILE_MAPPED
 	 * and not charged by memcg for now.
+	 *
+	 * We use the irq-unsafe __{inc|mod}_zone_page_state because
+	 * these counters are not modified in interrupt context, and
+	 * the pte lock (a spinlock) is held: preemption is disabled.
 	 */
 	if (unlikely(PageHuge(page)))
 		goto out;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 302dd07..704928e 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -207,7 +207,9 @@ void set_pgdat_percpu_threshold(pg_data_t *pgdat,
 }
 
 /*
- * For use when we know that interrupts are disabled.
+ * For use when we know that interrupts are disabled,
+ * or when we know that preemption is disabled and that
+ * particular counter cannot be updated from interrupt context.
  */
 void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
 				int delta)
-- 
2.0.0-rc1