On Mon, May 18, 2015 at 04:32:22PM +0200, Vlastimil Babka wrote:
> On 04/23/2015 11:03 PM, Kirill A. Shutemov wrote:
> >We're going to allow mapping of individual 4k pages of a THP compound
> >page, and we need a cheap way to find out how many times the compound
> >page is mapped with PMD -- compound_mapcount() does this.
> >
> >We use the same approach as with the compound page destructor and
> >compound order: use space in the first tail page, ->mapping this time.
> >
> >page_mapcount() counts both PTE and PMD mappings of the page.
> >
> >Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> >Tested-by: Sasha Levin <sasha.levin@xxxxxxxxxx>
> >---
> > include/linux/mm.h       | 25 ++++++++++++--
> > include/linux/mm_types.h |  1 +
> > include/linux/rmap.h     |  4 +--
> > mm/debug.c               |  5 ++-
> > mm/huge_memory.c         |  2 +-
> > mm/hugetlb.c             |  4 +--
> > mm/memory.c              |  2 +-
> > mm/migrate.c             |  2 +-
> > mm/page_alloc.c          | 14 ++++++--
> > mm/rmap.c                | 87 +++++++++++++++++++++++++++++++++++++-----------
> > 10 files changed, 114 insertions(+), 32 deletions(-)
> >
> >diff --git a/include/linux/mm.h b/include/linux/mm.h
> >index dad667d99304..33cb3aa647a6 100644
> >--- a/include/linux/mm.h
> >+++ b/include/linux/mm.h
> >@@ -393,6 +393,19 @@ static inline int is_vmalloc_or_module_addr(const void *x)
> >
> > extern void kvfree(const void *addr);
> >
> >+static inline atomic_t *compound_mapcount_ptr(struct page *page)
> >+{
> >+        return &page[1].compound_mapcount;
> >+}
> >+
> >+static inline int compound_mapcount(struct page *page)
> >+{
> >+        if (!PageCompound(page))
> >+                return 0;
> >+        page = compound_head(page);
> >+        return atomic_read(compound_mapcount_ptr(page)) + 1;
> >+}
> >+
> > /*
> >  * The atomic page->_mapcount, starts from -1: so that transitions
> >  * both from it and to it can be tracked, using atomic_inc_and_test
>
> What's not shown here is the implementation of page_mapcount_reset() that's
> unchanged... is that correct from all callers?

Looks like page_mapcount_reset() is mostly used to deal with PageBuddy() and
such. We don't have this kind of trick for compound_mapcount.

> >@@ -405,8 +418,16 @@ static inline void page_mapcount_reset(struct page *page)
> >
> > static inline int page_mapcount(struct page *page)
> > {
> >+        int ret;
> >         VM_BUG_ON_PAGE(PageSlab(page), page);
> >-        return atomic_read(&page->_mapcount) + 1;
> >+        ret = atomic_read(&page->_mapcount) + 1;
> >+        /*
> >+         * Positive compound_mapcount() offsets ->_mapcount in every page by
> >+         * one. Let's substract it here.
> >+         */
>
> This could use some more detailed explanation, or at least pointers to the
> relevant rmap functions. Also in commit message.

Okay. Will do.

>
> >+        if (compound_mapcount(page))
> >+                ret += compound_mapcount(page) - 1;
>
> This looks like it could uselessly duplicate-inline the code for
> compound_mapcount(). It has atomics and smp_rmb() so I'm not sure if the
> compiler can just "squash it".

Good point. I'll rework this.

>
> On the other hand, a simple atomic read that was page_mapcount() has turned
> into multiple atomic reads and flag checks. What about the stability of the
> whole result? Are all callers ok? (maybe a later patch deals with it).

Urghh.. I'll look into this.

--
 Kirill A. Shutemov
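
P.S. For the duplicate-inline issue above, one possible rework is simply to
cache the compound_mapcount() result in a local so its flag checks and atomic
read are evaluated only once. An untested sketch, for illustration only (not
the actual follow-up patch):

/* Untested sketch: evaluate compound_mapcount() only once. */
static inline int page_mapcount(struct page *page)
{
        int compound, ret;

        VM_BUG_ON_PAGE(PageSlab(page), page);

        /*
         * ->_mapcount starts at -1, so +1 gives the number of PTE
         * mappings (still including the offset handled below, if any).
         */
        ret = atomic_read(&page->_mapcount) + 1;

        compound = compound_mapcount(page);
        if (compound) {
                /*
                 * A positive compound_mapcount() offsets ->_mapcount in
                 * every subpage by one; drop that extra one here.
                 */
                ret += compound - 1;
        }
        return ret;
}

This only deals with the double evaluation; the question about the stability
of the combined result across the separate atomic reads still needs a
separate look.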