On Fri, Feb 15, 2019 at 01:29:44AM +0300, Kirill A. Shutemov wrote:
> On Thu, Feb 14, 2019 at 04:30:04PM +0300, Kirill A. Shutemov wrote:
> > On Tue, Feb 12, 2019 at 10:34:54AM -0800, Matthew Wilcox wrote:
> > > Transparent Huge Pages are currently stored in i_pages as pointers to
> > > consecutive subpages. This patch changes that to storing consecutive
> > > pointers to the head page in preparation for storing huge pages more
> > > efficiently in i_pages.
> > >
> > > Large parts of this are "inspired" by Kirill's patch
> > > https://lore.kernel.org/lkml/20170126115819.58875-2-kirill.shutemov@xxxxxxxxxxxxxxx/
> > >
> > > Signed-off-by: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> >
> > I believe I found a few missing pieces:
> >
> >  - page_cache_delete_batch() will blow up on
> >
> > 	VM_BUG_ON_PAGE(page->index + HPAGE_PMD_NR - tail_pages
> > 			!= pvec->pages[i]->index, page);
> >
> >  - migrate_page_move_mapping() has to be converted too.
>
> Other missing pieces are memfd_wait_for_pins() and memfd_tag_pins().
> We need to call page_mapcount() for tail pages there.

@@ -39,6 +39,7 @@ static void memfd_tag_pins(struct xa_state *xas)
 	xas_for_each(xas, page, ULONG_MAX) {
 		if (xa_is_value(page))
 			continue;
+		page = find_subpage(page, xas.xa_index);
 		if (page_count(page) - page_mapcount(page) > 1)
 			xas_set_mark(xas, MEMFD_TAG_PINNED);
 
@@ -88,6 +89,7 @@ static int memfd_wait_for_pins(struct address_space *mapping)
 			bool clear = true;
 			if (xa_is_value(page))
 				continue;
+			page = find_subpage(page, xas.xa_index);
 			if (page_count(page) - page_mapcount(page) != 1) {
 				/*
 				 * On the last scan, we clean up all those tags
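
For context on why the hunks above resolve the subpage first: once i_pages
holds pointers to the head page, the lookup in those loops returns the head
page even for a tail index, so the page_count()/page_mapcount() pin check
would be done against the wrong struct page. A rough sketch of what
find_subpage() is assumed to do in this series (the actual helper may differ
in detail):

/*
 * Sketch only: map a head-page entry back to the subpage for @offset.
 * Assumes page-cache compound pages are naturally aligned in the file,
 * so the low bits of @offset select the subpage.
 */
static inline struct page *find_subpage(struct page *page, pgoff_t offset)
{
	unsigned long mask;

	/* hugetlbfs stores the head page at every index */
	if (PageHuge(page))
		return page;

	VM_BUG_ON_PAGE(PageTail(page), page);

	mask = (1UL << compound_order(page)) - 1;
	return page + (offset & mask);
}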