"Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx> writes:

> Naive approach: on mapping/unmapping the page as compound we update
> ->_mapcount on each 4k page. That's not efficient, but it's not obvious
> how we can optimize this. We can look into optimization later.
>
> PG_double_map optimization doesn't work for file pages since the
> lifecycle of file pages is different compared to anon pages: a file
> page can be mapped again at any time.

Can you explain this more? We added PG_double_map so that we can keep
page_remove_rmap() simpler. So if it isn't a compound page we can still
do

	if (!atomic_add_negative(-1, &page->_mapcount))

I am trying to understand why we can't use that with file pages.

-aneesh