On Wed, Feb 14, 2018 at 05:51:41AM -0800, Matthew Wilcox wrote:
> On Fri, Feb 09, 2018 at 07:26:09AM +0300, Kirill A. Shutemov wrote:
> > On Thu, Feb 08, 2018 at 01:37:43PM -0800, Matthew Wilcox wrote:
> > > On Thu, Feb 08, 2018 at 12:21:00PM -0800, Matthew Wilcox wrote:
> > > > Now that I think about it, though, perhaps the simplest solution is
> > > > not to worry about checking whether _mapcount has saturated, and
> > > > instead, when adding a new mmap, check whether this task already
> > > > has it mapped 10 times.  If so, refuse the mapping.
> > >
> > > That turns out to be quite easy.  Comments on this approach?
> >
> > This *may* break some remap_file_pages() users.
>
> We have some?!  ;-)

I can't prove otherwise :)

> I don't understand the use case where they want to map the same page of
> a file multiple times into the same process.  I mean, yes, of course,
> they might ask for it, but I don't understand why they would.  Do you
> have any insight here?

Some form of data deduplication? Like having repeating chunks stored once
on persistent storage and in the page cache, but placed into memory in
"uncompressed" form.

It's not limited to remap_file_pages(). Plain mmap() can be used for this
too.

-- 
 Kirill A. Shutemov
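
P.S. A minimal userspace sketch of the dedup pattern described above, in
case it helps: the same page of a file is mapped at two different virtual
addresses in one process with plain mmap(), so the chunk is stored once on
disk and in the page cache but shows up "uncompressed" in several places
in the address space.  The file name "chunks.dat" and the single-chunk
layout are made up for illustration.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical file holding deduplicated chunks. */
	int fd = open("chunks.dat", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	long pagesize = sysconf(_SC_PAGESIZE);

	/*
	 * Map the same file page (offset 0) at two different virtual
	 * addresses.  Both mappings are backed by the same page cache
	 * page, which is exactly the situation that bumps _mapcount
	 * once per mapping within a single task.
	 */
	char *a = mmap(NULL, pagesize, PROT_READ, MAP_SHARED, fd, 0);
	char *b = mmap(NULL, pagesize, PROT_READ, MAP_SHARED, fd, 0);
	if (a == MAP_FAILED || b == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Same contents, different addresses, one physical page. */
	printf("a=%p b=%p identical=%d\n", (void *)a, (void *)b,
	       memcmp(a, b, pagesize) == 0);

	munmap(a, pagesize);
	munmap(b, pagesize);
	close(fd);
	return 0;
}

Refusing the 11th such mapping would make a loader built around this
pattern fail where it works today.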