On Tue 19-06-18 11:11:48, John Hubbard wrote:
> On 06/19/2018 03:41 AM, Jan Kara wrote:
> > On Tue 19-06-18 02:02:55, Matthew Wilcox wrote:
> >> On Tue, Jun 19, 2018 at 10:29:49AM +0200, Jan Kara wrote:
> >>> And for the record, the problem with page cache pages is not only that
> >>> try_to_unmap() may unmap them. It is also that page_mkclean() can
> >>> write-protect them. And once PTEs are write-protected, filesystems may
> >>> end up doing bad things if DMA then modifies the page contents (DIF/DIX
> >>> failures, data corruption, oopses). As such I don't think that solutions
> >>> based on the page reference count have a big chance of dealing with the
> >>> problem.
> >>>
> >>> And your page flag approach would also need to take page_mkclean() into
> >>> account. And there the issue is that until the flag is cleared (i.e., we
> >>> are sure there are no writers using references from GUP) you cannot
> >>> write back the page safely, which does not work well with your idea of
> >>> clearing the flag only once the page is evicted from the page cache
> >>> (hint, a page cache page cannot get evicted until it is written back).
> >>>
> >>> So as sad as it is, I don't see an easy solution here.
> >>
> >> Pages which are "got" don't need to be on the LRU list. They'll be
> >> marked dirty when they're put, so we can use page->lru for fun things
> >> like a "got" refcount. If we use bit 1 of page->lru for PageGot, we've
> >> got 30/62 bits in the first word and a full 64 bits in the second word.
> >
> > Interesting idea! It would destroy the aging information for the page,
> > but for pages accessed through GUP references that is a very vague
> > concept anyway. It might be a bit tricky as pulling a page out of the
> > LRU requires the page lock, but I don't think that's a huge problem.
> > And page cache pages not on the LRU exist even currently when they are
> > under reclaim, so hopefully there won't be too many places in MM that
> > would need fixing up for such pages.
>
> This sounds promising, I'll try it out!
>
> > I'm also still pondering the idea of inserting a "virtual" VMA into the
> > vma interval tree in the inode - as the GUP references are IMHO closest
> > to an mlocked mapping - and that would achieve all the functionality we
> > need as well. I just didn't have time to experiment with it.
>
> How would this work? Would it have the same virtual address range? And
> how does it avoid the problems we've been discussing? Sorry to be a bit
> slow here. :)

The range covered by the virtual mapping would be the one passed to
get_user_pages() to get the page references. And then we would need to
teach page_mkclean() to check for these virtual VMAs and block / skip /
report (different situations need different behavior) such a page. But
this second part is the same regardless of how we identify a page that is
pinned by get_user_pages().

> > And then there's the aspect that both these approaches are a bit too
> > heavyweight for some get_user_pages_fast() users (e.g. direct IO) - Al
> > Viro had an idea to use the page lock for that path, but e.g.
> > fs/direct-io.c would have problems due to lock ordering constraints
> > (the filesystem's ->get_block would suddenly get called with the page
> > lock held). But we can probably leave performance optimizations for
> > phase two.
>
> So I assume that phase one would be to apply this approach only to
> get_user_pages_longterm. (Please let me know if that's wrong.)

No, I meant phase 1 would be to apply this to all get_user_pages()
flavors.
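Roughly, every get_user_pages() flavor would mark the pages it hands out,
and the rmap side would check for that mark before touching the PTEs. Just
as an illustration (the PageGupPinned() / SetPageGupPinned() helpers below
are made-up names; they stand in for whichever mechanism we pick - a page
flag, a bit in the lru word, or a virtual VMA lookup), something like:

/*
 * Illustration only, not a real patch: PageGupPinned() and
 * SetPageGupPinned() do not exist, they represent whatever
 * "this page is pinned by GUP" test phase 1 ends up providing.
 */

/* Every get_user_pages() flavor marks the pages it returns: */
static void gup_mark_page_pinned(struct page *page)
{
	if (!PageGupPinned(page))
		SetPageGupPinned(page);
}

/*
 * And page_mkclean() / try_to_unmap() check the mark before
 * write-protecting or unmapping:
 */
static bool page_may_be_write_protected(struct page *page)
{
	/*
	 * A pinned page may still be a DMA target, so write-protecting
	 * its PTEs (and then writing the page back) is not safe.
	 */
	return !PageGupPinned(page);
}

Whether the check then blocks, skips, or just reports the page is up to the
caller, as I wrote above, but the hook points are the same no matter how
the pin is recorded.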
Then phase 2 is to try to find a way to make get_user_pages_fast() fast
again. And in parallel to that, we also need to find a way for
get_user_pages_longterm() to signal to its user that pinned pages must be
released soon, because after phase 1 pinned pages will block page
writeback, and such a system won't oops but will become unusable sooner
rather than later. And again, this problem needs to be solved regardless
of the mechanism for identifying pinned pages.

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR