On Mon, Dec 17, 2018 at 12:57 AM Jan Kara <jack@xxxxxxx> wrote:
>
> On Fri 14-12-18 11:38:59, Dan Williams wrote:
> > On Thu, Dec 13, 2018 at 10:11 PM John Hubbard <jhubbard@xxxxxxxxxx> wrote:
> > >
> > > On 12/13/18 9:21 PM, Dan Williams wrote:
> > > > On Thu, Dec 13, 2018 at 7:53 PM John Hubbard <jhubbard@xxxxxxxxxx> wrote:
> > > >>
> > > >> On 12/12/18 4:51 PM, Dave Chinner wrote:
> > > >>> On Wed, Dec 12, 2018 at 04:59:31PM -0500, Jerome Glisse wrote:
> > > >>>> On Thu, Dec 13, 2018 at 08:46:41AM +1100, Dave Chinner wrote:
> > > >>>>> On Wed, Dec 12, 2018 at 10:03:20AM -0500, Jerome Glisse wrote:
> > > >>>>>> On Mon, Dec 10, 2018 at 11:28:46AM +0100, Jan Kara wrote:
> > > >>>>>>> On Fri 07-12-18 21:24:46, Jerome Glisse wrote:
> > > >>>>>>> So this approach doesn't look like a win to me over using counter in struct
> > > >>>>>>> page and I'd rather try looking into squeezing HMM public page usage of
> > > >>>>>>> struct page so that we can fit that gup counter there as well. I know that
> > > >>>>>>> it may be easier said than done...
> > > >>>>>>
> > > >>
> > > >> Agreed. After all the discussion this week, I'm thinking that the original idea
> > > >> of a per-struct-page counter is better. Fortunately, we can do the moral equivalent
> > > >> of that, unless I'm overlooking something: Jerome had another proposal that he
> > > >> described, off-list, for doing that counting, and his idea avoids the problem of
> > > >> finding space in struct page. (And in fact, when I responded yesterday, I initially
> > > >> thought that's where he was going with this.)
> > > >>
> > > >> So how about this hybrid solution:
> > > >>
> > > >> 1. Stay with the basic RFC approach of using a per-page counter, but actually
> > > >> store the counter(s) in the mappings instead of the struct page. We can use
> > > >> !PageAnon and page_mapping to look up all the mappings, stash the dma_pinned_count
> > > >> there. So the total pinned count is scattered across mappings. Probably still need
> > > >> a PageDmaPinned bit.
> > > >
> > > > How do you safely look at page->mapping from the get_user_pages_fast()
> > > > path? You'll be racing invalidation disconnecting the page from the
> > > > mapping.
> > > >
> > > I don't have an answer for that, so maybe the page->mapping idea is dead already.
> > >
> > > So in that case, there is still one more way to do all of this, which is to
> > > combine ZONE_DEVICE, HMM, and gup/dma information in a per-page struct, and get
> > > there via basically page->private, more or less like this:
> >
> > If we're going to allocate something new out-of-line then maybe we
> > should go even further to allow for a page "proxy" object to front a
> > real struct page. This idea arose from Dave Hansen as I explained to
> > him the dax-reflink problem, and dovetails with Dave Chinner's
> > suggestion earlier in this thread for dax-reflink.
> >
> > Have get_user_pages() allocate a proxy object that gets passed around
> > to drivers. Something like a struct page pointer with bit 0 set. This
> > would add a conditional branch and pointer chase to many page
> > operations, like page_to_pfn(). I thought something like it would be
> > unacceptable a few years ago, but then HMM went and added similar
> > overhead to put_page() and nobody balked.
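
To make that concrete, something like the below (purely illustrative,
not from any posted patch; struct page_proxy and the helper names are
made up for the sketch):

#include <linux/mm.h>
#include <linux/fs.h>

struct page_proxy {
	struct page *page;		/* the real page being fronted */
	struct address_space *mapping;	/* can override page->mapping */
	pgoff_t index;			/* can override page->index */
	atomic_t pin_count;		/* outstanding gup pins */
};

static inline bool is_page_proxy(const struct page *page)
{
	/* kmalloc alignment guarantees bit 0 is free for tagging */
	return (unsigned long)page & 1UL;
}

static inline struct page_proxy *to_page_proxy(const struct page *page)
{
	return (struct page_proxy *)((unsigned long)page & ~1UL);
}

static inline struct page *proxy_real_page(const struct page *page)
{
	return is_page_proxy(page) ? to_page_proxy(page)->page :
			(struct page *)page;
}

/* pfn lookup grows the conditional branch and pointer chase */
static inline unsigned long proxy_page_to_pfn(const struct page *page)
{
	return page_to_pfn(proxy_real_page(page));
}

The branch and pointer chase live in proxy_real_page(), which is the
overhead mentioned above, on the order of what HMM already added to
put_page().
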
> >
> > This has the additional benefit of catching cases that might be doing
> > a get_page() on a get_user_pages() result and should instead switch to
> > a "ref_user_page()" (opposite of put_user_page()) as the API to take
> > additional references on a get_user_pages() result.
> >
> > page->index and page->mapping could be overridden by similar
> > attributes in the proxy, and allow an N:1 relationship of proxy
> > instances to actual pages. Filesystems could generate dynamic proxies
> > as well.
> >
> > The auxiliary information (dev_pagemap, hmm_data, etc...) moves to the
> > proxy and stops polluting the base struct page, which remains the
> > canonical location for dirty-tracking and dma operations.
> >
> > The difficulties are reconciling the source of the proxies, as both
> > get_user_pages() and the filesystem may want to be the source of the
> > allocation. In the get_user_pages_fast() path we may not be able to
> > ask the filesystem for the proxy, at least not without destroying the
> > performance expectations of get_user_pages_fast().
>
> What you describe here sounds almost like the page_ext mechanism we
> already have? Or do you really aim at a per-pin allocated structure?

Per-pin or dynamically allocated by the filesystem. The existing
page_ext seems to suffer from the expectation that a page_ext exists
for all pfns. The 'struct page' per pfn requirement is already painful
as memory capacities grow into the terabytes, and page_ext seems to
just make that worse.
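
For the per-pin flavor, a similarly hypothetical sketch, reusing the
struct page_proxy above: gup hands out one proxy per pin, and the
ref_user_page() / put_user_page() pair operates on the proxy instead
of the page's ordinary refcount (pin_user_page_proxy() is a made-up
helper name):

#include <linux/mm.h>
#include <linux/slab.h>

/* allocate a per-pin proxy; the proxy holds the page reference */
static struct page *pin_user_page_proxy(struct page *page)
{
	struct page_proxy *proxy;

	proxy = kmalloc(sizeof(*proxy), GFP_KERNEL);
	if (!proxy)
		return NULL;

	get_page(page);
	proxy->page = page;
	proxy->mapping = page->mapping;	/* a filesystem could override this */
	proxy->index = page->index;
	atomic_set(&proxy->pin_count, 1);

	/* hand the tagged pointer back to the gup caller */
	return (struct page *)((unsigned long)proxy | 1UL);
}

/* take an additional reference on a gup result, instead of get_page() */
static inline void ref_user_page(struct page *page)
{
	atomic_inc(&to_page_proxy(page)->pin_count);
}

/* drop a pin; the final put releases the proxy and the real page */
static inline void put_user_page(struct page *page)
{
	struct page_proxy *proxy = to_page_proxy(page);

	if (atomic_dec_and_test(&proxy->pin_count)) {
		put_page(proxy->page);
		kfree(proxy);
	}
}

The filesystem-allocated variant would swap the kmalloc() for something
the filesystem provides, which is exactly the part that gets awkward in
the get_user_pages_fast() path.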