On Thu, Sep 23, 2021 at 05:51:04PM +0100, Joao Martins wrote:
> On 8/31/21 6:05 PM, Jason Gunthorpe wrote:
> >> Switching to similar iteration logic to unpin would look something like
> >> this (still untested):
> >>
> >> for_each_compound_range(index, &page, npages, head, refs) {
> >> 	pgmap = get_dev_pagemap(pfn + *nr, pgmap);
> >
> > I recall talking to DanW about this and we agreed it was unnecessary
> > here to hold the pgmap and should be deleted.
>
> Yeap, I remember that conversation[0]. It was a long time ago, and I am
> not sure what progress was made there since the last posting? Dan, any
> thoughts there?
>
> [0] https://lore.kernel.org/linux-mm/161604050866.1463742.7759521510383551055.stgit@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/

I would really like to see that finished :\

> So ... if pgmap accounting was removed from gup-fast then this patch
> would be a lot simpler and we could perhaps just fallback to the regular
> hugepage case (THP, HugeTLB) like your suggestion at the top. See at the
> end below scissors mark as the ballpark of changes.
>
> So far my options seem to be: 1) this patch, which leverages the existing
> iteration logic, or 2) switching to for_each_compound_range() -- see my
> previous reply, or 3) waiting for Dan to remove @pgmap accounting in
> gup-fast and using something similar to below scissors mark.
>
> What do you think would be the best course of action?

I still think the basic algorithm should be to accumulate physically
contiguous addresses when walking the page table and then flush them
back to struct pages once we can't accumulate any more. That works for
both the walkers and all the page types?

If the get_dev_pagemap() has to remain then it just means we have to
flush before changing pagemap pointers.

Jason
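
To make the accumulate-then-flush idea above concrete, here is a minimal
standalone C sketch (an illustration added here, not code from the thread).
lookup_pgmap(), flush_to_pages() and pin_pfns() are hypothetical stand-ins
for get_dev_pagemap() and the real pin/unpin paths, and the 512-pfn pagemap
boundary is just a made-up value for the demo; the point is only the control
flow: keep accumulating while the pfns stay physically contiguous and the
pagemap does not change, and flush the batch back to struct pages otherwise.

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct pgmap;			/* opaque stand-in for struct dev_pagemap */

/* Stand-in for get_dev_pagemap(): map a pfn to its owning pagemap.
 * Pretend every 512-pfn block belongs to a different pagemap. */
static struct pgmap *lookup_pgmap(unsigned long pfn)
{
	return (struct pgmap *)(uintptr_t)(pfn >> 9);
}

/* Stand-in for "flush back to struct pages": just report the run. */
static void flush_to_pages(unsigned long start_pfn, unsigned long nr)
{
	printf("pin %lu pages starting at pfn %lu\n", nr, start_pfn);
}

/* Walk an array of pfns (as a page-table walker would produce them),
 * batching physically contiguous runs and flushing whenever contiguity
 * breaks or the pagemap pointer would change. */
static void pin_pfns(const unsigned long *pfns, size_t count)
{
	struct pgmap *cur_pgmap = NULL;
	unsigned long start = 0, nr = 0;
	size_t i;

	for (i = 0; i < count; i++) {
		struct pgmap *pgmap = lookup_pgmap(pfns[i]);

		if (nr && pfns[i] == start + nr && pgmap == cur_pgmap) {
			nr++;		/* still contiguous, same pagemap */
			continue;
		}
		if (nr)
			flush_to_pages(start, nr);
		start = pfns[i];	/* start a new batch */
		nr = 1;
		cur_pgmap = pgmap;
	}
	if (nr)
		flush_to_pages(start, nr);	/* final flush */
}

int main(void)
{
	unsigned long pfns[] = { 100, 101, 102, 200, 201, 511, 512, 513 };

	pin_pfns(pfns, sizeof(pfns) / sizeof(pfns[0]));
	return 0;
}

With that example input the batches flushed are 100..102, 200..201, 511
alone, then 512..513: the run 511..513 is physically contiguous but the
(pretend) pagemap changes at pfn 512, so the batch is flushed before the
pagemap pointer changes, as in the last paragraph above.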