On Thu, Apr 07, 2022 at 09:47:16AM -0600, Logan Gunthorpe wrote:
> +static void pci_p2pdma_unmap_mappings(void *data)
> +{
> +	struct pci_dev *pdev = data;
> +	struct pci_p2pdma *p2pdma = rcu_dereference_protected(pdev->p2pdma, 1);
> +
> +	/* Ensure no new pages can be allocated in mappings */
> +	p2pdma->active = false;
> +	synchronize_rcu();
> +
> +	unmap_mapping_range(p2pdma->inode->i_mapping, 0, 0, 1);
> +
> +	/*
> +	 * On some architectures, TLB flushes are done with call_rcu()
> +	 * so to ensure GUP fast is done with the pages, call synchronize_rcu()
> +	 * before freeing them.
> +	 */
> +	synchronize_rcu();
> +	pci_p2pdma_free_mappings(p2pdma->inode->i_mapping);

With the series from Felix getting close this should get updated to not
set pte_devmap and use proper natural refcounting without any of this
stuff.

Jason