On Friday, 28 May 2021 6:19:04 AM AEST Peter Xu wrote:
> This patch introduces a very special swap-like pte for file-backed
> memories.
> 
> Currently it's only defined for x86_64, but as long as an arch can
> properly define the UFFD_WP_SWP_PTE_SPECIAL value as requested, it
> should conceptually work too.
> 
> We will use this special pte to arm the ptes that got either unmapped or
> swapped out for a file-backed region that was previously wr-protected.
> This special pte triggers a page fault just like a swap entry does,
> because the fault path sees pte_none()==false && pte_present()==false.
> 
> Then we can revive the special pte into a normal pte backed by the page
> cache.
> 
> This idea is greatly inspired by Hugh and Andrea in the discussion,
> which is referenced in the links below.
> 
> The other idea (from Hugh) is that we use swp_type==1 and swp_offset=0
> as the special pte. The current solution (as pointed out by Andrea) is
> slightly preferred in that we don't even need swp_entry_t knowledge at
> all in trapping these accesses. Meanwhile, we also reuse
> _PAGE_SWP_UFFD_WP from the anonymous swp entries.

So to confirm my understanding: the reason you use this special swap pte
instead of a new swp_type is that you only need the fault, and there is no
extra information that needs storing in the pte?

Personally I think it might be better to define a new swp_type for this
rather than introducing a new arch-specific concept. swp_type entries are
portable, so no extra arch-specific bits would need to be defined. And as
I understand things, not all architectures (e.g. ARM) have spare bits in
their swap entry encoding anyway, so they would have to reserve a bit
specifically for this, which would be less efficient than using a
swp_type. A rough sketch of what I mean follows below.

Anyway, it seems I missed the initial discussion so I don't have a strong
opinion here; mainly I just wanted to check my understanding of what's
required and how these special entries work.
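To be concrete about the alternative, something like the below is what I'm
picturing. This is purely an illustrative sketch: SWP_UFFD_WP_SPECIAL is an
invented name (no such swap type exists today, and a real version would
need a slot reserved alongside SWP_MIGRATION_*/SWP_HWPOISON), while
swp_entry(), swp_entry_to_pte(), pte_to_swp_entry(), swp_type() and
is_swap_pte() are the existing <linux/swapops.h> helpers.

/*
 * Sketch only, not from this series: trap the wr-protect case with a
 * dedicated, arch-independent swap type instead of an arch-specific
 * pte encoding.
 */
#define SWP_UFFD_WP_SPECIAL	MAX_SWAPFILES	/* invented slot */

static inline pte_t pte_swp_mkuffd_wp_special_type(void)
{
	/* swp_offset carries no information; only the fault matters */
	return swp_entry_to_pte(swp_entry(SWP_UFFD_WP_SPECIAL, 0));
}

static inline bool is_uffd_wp_special_entry(pte_t pte)
{
	return is_swap_pte(pte) &&
	       swp_type(pte_to_swp_entry(pte)) == SWP_UFFD_WP_SPECIAL;
}

Any architecture that supports swap entries could then trap these accesses
without having to define a new pte bit.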
> This patch only introduces the special pte and its operators. It's not
> yet applied to have any functional difference.
> 
> Link: https://lore.kernel.org/lkml/20201126222359.8120-1-peterx@xxxxxxxxxx/
> Link: https://lore.kernel.org/lkml/20201130230603.46187-1-peterx@xxxxxxxxxx/
> Suggested-by: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> Suggested-by: Hugh Dickins <hughd@xxxxxxxxxx>
> Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
> ---
>  arch/x86/include/asm/pgtable.h     | 28 ++++++++++++++++++++++++++++
>  include/asm-generic/pgtable_uffd.h |  3 +++
>  include/linux/userfaultfd_k.h      | 21 +++++++++++++++++++++
>  3 files changed, 52 insertions(+)
> 
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index b1099f2d9800..9781ba2da049 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -1329,6 +1329,34 @@ static inline pmd_t pmd_swp_clear_soft_dirty(pmd_t pmd)
>  #endif
>  
>  #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
> +
> +/*
> + * This is a very special swap-like pte that marks this pte as
> + * "wr-protected" by userfaultfd-wp. It should only exist for file-backed
> + * memory where the page (previously got wr-protected) has been unmapped
> + * or swapped out.
> + *
> + * For anonymous memories, the userfaultfd-wp _PAGE_SWP_UFFD_WP bit is kept
> + * along with a real swp entry instead.
> + *
> + * Let's make some rules for this special pte:
> + *
> + * (1) pte_none()==false, so that it'll not trigger a missing page fault.
> + *
> + * (2) pte_present()==false, so that it's recognized as swap (is_swap_pte).
> + *
> + * (3) pte_swp_uffd_wp()==true, so it can be tested just like a swap pte
> + *     that contains a valid swap entry, so that we can check a swap pte
> + *     always using "is_swap_pte() && pte_swp_uffd_wp()" without caring
> + *     about whether there's one swap entry inside of the pte.
> + *
> + * (4) It should not be a valid swap pte anywhere, so that when we see
> + *     this pte we know it does not contain a swap entry.
> + *
> + * For x86, the simplest special pte which satisfies all of above should
> + * be the pte with only the _PAGE_SWP_UFFD_WP bit set (where
> + * swp_type==swp_offset==0).
> + */
> +#define UFFD_WP_SWP_PTE_SPECIAL	__pte(_PAGE_SWP_UFFD_WP)
> +
>  static inline pte_t pte_swp_mkuffd_wp(pte_t pte)
>  {
>  	return pte_set_flags(pte, _PAGE_SWP_UFFD_WP);
> diff --git a/include/asm-generic/pgtable_uffd.h b/include/asm-generic/pgtable_uffd.h
> index 828966d4c281..95e9811ce9d1 100644
> --- a/include/asm-generic/pgtable_uffd.h
> +++ b/include/asm-generic/pgtable_uffd.h
> @@ -2,6 +2,9 @@
>  #define _ASM_GENERIC_PGTABLE_UFFD_H
>  
>  #ifndef CONFIG_HAVE_ARCH_USERFAULTFD_WP
> +
> +#define UFFD_WP_SWP_PTE_SPECIAL	__pte(0)
> +
>  static __always_inline int pte_uffd_wp(pte_t pte)
>  {
>  	return 0;
> diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
> index 331d2ccf0bcc..93f932b53a71 100644
> --- a/include/linux/userfaultfd_k.h
> +++ b/include/linux/userfaultfd_k.h
> @@ -145,6 +145,17 @@ extern int userfaultfd_unmap_prep(struct vm_area_struct *vma,
>  extern void userfaultfd_unmap_complete(struct mm_struct *mm,
>  				       struct list_head *uf);
>  
> +static inline pte_t pte_swp_mkuffd_wp_special(struct vm_area_struct *vma)
> +{
> +	WARN_ON_ONCE(vma_is_anonymous(vma));
> +	return UFFD_WP_SWP_PTE_SPECIAL;
> +}
> +
> +static inline bool pte_swp_uffd_wp_special(pte_t pte)
> +{
> +	return pte_same(pte, UFFD_WP_SWP_PTE_SPECIAL);
> +}
> +
>  #else /* CONFIG_USERFAULTFD */
>  
>  /* mm helpers */
> @@ -234,6 +245,16 @@ static inline void userfaultfd_unmap_complete(struct mm_struct *mm,
>  {
>  }
>  
> +static inline pte_t pte_swp_mkuffd_wp_special(struct vm_area_struct *vma)
> +{
> +	return __pte(0);
> +}
> +
> +static inline bool pte_swp_uffd_wp_special(pte_t pte)
> +{
> +	return false;
> +}
> +
>  #endif /* CONFIG_USERFAULTFD */
>  
>  #endif /* _LINUX_USERFAULTFD_K_H */
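One more question, mostly to double-check my reading of how these entries
are meant to be consumed. I assume a later patch in the series makes the
non-present fault path do roughly the following? This is only my sketch of
the intended flow, not code from this series; handle_uffd_wp_special() is
an invented name, while do_fault() and do_swap_page() are the existing
mm/memory.c internals.

static vm_fault_t handle_nonpresent_fault(struct vm_fault *vmf)
{
	pte_t pte = vmf->orig_pte;

	/* Rule (1): the special pte is not pte_none(), so no missing fault */
	if (pte_none(pte))
		return do_fault(vmf);

	/* Rules (2)+(4): swap-like, but carries no swap entry to decode */
	if (pte_swp_uffd_wp_special(pte))
		/* Notify userspace, then revive the pte from the page cache */
		return handle_uffd_wp_special(vmf);

	/* Otherwise a real swap entry */
	return do_swap_page(vmf);
}

If that matches the intention then I think I follow how the special
entries are supposed to work.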