On Fri, 2022-10-14 at 17:32 +0200, Peter Zijlstra wrote:
> On Thu, Sep 29, 2022 at 03:29:13PM -0700, Rick Edgecombe wrote:
>
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 8cd413c5a329..fef14ab3abcb 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -981,13 +981,25 @@ void free_compound_page(struct page *page);
> >   * servicing faults for write access. In the normal case, do always want
> >   * pte_mkwrite. But get_user_pages can cause write faults for mappings
> >   * that do not have writing enabled, when used by access_process_vm.
> > + *
> > + * If a vma is shadow stack (a type of writable memory), mark the pte shadow
> > + * stack.
> >   */
> > +#ifndef maybe_mkwrite
> >  static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
> >  {
> > -	if (likely(vma->vm_flags & VM_WRITE))
> > +	if (!(vma->vm_flags & VM_WRITE))
> > +		goto out;
> > +
> > +	if (vma->vm_flags & VM_SHADOW_STACK)
> > +		pte = pte_mkwrite_shstk(pte);
> > +	else
> >  		pte = pte_mkwrite(pte);
> > +
> > +out:
> >  	return pte;
> >  }
> > +#endif
>
> Why the #ifndef guard? There is no other implementation, nor does this
> patch introduce one.

Oh yea, this series used to add another one, but I forgot to remove the
guards. Thanks.

> Also, wouldn't it be simpler to write it like:
>
> static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
> {
> 	if (!(vma->vm_flags & VM_WRITE))
> 		return pte;
>
> 	if (vma->vm_flags & VM_SHADOW_STACK)
> 		return pte_mkwrite_shstk(pte);
>
> 	return pte_mkwrite(pte);
> }

Yep, that looks better. Thanks.
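
For context, a minimal sketch (not part of the patch) of the usual fault-path
pattern that relies on maybe_mkwrite() to pick the right PTE encoding; the
function name here is illustrative only. With the change above, a
VM_SHADOW_STACK vma ends up with the shadow stack encoding via
pte_mkwrite_shstk() rather than a normal writable PTE:

/*
 * Illustrative sketch only: build a PTE for a freshly installed page and
 * let maybe_mkwrite() decide how "writable" is expressed for this vma.
 */
static pte_t sketch_build_pte(struct page *page, struct vm_area_struct *vma)
{
	pte_t entry = mk_pte(page, vma->vm_page_prot);

	entry = pte_mkdirty(entry);
	/* Writable only if VM_WRITE; shadow stack if VM_SHADOW_STACK */
	entry = maybe_mkwrite(entry, vma);

	return entry;
}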