On Wed, Jun 21, 2023 at 07:36:11PM -0700, Hugh Dickins wrote:
> [PATCH v3 05/12] powerpc: add pte_free_defer() for pgtables sharing page
>
> Add powerpc-specific pte_free_defer(), to free table page via call_rcu().
> pte_free_defer() will be called inside khugepaged's retract_page_tables()
> loop, where allocating extra memory cannot be relied upon. This precedes
> the generic version to avoid build breakage from incompatible pgtable_t.
>
> This is awkward because the struct page contains only one rcu_head, but
> that page may be shared between PTE_FRAG_NR pagetables, each wanting to
> use the rcu_head at the same time. But powerpc never reuses a fragment
> once it has been freed: so mark the page Active in pte_free_defer(),
> before calling pte_fragment_free() directly; and there call_rcu() to
> pte_free_now() when last fragment is freed and the page is PageActive.
>
> Suggested-by: Jason Gunthorpe <jgg@xxxxxxxx>
> Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
> ---
>  arch/powerpc/include/asm/pgalloc.h |  4 ++++
>  arch/powerpc/mm/pgtable-frag.c     | 29 ++++++++++++++++++++++++++---
>  2 files changed, 30 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/pgalloc.h b/arch/powerpc/include/asm/pgalloc.h
> index 3360cad78ace..3a971e2a8c73 100644
> --- a/arch/powerpc/include/asm/pgalloc.h
> +++ b/arch/powerpc/include/asm/pgalloc.h
> @@ -45,6 +45,10 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
>  	pte_fragment_free((unsigned long *)ptepage, 0);
>  }
>  
> +/* arch use pte_free_defer() implementation in arch/powerpc/mm/pgtable-frag.c */
> +#define pte_free_defer pte_free_defer
> +void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable);
> +
>  /*
>   * Functions that deal with pagetables that could be at any level of
>   * the table need to be passed an "index_size" so they know how to
> diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c
> index 20652daa1d7e..0c6b68130025 100644
> --- a/arch/powerpc/mm/pgtable-frag.c
> +++ b/arch/powerpc/mm/pgtable-frag.c
> @@ -106,6 +106,15 @@ pte_t *pte_fragment_alloc(struct mm_struct *mm, int kernel)
>  	return __alloc_for_ptecache(mm, kernel);
>  }
>  
> +static void pte_free_now(struct rcu_head *head)
> +{
> +	struct page *page;
> +
> +	page = container_of(head, struct page, rcu_head);
> +	pgtable_pte_page_dtor(page);
> +	__free_page(page);
> +}
> +
>  void pte_fragment_free(unsigned long *table, int kernel)
>  {
>  	struct page *page = virt_to_page(table);
> @@ -115,8 +124,22 @@ void pte_fragment_free(unsigned long *table, int kernel)
>  
>  	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
>  	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
> -		if (!kernel)
> -			pgtable_pte_page_dtor(page);
> -		__free_page(page);
> +		if (kernel)
> +			__free_page(page);
> +		else if (TestClearPageActive(page))
> +			call_rcu(&page->rcu_head, pte_free_now);
> +		else
> +			pte_free_now(&page->rcu_head);
>  	}
>  }
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
> +{
> +	struct page *page;
> +
> +	page = virt_to_page(pgtable);
> +	SetPageActive(page);
> +	pte_fragment_free((unsigned long *)pgtable, 0);
> +}
> +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

Yes, this makes sense to me, very simple.

I always forget these details, but is atomic_dec_and_test() a release?
So the SetPageActive() is guaranteed to be visible in another thread
that reaches 0?

Thanks,
Jason