On Tue, Jun 20, 2023 at 12:54:25PM -0700, Hugh Dickins wrote:
> On Tue, 20 Jun 2023, Jason Gunthorpe wrote:
> > On Tue, Jun 20, 2023 at 12:47:54AM -0700, Hugh Dickins wrote:
> > > Add powerpc-specific pte_free_defer(), to call pte_free() via call_rcu().
> > > pte_free_defer() will be called inside khugepaged's retract_page_tables()
> > > loop, where allocating extra memory cannot be relied upon. This precedes
> > > the generic version to avoid build breakage from incompatible pgtable_t.
> > >
> > > This is awkward because the struct page contains only one rcu_head, but
> > > that page may be shared between PTE_FRAG_NR pagetables, each wanting to
> > > use the rcu_head at the same time: account concurrent deferrals with a
> > > heightened refcount, only the first making use of the rcu_head, but
> > > re-deferring if more deferrals arrived during its grace period.
> >
> > You didn't answer my question why we can't just move the rcu to the
> > actual free page?
>
> I thought that I had answered it, perhaps not to your satisfaction:
>
> https://lore.kernel.org/linux-mm/9130acb-193-6fdd-f8df-75766e663978@xxxxxxxxxx/
>
> My conclusion then was:
> Not very good reasons: good enough, or can you supply a better patch?

Oh, I guess I didn't read that email as answering the question..

I was saying to make pte_fragment_free() unconditionally do the RCU. It
is the only thing that uses page->rcu_head. It means PPC would double-RCU
the final free on the TLB path, but that is probably OK for now.

This means pte_free_defer() won't need to do anything special on PPC,
since PPC will always RCU-free these things; that addresses the deferral
concern too, I think. Overall it is easier to reason about.

I looked at fixing the TLB stuff to avoid the double RCU, but quickly got
scared: ppc is using a kmem_cache to allocate the other page table sizes,
so there is no reliable struct page to get an rcu_head from. This looks
like the main challenge for ppc...
We'd have to teach the tlb code to not do its own RCU stuff for table
levels that the arch is already RCU freeing - and that won't get us to
full RCU freeing on PPC.

Anyhow, this is a full version of what I was thinking:

diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c
index 20652daa1d7e3a..b5dcd0f27fc115 100644
--- a/arch/powerpc/mm/pgtable-frag.c
+++ b/arch/powerpc/mm/pgtable-frag.c
@@ -106,6 +106,21 @@ pte_t *pte_fragment_alloc(struct mm_struct *mm, int kernel)
 	return __alloc_for_ptecache(mm, kernel);
 }
 
+static void pgtable_free_cb(struct rcu_head *head)
+{
+	struct page *page = container_of(head, struct page, rcu_head);
+
+	pgtable_pte_page_dtor(page);
+	__free_page(page);
+}
+
+static void pgtable_free_cb_kernel(struct rcu_head *head)
+{
+	struct page *page = container_of(head, struct page, rcu_head);
+
+	__free_page(page);
+}
+
 void pte_fragment_free(unsigned long *table, int kernel)
 {
 	struct page *page = virt_to_page(table);
@@ -115,8 +130,13 @@ void pte_fragment_free(unsigned long *table, int kernel)
 	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
 	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
+		/*
+		 * Always RCU free pagetable memory. rcu_head overlaps with lru
+		 * which is no longer in use by the time the table is freed.
+		 */
 		if (!kernel)
-			pgtable_pte_page_dtor(page);
-		__free_page(page);
+			call_rcu(&page->rcu_head, pgtable_free_cb);
+		else
+			call_rcu(&page->rcu_head, pgtable_free_cb_kernel);
 	}
 }