Re: [PATCH v8 7/8] hugetlb: batch TLB flushes when freeing vmemmap

On 10/21/23 11:20, Jane Chu wrote:
> Hi, Mike,
> 
> On 10/18/2023 7:31 PM, Mike Kravetz wrote:
> > From: Joao Martins <joao.m.martins@xxxxxxxxxx>
> > 
> > Now that a list of pages is deduplicated at once, the TLB
> > flush can be batched for all vmemmap pages that got remapped.
> > 
> [..]
> 
> > @@ -719,19 +737,28 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
> >   	list_for_each_entry(folio, folio_list, lru) {
> >   		int ret = __hugetlb_vmemmap_optimize(h, &folio->page,
> > -								&vmemmap_pages);
> > +						&vmemmap_pages,
> > +						VMEMMAP_REMAP_NO_TLB_FLUSH);
> >   		/*
> >   		 * Pages to be freed may have been accumulated.  If we
> >   		 * encounter an ENOMEM, free what we have and try again.
> > +		 * This can occur when splitting fails halfway and
> > +		 * head page allocation also fails. In this
> > +		 * case __hugetlb_vmemmap_optimize() would free memory
> > +		 * allowing more vmemmap remaps to occur.
> >   		 */
> >   		if (ret == -ENOMEM && !list_empty(&vmemmap_pages)) {
> > +			flush_tlb_all();
> >   			free_vmemmap_page_list(&vmemmap_pages);
> >   			INIT_LIST_HEAD(&vmemmap_pages);
> > -			__hugetlb_vmemmap_optimize(h, &folio->page, &vmemmap_pages);
> > +			__hugetlb_vmemmap_optimize(h, &folio->page,
> > +						&vmemmap_pages,
> > +						VMEMMAP_REMAP_NO_TLB_FLUSH);
> >   		}
> >   	}
> > +	flush_tlb_all();
> 
> It seems that if folio_list is empty, we could spend a TLB flush here.
> Perhaps it's worth checking for an empty list up front and returning?

Good point.

hugetlb_vmemmap_optimize_folios is only called from
prep_and_add_allocated_folios and prep_and_add_bootmem_folios.  I
previously thought about adding a check like the following at the
beginning of those routines.

	if (list_empty(folio_list))
		return;

However, that seemed like over-optimizing.  But such a check would avoid
the TLB flush you point out above, as well as an unnecessary
hugetlb_lock lock/unlock cycle.

We can add something like this as an optimization.  I am not too concerned
about it right now because these routines are generally called very
infrequently, as the result of a user request to change the size of the
hugetlb pools.
-- 
Mike Kravetz



