Re: [RFC PATCH 4/4] mm: Add merge page notifier

On Mon, 2019-02-11 at 14:40 +0800, Aaron Lu wrote:
> On 2019/2/5 2:15, Alexander Duyck wrote:
> > From: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
> > 
> > Because the implementation was limiting itself to only providing hints on
> > pages huge TLB order sized or larger, we introduced the possibility for
> > free pages to slip past us because they are freed as something less than
> > huge TLB in size and aggregated with buddies later.
> > 
> > To address that I am adding a new call, arch_merge_page, which is called
> > after __free_one_page has merged a pair of pages to create a higher-order
> > page. By doing this I am able to fill the gap and provide full coverage
> > for all of the pages huge TLB order or larger.
> > 
> > Signed-off-by: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
> > ---
> >  arch/x86/include/asm/page.h |   12 ++++++++++++
> >  arch/x86/kernel/kvm.c       |   28 ++++++++++++++++++++++++++++
> >  include/linux/gfp.h         |    4 ++++
> >  mm/page_alloc.c             |    2 ++
> >  4 files changed, 46 insertions(+)
> > 
> > diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
> > index 4487ad7a3385..9540a97c9997 100644
> > --- a/arch/x86/include/asm/page.h
> > +++ b/arch/x86/include/asm/page.h
> > @@ -29,6 +29,18 @@ static inline void arch_free_page(struct page *page, unsigned int order)
> >  	if (static_branch_unlikely(&pv_free_page_hint_enabled))
> >  		__arch_free_page(page, order);
> >  }
> > +
> > +struct zone;
> > +
> > +#define HAVE_ARCH_MERGE_PAGE
> > +void __arch_merge_page(struct zone *zone, struct page *page,
> > +		       unsigned int order);
> > +static inline void arch_merge_page(struct zone *zone, struct page *page,
> > +				   unsigned int order)
> > +{
> > +	if (static_branch_unlikely(&pv_free_page_hint_enabled))
> > +		__arch_merge_page(zone, page, order);
> > +}
> >  #endif
> >  
> >  #include <linux/range.h>
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > index 09c91641c36c..957bb4f427bb 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -785,6 +785,34 @@ void __arch_free_page(struct page *page, unsigned int order)
> >  		       PAGE_SIZE << order);
> >  }
> >  
> > +void __arch_merge_page(struct zone *zone, struct page *page,
> > +		       unsigned int order)
> > +{
> > +	/*
> > +	 * The merging logic has merged a set of buddies up to the
> > +	 * KVM_PV_UNUSED_PAGE_HINT_MIN_ORDER. Since that is the case, take
> > +	 * advantage of this moment to notify the hypervisor of the free
> > +	 * memory.
> > +	 */
> > +	if (order != KVM_PV_UNUSED_PAGE_HINT_MIN_ORDER)
> > +		return;
> > +
> > +	/*
> > +	 * Drop zone lock while processing the hypercall. This
> > +	 * should be safe as the page has not yet been added
> > +	 * to the buddy list and all the pages that
> > +	 * were merged have had their buddy/guard flags cleared
> > +	 * and their order reset to 0.
> > +	 */
> > +	spin_unlock(&zone->lock);
> > +
> > +	kvm_hypercall2(KVM_HC_UNUSED_PAGE_HINT, page_to_phys(page),
> > +		       PAGE_SIZE << order);
> > +
> > +	/* reacquire lock and resume freeing memory */
> > +	spin_lock(&zone->lock);
> > +}
> > +
> >  #ifdef CONFIG_PARAVIRT_SPINLOCKS
> >  
> >  /* Kick a cpu by its apicid. Used to wake up a halted vcpu */
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index fdab7de7490d..4746d5560193 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -459,6 +459,10 @@ static inline struct zonelist *node_zonelist(int nid, gfp_t flags)
> >  #ifndef HAVE_ARCH_FREE_PAGE
> >  static inline void arch_free_page(struct page *page, int order) { }
> >  #endif
> > +#ifndef HAVE_ARCH_MERGE_PAGE
> > +static inline void
> > +arch_merge_page(struct zone *zone, struct page *page, int order) { }
> > +#endif
> >  #ifndef HAVE_ARCH_ALLOC_PAGE
> >  static inline void arch_alloc_page(struct page *page, int order) { }
> >  #endif
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index c954f8c1fbc4..7a1309b0b7c5 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -913,6 +913,8 @@ static inline void __free_one_page(struct page *page,
> >  		page = page + (combined_pfn - pfn);
> >  		pfn = combined_pfn;
> >  		order++;
> > +
> > +		arch_merge_page(zone, page, order);
> 
> Not a proper place AFAICS.
> 
> Assume we have an order-8 page being sent here for merge and its order-8
> buddy is also free; then after order++ it becomes 9 and arch_merge_page()
> will hint to the host on this page as an order-9 page, no problem so far.
> In the next round, assume the now order-9 page's buddy is also free:
> order++ makes it 10 and arch_merge_page() will again hint to the host on
> this page, now as an order-10 page. The first hint to the host becomes
> redundant.

Actually the problem is even worse in the other direction; my concern
was pages being incrementally freed.

With this placement I can catch the moment we cross the threshold from
order 8 to 9 and provide the hint specifically for that case. This
allows me to ignore orders both above and below 9.

If I move the hint to the spot after the merging, I have no way of
telling whether I have already hinted the page at a lower order, so I
would have to hint whenever it has merged up to order 9 or greater. For
example, if it merges up to order 9 and stops there, done_merging will
report an order-9 page; then, if another page is freed and merged with
this one up to order 10, you would be hinting on order 10 as well. By
placing the call here I can guarantee that no more than one hint is
provided per 2MB page.
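
To make the accounting concrete, here is a small standalone sketch
(ordinary userspace C, not the kernel code; the two merge sequences and
the order-9 threshold are assumptions chosen for illustration) that
counts how much memory each placement would report for two frees that
eventually form an order-10 page:

/*
 * Model two free events on a 4MB (order-10) region whose 2MB halves
 * become free at different times, and compare the amount of memory
 * hinted by the two placements being discussed.
 */
#include <stdio.h>

#define HINT_ORDER 9	/* assumed 2MB threshold: order 9 with 4K pages */

/* orders[] lists the orders reached, in turn, while merging one free. */
static void simulate_free(const unsigned int *orders, int steps,
			  unsigned long *in_loop_kb,
			  unsigned long *after_merge_kb)
{
	unsigned int final_order = 0;
	int i;

	for (i = 0; i < steps; i++) {
		/* Placement A: hint inside the merge loop, but only at
		 * the step where the result is exactly HINT_ORDER. */
		if (orders[i] == HINT_ORDER)
			*in_loop_kb += 4UL << orders[i];	/* 4KB pages -> KB */
		final_order = orders[i];
	}

	/* Placement B: hint once after done_merging, for any final
	 * order at or above HINT_ORDER. */
	if (final_order >= HINT_ORDER)
		*after_merge_kb += 4UL << final_order;
}

int main(void)
{
	/* Free #1: one half merges up to order 9 and stops there. */
	unsigned int ev1[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
	/* Free #2: the other half merges to order 9, then joins the
	 * first half to form the order-10 page. */
	unsigned int ev2[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
	unsigned long in_loop = 0, after_merge = 0;

	simulate_free(ev1, sizeof(ev1) / sizeof(ev1[0]), &in_loop, &after_merge);
	simulate_free(ev2, sizeof(ev2) / sizeof(ev2[0]), &in_loop, &after_merge);

	printf("in the loop (order == 9):  %lu KB hinted\n", in_loop);
	printf("after done_merging (>= 9): %lu KB hinted\n", after_merge);
	return 0;
}

This prints 4096 KB for the in-loop placement (each 2MB half hinted
exactly once) versus 6144 KB for the after-done_merging placement, since
the order-10 hint re-covers the half that was already hinted at order 9
after the first free.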

> I think the proper place is after the done_merging tag.
> 
> BTW, with arch_merge_page() at the proper place, I don't think patch 3/4
> is necessary - any freed page will go through the merge path anyway, so
> we won't lose any hint opportunity. Or am I missing anything?

You can refer to my comment above. What I want to avoid is hinting a
page multiple times when we aren't using MAX_ORDER - 1 as the limit. By
placing the call where I did, I avoid hinting on orders greater than our
target hint order. This way I only perform one hint per 2MB page;
otherwise I would be performing multiple hints per 2MB page, as every
order above the target would also trigger a hint.
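
As a second, even smaller sketch of that last point (again plain
userspace C, purely illustrative; the threshold and top order are
assumed values), compare how often a hint would fire inside the merge
loop with an equality check versus an at-or-above check when a single
free merges all the way up to order 10:

#include <stdio.h>

#define HINT_ORDER 9	/* assumed 2MB threshold */
#define TOP_ORDER 10	/* assume the merge chain ends at order 10 */

int main(void)
{
	int eq_hints = 0, ge_hints = 0;
	unsigned int order;

	/* Walk the orders produced by successive merge steps. */
	for (order = 1; order <= TOP_ORDER; order++) {
		if (order == HINT_ORDER)	/* equivalent to the patch's check */
			eq_hints++;
		if (order >= HINT_ORDER)	/* every order above also fires */
			ge_hints++;
	}

	printf("order == %d: %d hint(s)\n", HINT_ORDER, eq_hints);	/* 1 */
	printf("order >= %d: %d hint(s)\n", HINT_ORDER, ge_hints);	/* 2 */
	return 0;
}

With the equality check the 2MB page is hinted once; with an at-or-above
check every larger merge result would trigger an additional, overlapping
hint.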



