Re: [RFC PATCH 1/3] mm: hugetlb: Simplify logic in dequeue_hugetlb_folio_vma()

On Fri, Oct 11, 2024 at 11:22:36PM +0000, Ackerley Tng wrote:
> Replace the avoid_reserve and chg arguments of
> dequeue_hugetlb_folio_vma() with a single bool to make the function
> more understandable.
> 
> The new argument, use_hstate_resv, indicates whether the folio to be
> dequeued should be taken from reservations in hstate.
> 
> If use_hstate_resv is true, the folio to be dequeued should be taken
> from reservations in hstate and hence h->resv_huge_pages is
> decremented, and the folio is marked so that the reservation is
> restored.
> 
> If use_hstate_resv is false, then a folio needs to be taken from the
> pool and hence there must exist available_huge_pages(h), failing
> which, goto err.
> 
> The bool use_hstate_resv can be reused within
> dequeue_hugetlb_folio_vma()'s caller, alloc_hugetlb_folio().
> 
> No functional changes are intended.
> 
> As proof, the original two if conditions
> 
> !vma_has_reserves(vma, chg) && !available_huge_pages(h)
> 
> and
> 
> avoid_reserve && !available_huge_pages(h)
> 
> can be combined into
> 
> (avoid_reserve || !vma_has_reserves(vma, chg))
> && !available_huge_pages(h).
> 
> Negating
> 
> avoid_reserve || !vma_has_reserves(vma, chg)
> 
> and applying de Morgan's theorem yields
> 
> !avoid_reserve && vma_has_reserves(vma, chg),
> 
> which is exactly use_hstate_resv, hence the simplification is correct.

Some spacing is definitely good, as Sean pointed out.

> 
> Signed-off-by: Ackerley Tng <ackerleytng@xxxxxxxxxx>
> ---
>  mm/hugetlb.c | 33 +++++++++++----------------------
>  1 file changed, 11 insertions(+), 22 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 190fa05635f4..73165c670739 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1281,8 +1281,9 @@ static bool vma_has_reserves(struct vm_area_struct *vma, long chg)
>  	}
> 
>  	/*
> -	 * Only the process that called mmap() has reserves for
> -	 * private mappings.
> +	 * Only the process that called mmap() has reserves for private
> +	 * mappings. A child process with MAP_PRIVATE mappings created by its
> +	 * parent has no page reserves.
>  	 */
>  	if (is_vma_resv_set(vma, HPAGE_RESV_OWNER)) {
>  		/*
> @@ -1394,8 +1395,7 @@ static unsigned long available_huge_pages(struct hstate *h)
> 
>  static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
>  				struct vm_area_struct *vma,
> -				unsigned long address, int avoid_reserve,
> -				long chg)
> +				unsigned long address, bool use_hstate_resv)

Here "avoid_reserve" + "chg" is indeed confusing, especially with the prior
"if (avoid_reserve) gbl_chg = 1;".  The new flag does make
dequeue_hugetlb_folio_vma() on its own slightly easier to understand.

I still feel there is more that could be cleaned up here even after your
patches 2-3, but I suppose this can be seen as a small step forward,
considering that a single larger patch would be harder to review.  Feel
free to take:

Acked-by: Peter Xu <peterx@xxxxxxxxxx>

>  {
>  	struct folio *folio = NULL;
>  	struct mempolicy *mpol;
> @@ -1403,16 +1403,7 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
>  	nodemask_t *nodemask;
>  	int nid;
> 
> -	/*
> -	 * A child process with MAP_PRIVATE mappings created by their parent
> -	 * have no page reserves. This check ensures that reservations are
> -	 * not "stolen". The child may still get SIGKILLed
> -	 */
> -	if (!vma_has_reserves(vma, chg) && !available_huge_pages(h))
> -		goto err;
> -
> -	/* If reserves cannot be used, ensure enough pages are in the pool */
> -	if (avoid_reserve && !available_huge_pages(h))
> +	if (!use_hstate_resv && !available_huge_pages(h))
>  		goto err;
> 
>  	gfp_mask = htlb_alloc_mask(h);
> @@ -1430,7 +1421,7 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
>  		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
>  							nid, nodemask);
> 
> -	if (folio && !avoid_reserve && vma_has_reserves(vma, chg)) {
> +	if (folio && use_hstate_resv) {
>  		folio_set_hugetlb_restore_reserve(folio);
>  		h->resv_huge_pages--;
>  	}
> @@ -2973,6 +2964,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>  	struct mem_cgroup *memcg;
>  	bool deferred_reserve;
>  	gfp_t gfp = htlb_alloc_mask(h) | __GFP_RETRY_MAYFAIL;
> +	bool use_hstate_resv;
> 
>  	memcg = get_mem_cgroup_from_current();
>  	memcg_charge_ret = mem_cgroup_hugetlb_try_charge(memcg, gfp, nr_pages);
> @@ -3033,20 +3025,17 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>  	if (ret)
>  		goto out_uncharge_cgroup_reservation;
> 
> +	use_hstate_resv = !avoid_reserve && vma_has_reserves(vma, gbl_chg);
> +
>  	spin_lock_irq(&hugetlb_lock);
> -	/*
> -	 * glb_chg is passed to indicate whether or not a page must be taken
> -	 * from the global free pool (global change).  gbl_chg == 0 indicates
> -	 * a reservation exists for the allocation.
> -	 */
> -	folio = dequeue_hugetlb_folio_vma(h, vma, addr, avoid_reserve, gbl_chg);
> +	folio = dequeue_hugetlb_folio_vma(h, vma, addr, use_hstate_resv);
>  	if (!folio) {
>  		spin_unlock_irq(&hugetlb_lock);
>  		folio = alloc_buddy_hugetlb_folio_with_mpol(h, vma, addr);
>  		if (!folio)
>  			goto out_uncharge_cgroup;
>  		spin_lock_irq(&hugetlb_lock);
> -		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
> +		if (use_hstate_resv) {
>  			folio_set_hugetlb_restore_reserve(folio);
>  			h->resv_huge_pages--;
>  		}
> --
> 2.47.0.rc1.288.g06298d1525-goog
> 

-- 
Peter Xu
