Re: [PATCH v2 7/7] mm, swap: simplify folio swap allocation

On 02/25/25 at 02:02am, Kairui Song wrote:
......snip...
> @@ -1265,20 +1249,68 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
>  			goto start_over;
>  	}
>  	spin_unlock(&swap_avail_lock);
> -out_failed:
> +	return false;
> +}
> +
> +/**
> + * folio_alloc_swap - allocate swap space for a folio
> + * @folio: folio we want to move to swap
> + * @gfp: gfp mask for shadow nodes
> + *
> + * Allocate swap space for the folio and add the folio to the
> + * swap cache.
> + *
> + * Context: Caller needs to hold the folio lock.
> + * Return: Whether the folio was added to the swap cache.

If the function only reports whether the folio was added to the swap
cache, it would be better to return a bool for now. Anyway, this is
trivial. The whole patch looks good to me.

Reviewed-by: Baoquan He <bhe@xxxxxxxxxx>

> + */
> +int folio_alloc_swap(struct folio *folio, gfp_t gfp)
> +{
> +	unsigned int order = folio_order(folio);
> +	unsigned int size = 1 << order;
> +	swp_entry_t entry = {};
> +
> +	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> +	VM_BUG_ON_FOLIO(!folio_test_uptodate(folio), folio);
> +
> +	/*
> +	 * Should not even be attempting large allocations when huge
> +	 * page swap is disabled. Warn and fail the allocation.
> +	 */
> +	if (order && (!IS_ENABLED(CONFIG_THP_SWAP) || size > SWAPFILE_CLUSTER)) {
> +		VM_WARN_ON_ONCE(1);
> +		return -EINVAL;
> +	}
> +
> +	local_lock(&percpu_swap_cluster.lock);
> +	if (swap_alloc_fast(&entry, SWAP_HAS_CACHE, order))
> +		goto out_alloced;
> +	if (swap_alloc_slow(&entry, SWAP_HAS_CACHE, order))
> +		goto out_alloced;
>  	local_unlock(&percpu_swap_cluster.lock);
> -	return entry;
> +	return -ENOMEM;
>  
>  out_alloced:
>  	local_unlock(&percpu_swap_cluster.lock);
> -	if (mem_cgroup_try_charge_swap(folio, entry)) {
> -		put_swap_folio(folio, entry);
> -		entry.val = 0;
> -	} else {
> -		atomic_long_sub(size, &nr_swap_pages);
> -	}
> +	if (mem_cgroup_try_charge_swap(folio, entry))
> +		goto out_free;
>  
> -	return entry;
> +	/*
> +	 * XArray node allocations from PF_MEMALLOC contexts could
> +	 * completely exhaust the page allocator. __GFP_NOMEMALLOC
> +	 * stops emergency reserves from being allocated.
> +	 *
> +	 * TODO: this could cause a theoretical memory reclaim
> +	 * deadlock in the swap out path.
> +	 */
> +	if (add_to_swap_cache(folio, entry, gfp | __GFP_NOMEMALLOC, NULL))
> +		goto out_free;
> +
> +	atomic_long_sub(size, &nr_swap_pages);
> +	return 0;
> +
> +out_free:
> +	put_swap_folio(folio, entry);
> +	return -ENOMEM;
>  }
>  
>  static struct swap_info_struct *_swap_info_get(swp_entry_t entry)
....snip....




