Re: [RFC PATCH 1/3] mm/migrate: Add folio_migrate_mapping

On 10 May 2021, at 19:23, Matthew Wilcox (Oracle) wrote:

> Reimplement migrate_page_move_mapping() as a wrapper around
> folio_migrate_mapping().  Saves 193 bytes of kernel text.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> ---
>  include/linux/migrate.h |  2 +
>  mm/folio-compat.c       | 11 ++++++
>  mm/migrate.c            | 85 +++++++++++++++++++++--------------------
>  3 files changed, 57 insertions(+), 41 deletions(-)
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 4bb4e519e3f5..a4ff65e9c1e3 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -51,6 +51,8 @@ extern int migrate_huge_page_move_mapping(struct address_space *mapping,
>  				  struct page *newpage, struct page *page);
>  extern int migrate_page_move_mapping(struct address_space *mapping,
>  		struct page *newpage, struct page *page, int extra_count);
> +int folio_migrate_mapping(struct address_space *mapping,
> +		struct folio *newfolio, struct folio *folio, int extra_count);
>  #else
>
>  static inline void putback_movable_pages(struct list_head *l) {}
> diff --git a/mm/folio-compat.c b/mm/folio-compat.c
> index d229b979b00d..25c2269655f4 100644
> --- a/mm/folio-compat.c
> +++ b/mm/folio-compat.c
> @@ -4,6 +4,7 @@
>   * eventually.
>   */
>
> +#include <linux/migrate.h>
>  #include <linux/pagemap.h>
>  #include <linux/swap.h>
>
> @@ -60,3 +61,13 @@ void mem_cgroup_uncharge(struct page *page)
>  	folio_uncharge_cgroup(page_folio(page));
>  }
>  #endif
> +
> +#ifdef CONFIG_MIGRATION
> +int migrate_page_move_mapping(struct address_space *mapping,
> +		struct page *newpage, struct page *page, int extra_count)
> +{
> +	return folio_migrate_mapping(mapping, page_folio(newpage),
> +					page_folio(page), extra_count);
> +}
> +EXPORT_SYMBOL(migrate_page_move_mapping);
> +#endif
> diff --git a/mm/migrate.c b/mm/migrate.c
> index fff63e139767..b668970acd11 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -355,7 +355,7 @@ static int expected_page_refs(struct address_space *mapping, struct page *page)
>  	 */
>  	expected_count += is_device_private_page(page);
>  	if (mapping)
> -		expected_count += thp_nr_pages(page) + page_has_private(page);
> +		expected_count += compound_nr(page) + page_has_private(page);

Why this change? Is it because you are passing &folio->page to
expected_page_refs() below, so the number of pages in the folio should be
obtained with folio_nr_pages(), which just returns compound_nr()?

The change seems to imply that a folio can be a compound page and be
migrated even when THP is disabled. Is that the case, or is there another
reason?
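For reference, my reading of the helpers involved is roughly the following
(a simplified sketch, not verbatim kernel source):

static inline unsigned long compound_nr(struct page *page)
{
	/* Always reflects the real size of a compound page. */
	return 1UL << compound_order(page);
}

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static inline int thp_nr_pages(struct page *page)
{
	return PageHead(page) ? HPAGE_PMD_NR : 1;
}
#else
static inline int thp_nr_pages(struct page *page)
{
	/* Hard-wired to 1 when THP is disabled. */
	return 1;
}
#endif

/* And in this series, folio_nr_pages() boils down to compound_nr(): */
static inline long folio_nr_pages(struct folio *folio)
{
	return compound_nr(&folio->page);
}

So with CONFIG_TRANSPARENT_HUGEPAGE=n the old code always counted a single
page, while compound_nr()/folio_nr_pages() still report the full size of a
compound page, hence my question above.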

>
>  	return expected_count;
>  }
> @@ -368,74 +368,75 @@ static int expected_page_refs(struct address_space *mapping, struct page *page)
>   * 2 for pages with a mapping
>   * 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
>   */
> -int migrate_page_move_mapping(struct address_space *mapping,
> -		struct page *newpage, struct page *page, int extra_count)
> +int folio_migrate_mapping(struct address_space *mapping,
> +		struct folio *newfolio, struct folio *folio, int extra_count)
>  {
> -	XA_STATE(xas, &mapping->i_pages, page_index(page));
> +	XA_STATE(xas, &mapping->i_pages, folio_index(folio));
>  	struct zone *oldzone, *newzone;
>  	int dirty;
> -	int expected_count = expected_page_refs(mapping, page) + extra_count;
> -	int nr = thp_nr_pages(page);
> +	int expected_count = expected_page_refs(mapping, &folio->page) + extra_count;
> +	int nr = folio_nr_pages(folio);

—
Best Regards,
Yan Zi
