On 11 Jun 16:47, Miaohe Lin wrote:
> Transhuge swapcaches won't be freed in __collapse_huge_page_copy().
> It's because release_pte_page() is not called for these pages and
> thus free_page_and_swap_cache can't grab the page lock. These pages
> won't be freed from swap cache even if we are the only user until
> next time reclaim. It shouldn't hurt indeed, but we could try to
> free these pages to save more memory for system.
>
> Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
> ---
>  include/linux/swap.h | 5 +++++
>  mm/khugepaged.c      | 1 +
>  mm/swap.h            | 5 -----
>  3 files changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 8672a7123ccd..ccb83b12b724 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -456,6 +456,7 @@ static inline unsigned long total_swapcache_pages(void)
>  	return global_node_page_state(NR_SWAPCACHE);
>  }
>
> +extern void free_swap_cache(struct page *page);
>  extern void free_page_and_swap_cache(struct page *);
>  extern void free_pages_and_swap_cache(struct page **, int);
>  /* linux/mm/swapfile.c */
> @@ -540,6 +541,10 @@ static inline void put_swap_device(struct swap_info_struct *si)
>  /* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
>  #define free_swap_and_cache(e) is_pfn_swap_entry(e)
>
> +static inline void free_swap_cache(struct page *page)
> +{
> +}
> +
>  static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_mask)
>  {
>  	return 0;
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index ee0a719c8be9..52109ad13f78 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -756,6 +756,7 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>  	list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
>  		list_del(&src_page->lru);
>  		release_pte_page(src_page);
> +		free_swap_cache(src_page);
>  	}
>  }

Aside: in __collapse_huge_page_isolate() (and also here) why can't we
just check PageCompound(page) && page == compound_head(page) to only
act on compound pages once? AFAIK this would alleviate this
compound_pagelist business...

Anyway, as-is, free_page_and_swap_cache() won't be able to do
try_to_free_swap(), since it can't grab the page lock, but it will
call put_page(). I think (?) the last page ref might be dropped in
release_pte_page(), so should free_swap_cache() come before it?

>
> diff --git a/mm/swap.h b/mm/swap.h
> index 0193797b0c92..863f6086c916 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -41,7 +41,6 @@ void __delete_from_swap_cache(struct page *page,
>  void delete_from_swap_cache(struct page *page);
>  void clear_shadow_from_swap_cache(int type, unsigned long begin,
>  				  unsigned long end);
> -void free_swap_cache(struct page *page);
>  struct page *lookup_swap_cache(swp_entry_t entry,
>  			       struct vm_area_struct *vma,
>  			       unsigned long addr);
> @@ -81,10 +80,6 @@ static inline struct address_space *swap_address_space(swp_entry_t entry)
>  	return NULL;
>  }
>
> -static inline void free_swap_cache(struct page *page)
> -{
> -}
> -
>  static inline void show_swap_cache_info(void)
>  {
>  }
> --
> 2.23.0
>
>
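
To make the ordering question above concrete, I had something along
these lines in mind for the loop in __collapse_huge_page_copy()
(untested sketch, just to illustrate the order I mean, not a final
form):

	list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
		list_del(&src_page->lru);
		/* try to drop the swap cache entry while we surely still hold a ref */
		free_swap_cache(src_page);
		/* this might drop the last reference to the page */
		release_pte_page(src_page);
	}

That way free_swap_cache() can't operate on a page whose last
reference was already dropped, assuming release_pte_page() is indeed
where that last ref can go away.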