On 2024/8/17 16:49, Kefeng Wang wrote:
> Add unmap_posioned_folio() helper which will be reused by
> do_migrate_range() from memory hotplug soon.
>
> Acked-by: David Hildenbrand <david@xxxxxxxxxx>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
> ---
>   mm/internal.h       |  9 +++++++++
>   mm/memory-failure.c | 43 ++++++++++++++++++++++++++-----------------
>   2 files changed, 35 insertions(+), 17 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index d7aac802efa5..74490b8ac63d 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1045,6 +1045,8 @@ static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
>   /*
>    * mm/memory-failure.c
>    */
> +#ifdef CONFIG_MEMORY_FAILURE
> +int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu);
>   void shake_folio(struct folio *folio);
>   extern int hwpoison_filter(struct page *p);
>
> @@ -1065,6 +1067,13 @@ void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
>   			unsigned long ksm_addr);
>   unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
>
> +#else
> +static inline int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu)
> +{
> +	return 0;
> +}
> +#endif
> +
>   extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long,
>   		unsigned long, unsigned long,
>   		unsigned long, unsigned long);
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 353254537b54..93848330de1f 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1554,6 +1554,30 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
>   	return ret;
>   }
>
> +int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu)

Should it be unmap_poisoned_folio()? i.e. s/posioned/poisoned/ :)

> +{
> +	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
> +		struct address_space *mapping;
> +		/*
> +		 * For hugetlb pages in shared mappings, try_to_unmap
> +		 * could potentially call huge_pmd_unshare. Because of
> +		 * this, take semaphore in write mode here and set
> +		 * TTU_RMAP_LOCKED to indicate we have taken the lock
> +		 * at this higher level.
> +		 */
> +		mapping = hugetlb_folio_mapping_lock_write(folio);
> +		if (!mapping)
> +			return -EAGAIN;
> +
> +		try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
> +		i_mmap_unlock_write(mapping);
> +	} else {
> +		try_to_unmap(folio, ttu);
> +	}
> +
> +	return 0;

It seems the return value is not really needed? We will check folio_mapped() below anyway.

> +}
> +
>   /*
>    * Do all that is necessary to remove user space mappings. Unmap
>    * the pages and send SIGBUS to the processes if the data was dirty.
> @@ -1615,23 +1639,8 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
>   	 */
>   	collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
>
> -	if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
> -		/*
> -		 * For hugetlb pages in shared mappings, try_to_unmap
> -		 * could potentially call huge_pmd_unshare. Because of
> -		 * this, take semaphore in write mode here and set
> -		 * TTU_RMAP_LOCKED to indicate we have taken the lock
> -		 * at this higher level.
> -		 */
> -		mapping = hugetlb_folio_mapping_lock_write(folio);
> -		if (mapping) {
> -			try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
> -			i_mmap_unlock_write(mapping);
> -		} else
> -			pr_info("%#lx: could not lock mapping for mapped huge page\n", pfn);
> -	} else {
> -		try_to_unmap(folio, ttu);
> -	}
> +	if (unmap_posioned_folio(folio, ttu))
> +		pr_info("%#lx: could not lock mapping for mapped huge page\n", pfn);

It might be better to do the pr_info() inside unmap_posioned_folio()?
unmap_posioned_folio() might fail for other reasons in the future anyway.
A rough sketch of what I mean is below.

Thanks.
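For reference, a rough, untested sketch of the suggestion. It assumes
folio_pfn() is acceptable for the message, since the helper takes no
pfn argument:

	int unmap_posioned_folio(struct folio *folio, enum ttu_flags ttu)
	{
		if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
			struct address_space *mapping;
			/*
			 * For hugetlb pages in shared mappings, try_to_unmap
			 * could potentially call huge_pmd_unshare, so take the
			 * i_mmap_rwsem in write mode and pass TTU_RMAP_LOCKED.
			 */
			mapping = hugetlb_folio_mapping_lock_write(folio);
			if (!mapping) {
				/* Report here so every caller gets the message. */
				pr_info("%#lx: could not lock mapping for mapped huge page\n",
					folio_pfn(folio));
				return -EAGAIN;
			}

			try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
			i_mmap_unlock_write(mapping);
		} else {
			try_to_unmap(folio, ttu);
		}

		return 0;
	}

Then hwpoison_user_mappings() could call it without checking the return
value and rely on the folio_mapped() check below, and do_migrate_range()
would not need to duplicate the message either.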