On Sat, Oct 15, 2022 at 09:58:09AM +0800, Miaohe Lin wrote:
> On 2022/10/7 9:07, Naoya Horiguchi wrote:
...
> > @@ -1785,7 +1785,8 @@ void hugetlb_clear_page_hwpoison(struct page *hpage)
> >   *         -EBUSY     - the hugepage is busy (try to retry)
> >   *         -EHWPOISON - the hugepage is already hwpoisoned
> >   */
> > -int __get_huge_page_for_hwpoison(unsigned long pfn, int flags)
> > +int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
> > +				 bool *migratable_cleared)
> >  {
> >  	struct page *page = pfn_to_page(pfn);
> >  	struct page *head = compound_head(page);
> > @@ -1815,6 +1816,15 @@ int __get_huge_page_for_hwpoison(unsigned long pfn, int flags)
> >  		goto out;
> >  	}
> >  
> > +	/*
> > +	 * Clearing HPageMigratable for hwpoisoned hugepages to prevent them
> > +	 * from being migrated by memory hotremove.
> > +	 */
> > +	if (count_increased) {
> > +		*migratable_cleared = true;
> > +		ClearHPageMigratable(head);
> 
> I think I might be nitpicking... But it seems ClearHPageMigratable is not
> enough here.
> 1. In MF_COUNT_INCREASED case, we don't know whether HPageMigratable is set.
> 2. Even if HPageMigratable is set, there might be a race window before we
>    clear HPageMigratable?

Maybe this race is what I mentioned in
https://lore.kernel.org/linux-mm/20220928012647.GA597297@xxxxxxxxx/
(the second scenario).  My stance on this case is that the hugepage is not
hwpoisoned even if it has some hardware issue (not visible to the kernel),
so the hwpoison handler cannot (or need not) cope with the race.  I guess
that this could be handled by applying the memcpy_mcsafe() mechanism to
page migration, but I'm not sure of the feasibility.

> So "*migratable_cleared = TestClearHPageMigratable" might be better?
> But I might be wrong.

Thanks, this seems to work for 1 (I need testing to check it), so I'll do
this in the next post.

> 
> With above fixed (if it's really a problem), this patch looks good to me.
> 
> Reviewed-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>

Thank you very much.

- Naoya Horiguchi