On Mon, Jun 27, 2022 at 05:26:01PM +0800, Muchun Song wrote:
> On Fri, Jun 24, 2022 at 08:51:48AM +0900, Naoya Horiguchi wrote:
> > From: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
> >
> > When handling a memory error on a hugetlb page, the error handler tries
> > to dissolve it and turn it into 4kB pages.  If it's successfully
> > dissolved, the PageHWPoison flag is moved to the raw error page, so
> > that's all right.  However, dissolve sometimes fails, and then the error
> > page is left as a hwpoisoned hugepage.  It would be useful if we could
> > retry to dissolve it to save the healthy pages, but that's not possible
> > now because the information about where the raw error pages are is lost.
> >
> > Use the private field of a few tail pages to keep that information.  The
> > code path of shrinking the hugepage pool uses this info to try delayed
> > dissolve.  In order to remember multiple errors in a hugepage, a
> > singly-linked list originating from the SUBPAGE_INDEX_HWPOISON-th tail
> > page is constructed.  Only simple operations (adding an entry or
> > clearing all) are required and the list is assumed not to be very long,
> > so this simple data structure should be enough.
> >
> > If we fail to save raw error info, the hwpoison hugepage has errors on
> > an unknown subpage, and this new saving mechanism does not work any
> > more, so disable saving new raw error info and freeing hwpoison
> > hugepages.
> >
> > Signed-off-by: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
>
> Thanks for your work on this. I have several questions below.
>
...
> > @@ -1499,6 +1499,97 @@ static int try_to_split_thp_page(struct page *page, const char *msg)
> >  }
> >
> >  #ifdef CONFIG_HUGETLB_PAGE
> > +/*
> > + * Struct raw_hwp_page represents information about "raw error pages",
> > + * constructing a singly linked list originating from the ->private
> > + * field of the SUBPAGE_INDEX_HWPOISON-th tail page.
> > + */
> > +struct raw_hwp_page {
> > +        struct llist_node node;
> > +        struct page *page;
> > +};
> > +
> > +static inline struct llist_head *raw_hwp_list_head(struct page *hpage)
> > +{
> > +        return (struct llist_head *)&page_private(hpage + SUBPAGE_INDEX_HWPOISON);
> > +}
> > +
> > +static inline int raw_hwp_unreliable(struct page *hpage)
> > +{
> > +        return page_private(hpage + SUBPAGE_INDEX_HWPOISON_UNRELIABLE);
> > +}
> > +
> > +static inline void set_raw_hwp_unreliable(struct page *hpage)
> > +{
> > +        set_page_private(hpage + SUBPAGE_INDEX_HWPOISON_UNRELIABLE, 1);
> > +}
> Why not use HPAGEFLAG(HwpoisonUnreliable, hwpoison_unreliable) directly?
>

OK, that sounds better, I'll do it.
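For reference, a minimal sketch of what that could look like, assuming the
existing HPAGEFLAG() machinery in include/linux/hugetlb.h (the flag name
and the generated helper names are tentative):

/* Tentative: a new entry in enum hugetlb_page_flags. */
enum hugetlb_page_flags {
        HPG_restore_reserve = 0,
        HPG_migratable,
        HPG_temporary,
        HPG_freed,
        HPG_vmemmap_optimized,
        HPG_hwpoison_unreliable,        /* new */
        __NR_HPAGEFLAGS,
};

/*
 * Generates HPageHwpoisonUnreliable(), SetHPageHwpoisonUnreliable() and
 * ClearHPageHwpoisonUnreliable(), which test/set/clear a bit in the head
 * page's page->private, replacing the open-coded raw_hwp_unreliable()
 * and set_raw_hwp_unreliable() helpers above.
 */
HPAGEFLAG(HwpoisonUnreliable, hwpoison_unreliable)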
> > +
> > +/*
> > + * about race consideration
> > + */
> > +static inline int hugetlb_set_page_hwpoison(struct page *hpage,
> > +                                        struct page *page)
> > +{
> > +        struct llist_head *head;
> > +        struct raw_hwp_page *raw_hwp;
> > +        struct llist_node *t, *tnode;
> > +        int ret;
> > +
> > +        /*
> > +         * Once the hwpoison hugepage has lost reliable raw error info,
> > +         * there is little point in keeping additional error info
> > +         * precisely, so skip adding additional raw error info.
> > +         */
> > +        if (raw_hwp_unreliable(hpage))
> > +                return -EHWPOISON;
> > +        head = raw_hwp_list_head(hpage);
> > +        llist_for_each_safe(tnode, t, head->first) {
> > +                struct raw_hwp_page *p = container_of(tnode, struct raw_hwp_page, node);
> > +
> > +                if (p->page == page)
> > +                        return -EHWPOISON;
> > +        }
> > +
> > +        ret = TestSetPageHWPoison(hpage) ? -EHWPOISON : 0;
> > +        /* the first error event will be counted in action_result(). */
> > +        if (ret)
> > +                num_poisoned_pages_inc();
> > +
> > +        raw_hwp = kmalloc(sizeof(struct raw_hwp_page), GFP_KERNEL);
>
> This function can be called in atomic context, GFP_ATOMIC should be used
> here.

OK, I'll use GFP_ATOMIC.

> > +        if (raw_hwp) {
> > +                raw_hwp->page = page;
> > +                llist_add(&raw_hwp->node, head);
>
> The maximum amount of items in the list is one, right?

The maximum is the number of subpages in the hugepage (512 for a 2MB
hugepage, 262144 for a 1GB hugepage).

> > +        } else {
> > +                /*
> > +                 * Failed to save raw error info.  We no longer trace
> > +                 * all hwpoisoned subpages, and we need to refuse to
> > +                 * free/dissolve this hwpoisoned hugepage.
> > +                 */
> > +                set_raw_hwp_unreliable(hpage);
> > +                return ret;
> > +        }
> > +        return ret;
> > +}
> > +
> > +inline int hugetlb_clear_page_hwpoison(struct page *hpage)
> > +{
> > +        struct llist_head *head;
> > +        struct llist_node *t, *tnode;
> > +
> > +        if (raw_hwp_unreliable(hpage))
> > +                return -EBUSY;
>
> IIUC, we use the head page's PageHWPoison to synchronize
> hugetlb_clear_page_hwpoison() and hugetlb_set_page_hwpoison(), right?
> If so, who can set hwp_unreliable here?

Sorry if I might miss your point, but raw_hwp_unreliable is set when
allocating a raw_hwp_page failed.  hugetlb_set_page_hwpoison() can be
called multiple times on a hugepage, and if one of the calls fails, the
hwpoisoned hugepage becomes unreliable.

BTW, as you pointed out above, if we switch to passing GFP_ATOMIC to
kmalloc(), the kmalloc() never fails, so we no longer have to implement
this unreliable flag, so things get simpler.

> > +        ClearPageHWPoison(hpage);
> > +        head = raw_hwp_list_head(hpage);
> > +        llist_for_each_safe(tnode, t, head->first) {
>
> Is it possible that a new item is added by hugetlb_set_page_hwpoison()
> and we do not traverse it (we have cleared the page's PageHWPoison)?
> Then we ignored a real hwpoison page, right?

Maybe you are mentioning a race like the one below.  Yes, that's possible.

  CPU 0                              CPU 1

                                     free_huge_page
                                       lock hugetlb_lock
                                       ClearHPageMigratable
                                       unlock hugetlb_lock
  get_huge_page_for_hwpoison
    lock hugetlb_lock
    __get_huge_page_for_hwpoison
      hugetlb_set_page_hwpoison
        allocate raw_hwp_page
        TestSetPageHWPoison
                                     update_and_free_page
                                       __update_and_free_page
                                         if (PageHWPoison)
                                           hugetlb_clear_page_hwpoison
                                             TestClearPageHWPoison
                                             // remove all list items
        llist_add
    unlock hugetlb_lock

The end result seems not critical (leaking the raced raw_hwp_page?), but
we need a fix.

> > +                struct raw_hwp_page *p = container_of(tnode, struct raw_hwp_page, node);
> > +
> > +                SetPageHWPoison(p->page);
> > +                kfree(p);
> > +        }
> > +        llist_del_all(head);
>
> If the above issue exists, moving ClearPageHWPoison(hpage) to here could
> fix it. We should clear PageHWPoison carefully since the head page can
> also be poisoned.

The reason why I put ClearPageHWPoison(hpage) before llist_for_each_safe()
was that the raw error page can be the head page.  But this can be solved
with some additional code to remember whether the raw_hwp_page list has an
item associated with the head page.  Or another approach in my mind now is
to call hugetlb_clear_page_hwpoison() with mf_mutex taken.
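For illustration, an untested sketch of the first idea: traverse the
raw_hwp list first, remember whether the head page itself is a raw error
page, and clear the head page's PageHWPoison only when it is not.  (A
sketch of the mf_mutex variant follows the quoted remainder of the patch
at the end of this mail.)

static int hugetlb_clear_page_hwpoison(struct page *hpage)
{
        struct llist_head *head;
        struct llist_node *t, *tnode;
        bool hwpoison_on_head = false;

        if (raw_hwp_unreliable(hpage))
                return -EBUSY;
        head = raw_hwp_list_head(hpage);
        llist_for_each_safe(tnode, t, head->first) {
                struct raw_hwp_page *p =
                        container_of(tnode, struct raw_hwp_page, node);

                /* Move the flag to each raw error page ... */
                if (p->page == hpage)
                        hwpoison_on_head = true;
                else
                        SetPageHWPoison(p->page);
                kfree(p);
        }
        llist_del_all(head);
        /*
         * ... and drop it from the head page only when no raw error
         * page is the head page itself.
         */
        if (!hwpoison_on_head)
                ClearPageHWPoison(hpage);
        return 0;
}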
>
> Thanks.

Thank you for the valuable feedback.

- Naoya Horiguchi

>
> > +        return 0;
> > +}
> > +
> >  /*
> >   * Called from hugetlb code with hugetlb_lock held.
> >   *
> > @@ -1533,7 +1624,7 @@ int __get_huge_page_for_hwpoison(unsigned long pfn, int flags)
> >                  goto out;
> >          }
> >
> > -        if (TestSetPageHWPoison(head)) {
> > +        if (hugetlb_set_page_hwpoison(head, page)) {
> >                  ret = -EHWPOISON;
> >                  goto out;
> >          }
> > @@ -1585,7 +1676,7 @@ static int try_memory_failure_hugetlb(unsigned long pfn, int flags, int *hugetlb
> >          lock_page(head);
> >
> >          if (hwpoison_filter(p)) {
> > -                ClearPageHWPoison(head);
> > +                hugetlb_clear_page_hwpoison(head);
> >                  res = -EOPNOTSUPP;
> >                  goto out;
> >          }
> > --
> > 2.25.1
> >
> >
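And an equally rough sketch of the second idea mentioned above (mf_mutex):
memory_failure() already takes mf_mutex before calling into the hugetlb
path, so hugetlb_set_page_hwpoison() runs with it held; taking the same
mutex around the clearing path should prevent the interleaving shown in
the diagram.  __hugetlb_clear_page_hwpoison() here is a hypothetical
helper standing in for the current function body:

/* mf_mutex is DEFINE_MUTEX(mf_mutex) in mm/memory-failure.c. */
int hugetlb_clear_page_hwpoison(struct page *hpage)
{
        int ret;

        mutex_lock(&mf_mutex);
        /* Hypothetical helper containing the traversal/clearing code
         * shown earlier. */
        ret = __hugetlb_clear_page_hwpoison(hpage);
        mutex_unlock(&mf_mutex);
        return ret;
}

Two caveats: the hwpoison_filter() path in try_memory_failure_hugetlb()
already runs under mf_mutex, so that caller would have to use the
unlocked helper directly; and GFP_ATOMIC allocations can still fail under
memory pressure, so the unreliable-flag fallback may still be worth
keeping even after the kmalloc() flag change above.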