On 2022/6/24 7:51, Naoya Horiguchi wrote:
> From: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
>
> When handling memory error on a hugetlb page, the error handler tries to
> dissolve and turn it into 4kB pages. If it's successfully dissolved,
> PageHWPoison flag is moved to the raw error page, so that's all right.
> However, dissolve sometimes fails, then the error page is left as
> hwpoisoned hugepage. It's useful if we can retry to dissolve it to save
> healthy pages, but that's not possible now because the information about
> where the raw error pages is lost.
>
> Use the private field of a few tail pages to keep that information. The
> code path of shrinking hugepage pool uses this info to try delayed dissolve.
> In order to remember multiple errors in a hugepage, a singly-linked list
> originated from SUBPAGE_INDEX_HWPOISON-th tail page is constructed. Only
> simple operations (adding an entry or clearing all) are required and the
> list is assumed not to be very long, so this simple data structure should
> be enough.
>
> If we failed to save raw error info, the hwpoison hugepage has errors on
> unknown subpage, then this new saving mechanism does not work any more,
> so disable saving new raw error info and freeing hwpoison hugepages.
>
> Signed-off-by: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>

Many thanks for your patch. This patch looks good to me. Some nits below.

<snip>

> +static inline int hugetlb_set_page_hwpoison(struct page *hpage,
> +					struct page *page)
> +{
> +	struct llist_head *head;
> +	struct raw_hwp_page *raw_hwp;
> +	struct llist_node *t, *tnode;
> +	int ret;
> +
> +	/*
> +	 * Once the hwpoison hugepage has lost reliable raw error info,
> +	 * there is little mean to keep additional error info precisely,

It should be s/mean/meaning/ ?

> +	 * so skip to add additional raw error info.
> +	 */
> +	if (raw_hwp_unreliable(hpage))
> +		return -EHWPOISON;
> +	head = raw_hwp_list_head(hpage);
> +	llist_for_each_safe(tnode, t, head->first) {
> +		struct raw_hwp_page *p = container_of(tnode, struct raw_hwp_page, node);
> +
> +		if (p->page == page)
> +			return -EHWPOISON;
> +	}
> +
> +	ret = TestSetPageHWPoison(hpage) ? -EHWPOISON : 0;
> +	/* the first error event will be counted in action_result(). */
> +	if (ret)
> +		num_poisoned_pages_inc();
> +
> +	raw_hwp = kmalloc(sizeof(struct raw_hwp_page), GFP_KERNEL);
> +	if (raw_hwp) {
> +		raw_hwp->page = page;
> +		llist_add(&raw_hwp->node, head);
> +	} else {
> +		/*
> +		 * Failed to save raw error info. We no longer trace all
> +		 * hwpoisoned subpages, and we need refuse to free/dissolve
> +		 * this hwpoisoned hugepage.
> +		 */
> +		set_raw_hwp_unreliable(hpage);
> +		return ret;

This "return ret" can be combined with the one below?

> +	}
> +	return ret;
> +}
> +
> +inline int hugetlb_clear_page_hwpoison(struct page *hpage)
> +{
> +	struct llist_head *head;
> +	struct llist_node *t, *tnode;
> +
> +	if (raw_hwp_unreliable(hpage))
> +		return -EBUSY;

Can we try freeing the memory of raw_hwp_list here to save some memory? It
seems raw_hwp_list becomes unneeded once raw_hwp_unreliable is set. Thanks.

> +	ClearPageHWPoison(hpage);
> +	head = raw_hwp_list_head(hpage);
> +	llist_for_each_safe(tnode, t, head->first) {
> +		struct raw_hwp_page *p = container_of(tnode, struct raw_hwp_page, node);
> +
> +		SetPageHWPoison(p->page);
> +		kfree(p);
> +	}
> +	llist_del_all(head);
> +	return 0;
> +}
> +
>  /*
>   * Called from hugetlb code with hugetlb_lock held.
>   *
> @@ -1533,7 +1624,7 @@ int __get_huge_page_for_hwpoison(unsigned long pfn, int flags)
>  		goto out;
>  	}
>
> -	if (TestSetPageHWPoison(head)) {
> +	if (hugetlb_set_page_hwpoison(head, page)) {
>  		ret = -EHWPOISON;
>  		goto out;
>  	}
> @@ -1585,7 +1676,7 @@ static int try_memory_failure_hugetlb(unsigned long pfn, int flags, int *hugetlb
>  	lock_page(head);
>
>  	if (hwpoison_filter(p)) {
> -		ClearPageHWPoison(head);
> +		hugetlb_clear_page_hwpoison(head);
>  		res = -EOPNOTSUPP;
>  		goto out;
>  	}
>
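For what it's worth, the list operations involved here (add with duplicate
check, clear-all, and the unreliable flag) are simple enough to model outside
the kernel. The sketch below is purely my own illustration, not kernel code:
the llist machinery, the tail-page private fields, and the page-flag helpers
are stood in for by plain pointers and a bool, and the names
(`hugepage_model`, `record_raw_hwp`, `clear_raw_hwp`) are made up for the
example.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Userspace stand-in for one raw_hwp_page entry in the patch. */
struct raw_hwp_page {
	struct raw_hwp_page *next;
	void *page;			/* the raw (4kB) error page */
};

/* Stand-in for the hugepage: head pointer models the list rooted at the
 * SUBPAGE_INDEX_HWPOISON-th tail page; the bool models raw_hwp_unreliable. */
struct hugepage_model {
	struct raw_hwp_page *head;
	bool raw_hwp_unreliable;
};

/* Record one raw error page, like hugetlb_set_page_hwpoison(): return -1
 * (mimicking -EHWPOISON) for a duplicate or when tracking is disabled,
 * 0 when a new entry is recorded. */
static int record_raw_hwp(struct hugepage_model *h, void *page)
{
	struct raw_hwp_page *p;

	if (h->raw_hwp_unreliable)
		return -1;
	for (p = h->head; p; p = p->next)
		if (p->page == page)
			return -1;	/* already recorded for this subpage */
	p = malloc(sizeof(*p));
	if (!p) {
		/* allocation failed: stop trusting the list from now on */
		h->raw_hwp_unreliable = true;
		return -1;
	}
	p->page = page;
	p->next = h->head;		/* like llist_add(): push at head */
	h->head = p;
	return 0;
}

/* Clear all entries, like hugetlb_clear_page_hwpoison(): refuse (mimicking
 * -EBUSY) when tracking is unreliable, otherwise free every entry. The real
 * code also moves PageHWPoison back to each raw page before kfree(). */
static int clear_raw_hwp(struct hugepage_model *h)
{
	struct raw_hwp_page *p, *t;

	if (h->raw_hwp_unreliable)
		return -1;
	for (p = h->head; p; p = t) {
		t = p->next;
		free(p);
	}
	h->head = NULL;
	return 0;
}
```

This is only meant to make the intended semantics easy to see; in particular
it shows why, once `raw_hwp_unreliable` is set, the entries already on the
list are never visited again, which is what my question about freeing them
above is getting at.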