Re: [RFC PATCH v1 1/4] mm, hwpoison, hugetlb: introduce SUBPAGE_INDEX_HWPOISON to save raw error page

On Thu, May 12, 2022 at 10:31:42PM +0000, Jane Chu wrote:
> On 4/26/2022 9:28 PM, Naoya Horiguchi wrote:
> > From: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
> > 
> > When handling a memory error on a hugetlb page, the error handler tries to
> > dissolve it and turn it into 4kB pages.  If it's successfully dissolved,
> > the PageHWPoison flag is moved to the raw error page, so that's all
> > right.  However, dissolving sometimes fails, and then the error page is
> > left as a hwpoisoned hugepage.  It would be useful if we could retry to
> > dissolve it to save the healthy pages, but that's not possible now because
> > the information about where the raw error page is has been lost.
> > 
> > Use the private field of a tail page to keep that information.  The code
> > path of shrinking the hugepage pool uses this info to try delayed dissolve.
> > 
> > Signed-off-by: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
> > ---
> >   include/linux/hugetlb.h | 24 ++++++++++++++++++++++++
> >   mm/hugetlb.c            |  9 +++++++++
> >   mm/memory-failure.c     |  2 ++
> >   3 files changed, 35 insertions(+)
> > 
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index ac2a1d758a80..689e69cb556b 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -42,6 +42,9 @@ enum {
> >   	SUBPAGE_INDEX_CGROUP,		/* reuse page->private */
> >   	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
> >   	__MAX_CGROUP_SUBPAGE_INDEX = SUBPAGE_INDEX_CGROUP_RSVD,
> > +#endif
> > +#ifdef CONFIG_MEMORY_FAILURE
> > +	SUBPAGE_INDEX_HWPOISON,
> >   #endif
> >   	__NR_USED_SUBPAGE,
> >   };
> > @@ -784,6 +787,27 @@ extern int dissolve_free_huge_page(struct page *page);
> >   extern int dissolve_free_huge_pages(unsigned long start_pfn,
> >   				    unsigned long end_pfn);
> >   
> > +#ifdef CONFIG_MEMORY_FAILURE
> > +/*
> > + * pointer to raw error page is located in hpage[SUBPAGE_INDEX_HWPOISON].private
> > + */
> > +static inline struct page *hugetlb_page_hwpoison(struct page *hpage)
> > +{
> > +	return (void *)page_private(hpage + SUBPAGE_INDEX_HWPOISON);
> > +}
> > +
> > +static inline void hugetlb_set_page_hwpoison(struct page *hpage,
> > +					struct page *page)
> > +{
> > +	set_page_private(hpage + SUBPAGE_INDEX_HWPOISON, (unsigned long)page);
> > +}
> 
> What happens if the ->private field is already holding a poisoned page
> pointer?  That is, in a scenario of multiple poisoned pages within a
> hugepage, what should we do?  Mark the entire hpage as poisoned?

Hi Jane,

The current version does not consider the multiple poisoned pages scenario,
so if that happens, the ->private field would simply be overwritten.
But in this patch hugetlb_set_page_hwpoison() is called just after the
"if (TestSetPageHWPoison(head))" check, so hugetlb_set_page_hwpoison()
is not expected to be called more than once on a single hugepage.
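
For reference, a minimal sketch of that ordering (the actual hunk in
mm/memory-failure.c is not quoted in this mail, so the surrounding error
handling here is only illustrative):

	if (TestSetPageHWPoison(head)) {
		/* head was already marked: a previous error is recorded */
		res = -EHWPOISON;
		goto out;
	}
	/* first error on this hugepage: remember which raw subpage it hit */
	hugetlb_set_page_hwpoison(head, page);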

When we try to support the multiple poison scenario, we may add some code
to the "already hwpoisoned" path to store additional info about the raw
error pages.  The implementation details are still to be determined; one
possible shape is sketched below.
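
Purely as an illustration (these names are made up here and are not part of
this patch), one way would be to chain the raw error pages off the same
subpage instead of keeping a single pointer in ->private:

	/* hypothetical record for one raw error page inside a hugepage */
	struct raw_hwp_page {
		struct llist_node node;
		struct page *page;
	};

	static inline struct llist_head *raw_hwp_list_head(struct page *hpage)
	{
		return (struct llist_head *)&page_private(hpage + SUBPAGE_INDEX_HWPOISON);
	}

Each new error would then llist_add() a raw_hwp_page entry instead of
overwriting ->private, and the delayed dissolve path would walk that list.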

Thanks,
Naoya Horiguchi