Re: [Lsf-pc] [LSF/MM/BPF TOPIC] HGM for hugetlbfs


 



On 06/08/23 14:54, Dan Williams wrote:
> Mike Kravetz wrote:
> > On 06/07/23 10:13, David Hildenbrand wrote:
> [..]
> > I am struggling with how to support existing hugetlb users that are running
> > into issues like memory errors on hugetlb pages today.  And, yes that is a
> > source of real customer issues.  They are not really happy with the current
> > design that a single error will take out a 1G page, and their VM or
> > application.  Moving to THP is not likely as they really want a pre-allocated
> > pool of 1G pages.  I just don't have a good answer for them.
> 
> Is it the reporting interface, or the fact that the page gets offlined
> too quickly?

Somewhat both.

Reporting says the error starts at the beginning of the huge page and spans
the entire huge page size.  So, the actual error is not really isolated to
the location that failed.  In a way, this is 'desired' since hugetlb pages
are treated as a single page.

Once a page is marked with poison, we prevent subsequent faults of the page.
Since a hugetlb page is treated as a single page, the 'good data' cannot
be accessed: there is no way to fault in smaller pieces (4K pages)
of the page.  Jiaqi Yan actually put together patches to 'read' the good
4K pages within the hugetlb page [1], but we will not always have a file
handle.

[1] https://lore.kernel.org/linux-mm/20230517160948.811355-1-jiaqiyan@xxxxxxxxxx/

>              I.e. if the 1GB page was unmapped from userspace per usual
> memory-failure, but the application had an opportunity to record what
> got clobbered on a smaller granularity and then ask the kernel to repair
> the page, would that relieve some pain?

Sounds interesting.

>                                         Where repair is atomically
> writing a full cacheline of zeroes,

Excuse my hardware ignorance ... In this case, I assume writing zeroes
will repair the error in the original memory?  The data in that cacheline
would be lost (zeroed), BUT the memory could then be accessed without
error.  So, the original 1G page could be used by the application (with
that data missing, of course).

>                                     or copying around the poison to a
> new page and returning the old one to broken down and only have the
> single 4K page with error quarantined.

I suppose we could do that within the kernel; however, user space would
already have the ability to do this IF it could access the good 4K pages.
That is essentially what we do with THP by splitting the huge page and
marking only the single 4K page with poison.  That is the functionality
proposed by HGM.

It seems like asking the kernel to 'repair the page' would be a new
hugetlb specific interface.  Or, could there be other users?
-- 
Mike Kravetz



