Re: [Lsf-pc] [LSF/MM/BPF TOPIC] HGM for hugetlbfs

[ add Jane ]

Mike Kravetz wrote:
> On 06/08/23 14:54, Dan Williams wrote:
> > Mike Kravetz wrote:
> > > On 06/07/23 10:13, David Hildenbrand wrote:
> > [..]
> > > I am struggling with how to support existing hugetlb users that are running
> > > into issues like memory errors on hugetlb pages today.  And, yes, that is a
> > > source of real customer issues.  They are not really happy with the current
> > > design, where a single error takes out a 1G page, and with it their VM or
> > > application.  Moving to THP is not likely as they really want a pre-allocated
> > > pool of 1G pages.  I just don't have a good answer for them.
> > 
> > Is it the reporting interface, or the fact that the page gets offlined
> > too quickly?
> 
> Somewhat both.
> 
> Reporting says the error starts at the beginning of the huge page with a
> length of the huge page size.  So, the actual error is not really isolated.
> In a way, this is 'desired' since hugetlb pages are treated as a single page.

On x86 the error reporting is always by cacheline, but it's the
memory-failure code that turns that into a SIGBUS with the sigaction
info indicating failure relative to the page size. That interface has
been awkward for PMEM as well, as Jane can attest.
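For illustration, here is roughly what that looks like from userspace
today (a minimal sketch): si_addr_lsb encodes the granularity of the
loss, so for a 1G hugetlb page the application is told 2^30 bytes are
gone even though only one cacheline is actually bad.

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

/* Note: fprintf() is not async-signal-safe; fine for a sketch. */
static void sigbus_handler(int sig, siginfo_t *info, void *ucontext)
{
	if (info->si_code == BUS_MCEERR_AR || info->si_code == BUS_MCEERR_AO)
		fprintf(stderr, "poison at %p, granularity 2^%d bytes\n",
			info->si_addr, info->si_addr_lsb);
	_exit(1);
}

int main(void)
{
	struct sigaction sa = {
		.sa_sigaction = sigbus_handler,
		.sa_flags = SA_SIGINFO,
	};

	sigaction(SIGBUS, &sa, NULL);
	/* ... touch memory that may be poisoned ... */
	return 0;
}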

> Once a page is marked with poison, we prevent subsequent faults of the page.

That makes sense.

> Since a hugetlb page is treated as a single page, the 'good data' cannot
> be accessed as there is no way to fault in smaller pieces (4K pages)
> of the page.  Jiaqi Yan actually put together patches to 'read' the good
> 4K pages within the hugetlb page [1], but we will not always have a file
> handle.

That mitigation is also a problem for device-dax, which makes hard
guarantees that mappings will always be aligned, mainly to keep the
driver simple.

> 
> [1] https://lore.kernel.org/linux-mm/20230517160948.811355-1-jiaqiyan@xxxxxxxxxx/
> 
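As an aside, here is a rough userspace sketch of what [1] would enable
(hedged: the EIO error code and exact read semantics are my assumption,
not necessarily what the patches implement): walk the poisoned huge
page in 4K chunks through the file handle and keep whatever still reads
cleanly.

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

#define CHUNK 4096UL

/* Copy the readable 4K chunks of a poisoned region into dst,
 * skipping subpages whose reads fail (assumed to return EIO). */
static void salvage(int fd, off_t off, size_t len, char *dst)
{
	for (size_t i = 0; i < len; i += CHUNK) {
		if (pread(fd, dst + i, CHUNK, (off_t)(off + i)) < 0 &&
		    errno == EIO)
			fprintf(stderr, "lost 4K at offset %lld\n",
				(long long)(off + i));
	}
}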
> >              I.e. if the 1GB page was unmapped from userspace per usual
> > memory-failure, but the application had an opportunity to record what
> > got clobbered on a smaller granularity and then ask the kernel to repair
> > the page, would that relieve some pain?
> 
> Sounds interesting.
> 
> >                                         Where repair is atomically
> > writing a full cacheline of zeroes,
> 
> Excuse my hardware ignorance ... In this case, I assume writing zeroes
> will repair the error on the original memory?  This would then result
> in data loss/zeroed, BUT the memory could be accessed without error.
> So, the original 1G page could be used by the application (with data
> missing of course).

Yes, but it depends. Sometimes poison is a permanent error and no amount
of writing to it can correct it; sometimes it is transient, such as when
a high-energy particle flips a bit in the cell; and sometimes it is
deposited from outside the memory controller, as when a poisoned dirty
cacheline gets written back.

The majority of the time, outside catastrophic loss of a whole rank,
it is only 64 bytes at a time that go bad.

> >                                     or copying around the poison to a
> > new page and returning the old one to be broken down, so that only the
> > single 4K page with the error is quarantined.
> 
> I suppose we could do that within the kernel; however, user space would
> have the ability to do this IF it could access the good 4K pages.  That
> is essentially what we do with THP pages by splitting and just marking a
> single 4K page with poison.  That is the functionality proposed by HGM.
> 
> It seems like asking the kernel to 'repair the page' would be a new
> hugetlb-specific interface.  Or, could there be other users?

I think there are other users for this.

Jane worked on DAX_RECOVERY_WRITE support, which is a way for a DIRECT_IO
write on a DAX file (guaranteed to be page aligned) to plumb an
operation to the pmem driver to repair a location that is not mmap'able
due to hardware poison.
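For reference, a sketch of that flow (paraphrasing the fs/dax.c logic
from memory, not a verbatim copy): a write that trips over poison
retries dax_direct_access() in DAX_RECOVERY_WRITE mode and lets the
driver clear the poison as part of performing the write.

#include <linux/dax.h>
#include <linux/uio.h>

static ssize_t write_with_recovery(struct dax_device *dax_dev, pgoff_t pgoff,
				   long npages, size_t len,
				   struct iov_iter *iter)
{
	void *kaddr;
	long avail;

	avail = dax_direct_access(dax_dev, pgoff, npages, DAX_ACCESS,
				  &kaddr, NULL);
	if (avail < 0) {
		/* the range contains poison: retry in recovery mode */
		avail = dax_direct_access(dax_dev, pgoff, npages,
					  DAX_RECOVERY_WRITE, &kaddr, NULL);
		if (avail < 0)
			return avail;
		/* the driver repairs the bad locations while writing */
		return dax_recovery_write(dax_dev, pgoff, kaddr, len, iter);
	}
	return copy_from_iter(kaddr, len, iter);
}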

However, that's fsdax-specific. It would be nice to have SIGBUS
handlers that can ask the kernel to overwrite the cacheline and
restore access to the rest of the page. It seems unfortunate to live
with throwing away 1GB minus 64 bytes of capacity on the first sign of
trouble.
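Strawman of what that could look like for an application (entirely
hypothetical; MADV_ZERO_POISON does not exist, the name and value are
invented for illustration): the SIGBUS handler records what got
clobbered, and the application later asks the kernel to zero the bad
cacheline and restore the mapping.

#include <signal.h>
#include <sys/mman.h>

#define MADV_ZERO_POISON 99	/* invented for illustration only */

static void *bad_addr;

static void sigbus_handler(int sig, siginfo_t *info, void *ucontext)
{
	bad_addr = info->si_addr;	/* record what got clobbered */
}

/* Hypothetical: zero the poisoned cacheline in place and restore
 * access to the remaining 1GB minus 64 bytes of the mapping. */
static int repair(void)
{
	return madvise((void *)((unsigned long)bad_addr & ~63UL), 64,
		       MADV_ZERO_POISON);
}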

The nice thing about hugetlb compared to pmem is that you do not need to
repair in place in case the error is permanent. Conceivably the kernel
could allocate a new page, perform the copy of the good bits on behalf
of the application, and let the page be mapped again. If that copy
encounters more poison, rinse and repeat until it succeeds or the
application says, "you know what, I think it's dead, thanks anyway".

It's something that has been on the "when there is time" pile, but maybe
instead of making hugetlb more complicated this effort could go toward
making memory-failure more capable.



