Re: [Lsf-pc] [LSF/MM/BPF TOPIC] HGM for hugetlbfs

On Thu, Jun 8, 2023 at 8:36 PM Dan Williams <dan.j.williams@xxxxxxxxx> wrote:
>
> [ add Jane ]
>
> Mike Kravetz wrote:
> > On 06/08/23 14:54, Dan Williams wrote:
> > > Mike Kravetz wrote:
> > > > On 06/07/23 10:13, David Hildenbrand wrote:
> > > [..]
> > > > I am struggling with how to support existing hugetlb users that are running
> > > > into issues like memory errors on hugetlb pages today.  And, yes that is a
> > > > source of real customer issues.  They are not really happy with the current
> > > > design that a single error will take out a 1G page, and their VM or
> > > > application.  Moving to THP is not likely as they really want a pre-allocated
> > > > pool of 1G pages.  I just don't have a good answer for them.
> > >
> > > Is it the reporting interface, or the fact that the page gets offlined
> > > too quickly?
> >
> > Somewhat both.
> >
> > Reporting says the error starts at the beginning of the huge page with
> > length of huge page size.  So, actual error is not really isolated.  In
> > a way, this is 'desired' since hugetlb pages are treated as a single page.
>
> On x86 the error reporting is always by cacheline, but it's the
> memory-failure code that turns that into a SIGBUS with the sigaction
> info indicating failure relative to the page-size. That interface has
> been awkward for PMEM as well as Jane can attest.
>
> > Once a page is marked with poison, we prevent subsequent faults of the page.
>
> That makes sense.
>
> > Since a hugetlb page is treated as a single page, the 'good data' can
> > not be accessed as there is no way to fault in smaller pieces (4K pages)
> > of the page.  Jiaqi Yan actually put together patches to 'read' the good
> > 4K pages within the hugetlb page [1], but we will not always have a file
> > handle.
>
> That mitigation is also a problem for device-dax that makes hard
> guarantees that mappings will always be aligned, mainly to keep the
> driver simple.
>
> >
> > [1] https://lore.kernel.org/linux-mm/20230517160948.811355-1-jiaqiyan@xxxxxxxxxx/
> >
> > >              I.e. if the 1GB page was unmapped from userspace per usual
> > > memory-failure, but the application had an opportunity to record what
> > > got clobbered on a smaller granularity and then ask the kernel to repair
> > > the page, would that relieve some pain?
> >
> > Sounds interesting.
> >
> > >                                         Where repair is atomically
> > > writing a full cacheline of zeroes,
> >
> > Excuse my hardware ignorance ... In this case, I assume writing zeroes
> > will repair the error on the original memory?  This would then result
> > in data loss/zeroed, BUT the memory could be accessed without error.
> > So, the original 1G page could be used by the application (with data
> > missing of course).
>
> Yes, but it depends. Sometimes poison is a permanent error and no amount
> of writing to it can correct the error; sometimes it is transient, like a
> high-energy particle flipping a bit in the cell; and sometimes it is
> deposited from outside the memory controller, as when a poisoned
> dirty cacheline gets written back.
>
> The majority of the time, outside catastrophic loss of a whole rank,
> it's only 64-bytes at a time that has gone bad.
>
> > >                                     or copying the data around the
> > > poison to a new page and returning the old one to be broken down, so
> > > that only the single 4K page with the error is quarantined.
> >
> > I suppose we could do that within the kernel, however user space would
> > have the ability to do this IF it could access the good 4K pages.  That
> > is essentially what we do with THP pages by splitting and just marking a
> > single 4K page with poison.  That is the functionality proposed by HGM.
> >
> > It seems like asking the kernel to 'repair the page' would be a new
> > hugetlb specific interface.  Or, could there be other users?
>
> I think there are other users for this.
>
> Jane worked on DAX_RECOVERY_WRITE support which is a way for a DIRECT_IO
> write on a DAX file (guaranteed to be page aligned) to plumb an
> operation to the pmem driver to repair a location that is not mmap'able
> due to hardware poison.
>
> However that's fsdax specific. It would be nice to be able to have
> SIGBUS handlers that can ask the kernel to overwrite the cacheline and
> restore access to the rest of the page. It seems unfortunate to live
> with throwing away 1GB - 64-bytes of capacity on the first sign of
> trouble.
>
> The nice thing about hugetlb compared to pmem is that you do not need to
> repair in place, in case the error is permanent. Conceivably the kernel
> could allocate a new page, perform the copy of the good bits on behalf
> of the application, and let the page be mapped again. If that copy
> encounters poison rinse and repeat until it succeeds or the application
> says, "you know what, I think it's dead, thanks anyway".

I'm not sure if this is compatible with what we need for VMs. We can't
overwrite/zero guest memory unless the guest were somehow enlightened,
which we can't guarantee. We can't allow the guest to keep triggering
memory errors -- i.e., we have to unmap the memory at least from the
EPT (ideally by unmapping it from the userspace page tables).

So, we could:
1. Do what HGM does and have the kernel unmap the 4K page in the
userspace page tables.
2. On-the-fly change the VMA for our hugepage to not be HugeTLB
anymore, and re-map all the good 4K pages.
3. Tell userspace that it must change its mapping from HugeTLB to
something else, and move the good 4K pages into the new mapping.

(2) feels like more complexity than (1). If a user created a
MAP_HUGETLB mapping and now it isn't HugeTLB, that feels wrong.

(3) today isn't possible, but with Jiaqi's improvement to hugetlbfs
read() it becomes possible. We'll need to have an extra 1G of memory
while we are doing this copying/recovery, and it isn't transparent at
all.

(3) is additionally painful when considering live migration. We have to
keep the 4K page unmapped after the migration (to keep it poisoned
from the guest's perspective), even though the page is no longer
*actually* poisoned on the host. And to get the memory needed to back
our fake-poisoned pages with tmpfs, we would have to free our 1G page;
getting that page back later isn't trivial.

So (1) still seems like the most natural solution, and the question
becomes: how exactly do we implement 4K unmapping? That brings us
back to the main question of how HGM should be implemented in
general.

>
> It's something that has been on the "when there is time pile", but maybe
> instead of making hugetlb more complicated this effort goes to make
> memory-failure more capable.

I like this line of thinking, but as I see it right now, we still need
something like HGM -- maybe I'm wrong. :)

- James
