On Tue, Jun 4, 2024 at 12:19 AM Miaohe Lin <linmiaohe@xxxxxxxxxx> wrote:
> On 2024/6/1 5:34, Jiaqi Yan wrote:
>> Correctable memory errors are very common on servers with large
>> amounts of memory, and are corrected by ECC, but they come with two
>> pain points for users:
>> 1. Correction usually happens on the fly and adds latency overhead.
>> 2. A not-fully-proven theory states that excessive correctable
>>    memory errors can develop into an uncorrectable memory error.
>
> Thanks for your patch.

Thanks Miaohe, sorry I missed your message (Gmail mistakenly put it in
my spam folder).

>> Soft offline is the kernel's additional solution for memory pages
>> with (excessive) corrected memory errors. The impacted page is
>> migrated to a healthy page if it is in use, then the original page
>> is discarded from any future use.
>>
>> The actual policy on whether (and when) to soft offline should be
>> maintained by userspace, especially in the case of HugeTLB hugepages.
>> Soft offline dissolves a hugepage, either in-use or free, into
>> chunks of 4K pages, reducing HugeTLB pool capacity by 1 hugepage.
>> If userspace has not acknowledged such behavior, it may be surprised
>> when a later mmap of hugepages fails with MAP_FAILED due to the lack
>> of hugepages.
>
> For the in-use hugetlb folio case, migrate_pages() is called. The
> hugetlb pool capacity won't be modified in that case. So I assume
> you're referring to the free hugetlb folio case? The hugetlb pool
> capacity is reduced in that case.

I don't think so.

For the in-use hugetlb folio case, after migrate_pages() the kernel
will dissolve_free_hugetlb_folio() the src hugetlb folio. At that
point the refcount of the src hugetlb folio should already be zero,
and remove_hugetlb_folio() will reduce the hugetlb pool capacity
(both nr_hugepages and free_hugepages) accordingly.

For the free hugetlb folio case, dissolving also happens. But a CE on
a free page should be very rare (since no one is accessing it except
the patrol scrubber).

One of my test cases in patch 2/3 validates this point: the test case
MADV_SOFT_OFFLINEs a mapped hugepage, and at the point soft offline
succeeds, both nr_hugepages and free_hugepages are reduced by 1.
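
To make the sequence concrete, here is a rough, untested sketch of
that scenario (not the actual selftest in patch 2/3; it assumes a 2MB
hugepage pool, root privileges, and CONFIG_MEMORY_FAILURE):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Assumed 2MB hugepages; adjust path/size for other pool sizes. */
#define HPAGE_SIZE	(2UL << 20)
#define NR_PATH		"/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"
#define FREE_PATH	"/sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages"

static long read_counter(const char *path)
{
	char buf[32] = {0};
	int fd = open(path, O_RDONLY);

	if (fd < 0 || read(fd, buf, sizeof(buf) - 1) < 0) {
		perror(path);
		exit(1);
	}
	close(fd);
	return atol(buf);
}

int main(void)
{
	long nr, nfree;
	char *p;

	/* Map and fault in one hugepage so the folio is in use. */
	p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(p, 1, HPAGE_SIZE);

	/* Snapshot the pool counters while the page is in use. */
	nr = read_counter(NR_PATH);
	nfree = read_counter(FREE_PATH);

	/* Migrates the data, then dissolves the src hugetlb folio. */
	if (madvise(p, HPAGE_SIZE, MADV_SOFT_OFFLINE)) {
		perror("madvise(MADV_SOFT_OFFLINE)");
		return 1;
	}

	/* Expect both counters to drop by 1. */
	printf("nr_hugepages:   %ld -> %ld\n", nr, read_counter(NR_PATH));
	printf("free_hugepages: %ld -> %ld\n", nfree, read_counter(FREE_PATH));
	return 0;
}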

> But if we don't do that, we might encounter an uncorrectable memory
> error later, which will be more severe. Will it be better to add a
> way to compensate the capacity?

If your concern is that more correctable errors will develop into a
more severe uncorrectable error, your concern is absolutely valid.
There is a tradeoff between reliability and performance (availability
of hugetlb pages), but IMO it should be decided by userspace.

Corner cases: What if finding physically contiguous memory takes too
long? What if we can't find any physically contiguous memory to
compensate with? (Then the hugetlb pool will still have to shrink.)

If we treat "compensate" as an improvement to the overall soft offline
process, it is something we can do in the future, and it is orthogonal
to this control API, right? I think if userspace explicitly tells the
kernel to soft offline, then it is also well-prepared for the corner
cases above.
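
FWIW, userspace that opted in can already attempt the compensation
itself today, e.g. by bumping nr_hugepages back up after a successful
soft offline. A hedged sketch (2MB pool assumed; the kernel may simply
fail to find contiguous memory to satisfy the request, which is
exactly the corner case above):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NR_PATH	"/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"

int main(void)
{
	char buf[32] = {0};
	int fd = open(NR_PATH, O_RDWR);

	if (fd < 0 || read(fd, buf, sizeof(buf) - 1) < 0) {
		perror(NR_PATH);
		return 1;
	}

	/* Ask the kernel to grow the pool back by one hugepage. */
	snprintf(buf, sizeof(buf), "%ld", atol(buf) + 1);
	lseek(fd, 0, SEEK_SET);
	if (write(fd, buf, strlen(buf)) < 0)
		perror("write");
	close(fd);

	/* Re-reading nr_hugepages afterwards shows whether it worked. */
	return 0;
}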

>> In addition, discarding an entire 1G memory page only because of
>> corrected memory errors sounds very costly, and the kernel had
>> better not do it under the hood. But today there are at least 2
>> such cases:
>> 1. The GHES driver sees both GHES_SEV_CORRECTED and
>>    CPER_SEC_ERROR_THRESHOLD_EXCEEDED after parsing the CPER.
>> 2. The RAS Correctable Errors Collector counts correctable errors
>>    per PFN, and when the counter for a PFN reaches its threshold,
>>    it soft offlines the page.
>> In both cases, userspace has no control over the soft offline
>> performed by the kernel's memory failure recovery.
>
> Userspace can figure out the hugetlb folio pfn range by using
> `page-types -b huge -rlN` and then decide whether to soft offline
> the page according to it. But for the GHES driver, I think it has
> to be done in the kernel. So adding a control in /sys/ seems like
> a good idea.

Thanks.
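
For the archive, that userspace flow could look roughly like the
untested sketch below: feed a PFN reported by page-types into the
long-existing /sys/devices/system/memory/soft_offline_page interface
(not the knob added by this series), which takes a physical address;
4K base pages are assumed here:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = "/sys/devices/system/memory/soft_offline_page";
	char buf[32];
	unsigned long long pfn;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pfn>\n", argv[0]);
		return 1;
	}
	pfn = strtoull(argv[1], NULL, 0);

	fd = open(path, O_WRONLY);
	if (fd < 0) {
		perror(path);
		return 1;
	}
	/* soft_offline_page takes the physical address, not the PFN. */
	snprintf(buf, sizeof(buf), "0x%llx", pfn << 12);
	if (write(fd, buf, strlen(buf)) < 0)
		perror("write");
	close(fd);
	return 0;
}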

>> This patch series gives userspace control over soft-offlining
>> HugeTLB pages: the kernel only soft offlines a hugepage if userspace
>> has opted in for that specific hugepage size. The control is exposed
>> to userspace via a new sysfs entry called softoffline_corrected_errors
>> under the /sys/kernel/mm/hugepages/hugepages-${size}kB directory:
>> * When softoffline_corrected_errors=0, skip soft offlining for all
>>   hugepages of size ${size}kB.
>> * When softoffline_corrected_errors=1, soft offline as before this
>>   patch series.
>
> Will it be better to call it "soft_offline_corrected_errors" or
> simply "soft_offline_enabled"?
"soft_offline_enabled" is less optimal as it can't be extended to
support something like "soft offline this PFN if something repeatedly
requested soft offline this exact PFN x times". (although I don't
think we need it).
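
For illustration, opting 1G pages out while keeping 2M pages opted in
would look like this with the entry proposed in this series (untested
sketch; the final name is TBD per your comment):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int write_knob(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0 || write(fd, val, strlen(val)) < 0) {
		perror(path);
		return -1;
	}
	close(fd);
	return 0;
}

int main(void)
{
	/* 0: skip soft offline; 1: soft offline as before this series. */
	write_knob("/sys/kernel/mm/hugepages/hugepages-1048576kB/softoffline_corrected_errors", "0");
	write_knob("/sys/kernel/mm/hugepages/hugepages-2048kB/softoffline_corrected_errors", "1");
	return 0;
}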