On 2024/8/6 17:24, David Hildenbrand wrote:
On 06.08.24 05:44, Kefeng Wang wrote:
Hi David, I have a question:
On 2024/8/2 16:02, Kefeng Wang wrote:
...
*/
- if (PageHWPoison(page)) {
+ if (unlikely(PageHWPoison(page))) {
+ folio = page_folio(page);
+
if (WARN_ON(folio_test_lru(folio)))
folio_isolate_lru(folio);
+
if (folio_mapped(folio))
- try_to_unmap(folio, TTU_IGNORE_MLOCK);
+ unmap_poisoned_folio(folio, TTU_IGNORE_MLOCK);
+
+ if (folio_test_large(folio))
+ pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
continue;
}
+ if (PageHuge(page)) {
+ pfn = page_to_pfn(head) + compound_nr(head) - 1;
+ isolate_hugetlb(folio, &source);
+ continue;
+ } else if (PageTransHuge(page))
If the page is a tail page, we will hit a BUG_ON here (with DEBUG_VM
enabled), but it seems we don't guarantee that the page won't be a tail page.
Maybe at some point we might want to remove these sanity checks or have
explicit, expected-to-be-racy folio functions, like
folio_test_hugetlb_racy(), folio_test_large_racy(), or
folio_nr_pages_racy().
Because the VM_DEBUG checks for folio_test_large() etc. actually make
sense in other contexts, where we know that concurrent splitting is
impossible.
But maybe part of the puzzle in the future will be that we want to take an
RCU read lock here and perform freeing/splitting under RCU, when we'll
also have to alloc/free the "struct folio".
OK, since we will convert the above parts to use folios, let's leave them
for the future.
+ pfn = page_to_pfn(head) + thp_nr_pages(page) - 1;
thp_nr_pages() needs a head page; I think it should use head here, so we
can directly use folio_nr_pages().
If we can use a folio in the PageHWPoison() case, can we use one here
as well? I know that it's all unreliable when not holding a folio
reference, and we have to be a bit careful.
Using a folio here is part of patch 4; I want to unify hugetlb/THP (or
large folio) handling with "pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1"
for large folios after getting a ref.
Thinking about it again: even if we don't hold a ref on the folio (because
of concurrent splitting or something else) and folio_nr_pages() returns an
incorrect value, it won't cause an issue, since we will loop and find
movable pages again in scan_movable_pages() and try to isolate them, so we
can directly use:
if (folio_test_large(folio)) {
	pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
	if (folio_test_hugetlb(folio))
		isolate_hugetlb(folio, &source);
}
Likely we should add a comment here that a large folio might get split
concurrently and that folio_nr_pages() might read garbage. But our loop
should handle that, and we would revisit the split folio later.
Thanks for your advice; I will add some comments in the new version.