On 05.07.24 12:48, Lance Yang wrote:
Hi David and Barry,
Thanks a lot for paying attention!
On Fri, Jul 5, 2024 at 6:14 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
On 05.07.24 12:12, Barry Song wrote:
On Fri, Jul 5, 2024 at 9:08 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
@@ -3253,8 +3259,9 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	i_mmap_unlock_read(mapping);
 out:
 	xas_destroy(&xas);
-	if (is_thp)
+	if (order >= HPAGE_PMD_ORDER)
We likely should be using "== HPAGE_PMD_ORDER" here, to be safe for the
future.
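(For illustration only, not part of the thread or of the original patch: with the stricter check, and assuming the guarded statement is the existing THP_SPLIT_PAGE vmstat update in this function, the hunk would read roughly as below.)

	/* Hypothetical sketch: count the legacy THP split events only for
	 * folios that are exactly PMD-sized, not merely PMD-mappable. */
	if (order == HPAGE_PMD_ORDER)
		count_vm_event(!ret ? THP_SPLIT_PAGE : THP_SPLIT_PAGE_FAILED);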
I feel this might need to be separate, since all the other places are using
folio_test_pmd_mappable()?
Likely, but as you are moving away from this ... this counter here does,
and always will, only care about HPAGE_PMD_ORDER.
I appreciate the different opinions on whether we should use
">= HPAGE_PMD_ORDER" or "==" for this check.
In this context, let's leave it as is and stay consistent with
folio_test_pmd_mappable() by using ">= HPAGE_PMD_ORDER".
What do you think?
I don't think it's a good idea to add more wrong code that is even
harder to grep (folio_test_pmd_mappable() would at least give you candidates
that might need attention). But I don't care too much. Maybe someone here can
volunteer to clean up these instances, to make sure we check PMD-size and
not PMD-mappable for these counters, which are for PMD-sized folios only,
even in the future with larger folios?
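(For reference, a minimal sketch of the distinction being discussed, based on the helper's definition in recent kernels; double-check it against the tree you are working on. The pmd_sized variable below is purely illustrative.)

	/* PMD-mappable: true for any folio of at least PMD order, so it would
	 * also match folios larger than PMD size if those ever show up. */
	static inline bool folio_test_pmd_mappable(struct folio *folio)
	{
		return folio_order(folio) >= HPAGE_PMD_ORDER;
	}

	/* PMD-sized: exactly PMD order, which is what the existing THP split
	 * counters are meant to track. */
	bool pmd_sized = folio_order(folio) == HPAGE_PMD_ORDER;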
--
Cheers,
David / dhildenb