On 19.11.24 17:12, Zi Yan wrote:
> On 19 Nov 2024, at 10:29, David Hildenbrand wrote:
>>> +/* Split a multi-block free page into its individual pageblocks. */
>>> +static void split_large_buddy(struct zone *zone, struct page *page,
>>> +			      unsigned long pfn, int order, fpi_t fpi)
>>> +{
>>> +	unsigned long end = pfn + (1 << order);
>>> +
>>> +	VM_WARN_ON_ONCE(!IS_ALIGNED(pfn, 1 << order));
>>> +	/* Caller removed page from freelist, buddy info cleared! */
>>> +	VM_WARN_ON_ONCE(PageBuddy(page));
>>> +
>>> +	if (order > pageblock_order)
>>> +		order = pageblock_order;
>>> +
>>> +	while (pfn != end) {
>>> +		int mt = get_pfnblock_migratetype(page, pfn);
>>> +
>>> +		__free_one_page(page, pfn, zone, order, mt, fpi);
>>> +		pfn += 1 << order;
>>> +		page = pfn_to_page(pfn);
>>> +	}
>>> +}
>> Hi,
>> stumbling over this while digging through the code ....
>>> +
>>>  static void free_one_page(struct zone *zone, struct page *page,
>>>  		unsigned long pfn, unsigned int order,
>>>  		fpi_t fpi_flags)
>>>  {
>>>  	unsigned long flags;
>>> -	int migratetype;
>>>  	spin_lock_irqsave(&zone->lock, flags);
>>> -	migratetype = get_pfnblock_migratetype(page, pfn);
>>> -	__free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
>> This change is rather undesired:
>> via __free_pages_core()->__free_pages_ok() we can easily end up here with order=MAX_PAGE_ORDER.
> Do you have a concrete example? PMD THP on x86_64 is pageblock_order.
> We do not have PMD level mTHP yet. Any other possible source?
Memory init during boot. See deferred_free_pages() and
__free_pages_memory()->memblock_free_pages().
So this path is used during boot to expose most memory to the buddy in
MAX_PAGE_ORDER granularity.
The other is memory hotplug via generic_online_pages().
--
Cheers,
David / dhildenb