On Fri, Jun 07, 2024 at 11:09:37AM +0200, David Hildenbrand wrote:
> We currently initialize the memmap such that PG_reserved is set and the
> refcount of the page is 1. In virtio-mem code, we have to manually clear
> that PG_reserved flag to make memory offlining with partially hotplugged
> memory blocks possible: has_unmovable_pages() would otherwise bail out on
> such pages.
>
> We want to avoid PG_reserved where possible and move to typed pages
> instead. Further, we want to further enlighten memory offlining code about
> PG_offline: offline pages in an online memory section. One example is
> handling managed page count adjustments in a cleaner way during memory
> offlining.
>
> So let's initialize the pages with PG_offline instead of PG_reserved.
> generic_online_page()->__free_pages_core() will now clear that flag before
> handing that memory to the buddy.
>
> Note that the page refcount is still 1 and would forbid offlining of such
> memory except when special care is taken during GOING_OFFLINE as
> currently only implemented by virtio-mem.
>
> With this change, we can now get non-PageReserved() pages in the XEN
> balloon list. From what I can tell, that can already happen via
> decrease_reservation(), so that should be fine.
>
> HV-balloon should not really observe a change: partial online memory
> blocks still cannot get surprise-offlined, because the refcount of these
> PageOffline() pages is 1.
>
> Update virtio-mem, HV-balloon and XEN-balloon code to be aware that
> hotplugged pages are now PageOffline() instead of PageReserved() before
> they are handed over to the buddy.
>
> We'll leave the ZONE_DEVICE case alone for now.
>
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>

> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 27e3be75edcf7..0254059efcbe1 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -734,7 +734,7 @@ static inline void section_taint_zone_device(unsigned long pfn)
>  /*
>   * Associate the pfn range with the given zone, initializing the memmaps
>   * and resizing the pgdat/zone data to span the added pages. After this
> - * call, all affected pages are PG_reserved.
> + * call, all affected pages are PageOffline().
>   *
>   * All aligned pageblocks are initialized to the specified migratetype
>   * (usually MIGRATE_MOVABLE). Besides setting the migratetype, no related
> @@ -1100,8 +1100,12 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
>  
>  	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_UNMOVABLE);
>  
> -	for (i = 0; i < nr_pages; i++)
> -		SetPageVmemmapSelfHosted(pfn_to_page(pfn + i));
> +	for (i = 0; i < nr_pages; i++) {
> +		struct page *page = pfn_to_page(pfn + i);
> +
> +		__ClearPageOffline(page);
> +		SetPageVmemmapSelfHosted(page);

So, refresh my memory here please.

AFAIR, those VmemmapSelfHosted pages were marked Reserved before, but now
memmap_init_range() will not mark them reserved anymore. I do not think
that is ok?

I am worried about walkers getting this wrong. We usually skip
PageReserved pages in walkers because they are pages we cannot deal with
for those purposes, but with this change we will leak
PageVmemmapSelfHosted pages into those walkers, and I am not sure we are
ready for that.

Moreover, boot memmap pages are marked PageReserved, which would now be
inconsistent with those added during hotplug operations.

All in all, I feel uneasy about this change.

-- 
Oscar Salvador
SUSE Labs