On 25.07.19 13:46, Nitesh Narayan Lal wrote:
>
> On 7/25/19 4:53 AM, David Hildenbrand wrote:
>> On 24.07.19 19:03, Alexander Duyck wrote:
>>> From: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
>>>
>>> In order to pave the way for free page hinting in virtualized
>>> environments we will need a way to get pages out of the free lists
>>> and identify those pages after they have been returned. To accomplish
>>> this, this patch adds the concept of a Hinted Buddy, which is
>>> essentially meant to just be the Offline page type used in
>>> conjunction with the Buddy page type.
>>>
>>> It adds a set of pointers we shall call "boundary" which represents
>>> the upper boundary between the unhinted and hinted pages. The general
>>> idea is that in order for a page to cross from one side of the
>>> boundary to the other it will need to go through the hinting process.
>>> Ultimately a free_list has been fully processed when the boundary has
>>> been moved from the tail all the way up to occupying the first entry
>>> in the list.
>>>
>>> Doing this we should be able to make certain that we keep the hinted
>>> pages as one contiguous block in each free list. This will allow us
>>> to efficiently manipulate the free lists whenever we need to go in
>>> and start sending hints to the hypervisor that there are new pages
>>> that have been freed and are no longer in use.
>>>
>>> An added advantage to this approach is that we should be reducing the
>>> overall memory footprint of the guest, as it will be more likely to
>>> recycle warm pages than to try to allocate the hinted pages that were
>>> likely evicted from guest memory.
>>>
>>> Since we will only be hinting one zone at a time we keep the boundary
>>> limited to being defined for just the zone we are currently placing
>>> hinted pages into. Doing this we can keep the number of additional
>>> pointers needed quite small. To flag that the boundaries are in place
>>> we use a single bit in the zone to indicate that hinting and the
>>> boundaries are active.
>>>
>>> The determination of when to start hinting is based on tracking the
>>> number of free pages in a given area versus the number of hinted
>>> pages in that area. We keep track of the number of hinted pages per
>>> free_area in a separate zone-specific area. We do this to avoid
>>> modifying the free_area structure, as that can lead to false sharing
>>> between the highest order and the zone lock, which causes a
>>> noticeable performance degradation.
>>>
>>> Signed-off-by: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
>>> ---
>>>  include/linux/mmzone.h       |  40 +++++-
>>>  include/linux/page-flags.h   |   8 +
>>>  include/linux/page_hinting.h | 139 ++++++++++++++++++++
>>>  mm/Kconfig                   |   5 +
>>>  mm/Makefile                  |   1
>>>  mm/memory_hotplug.c          |   1
>>>  mm/page_alloc.c              | 136 ++++++++++++++++++-
>>>  mm/page_hinting.c            | 298 ++++++++++++++++++++++++++++++++++++++++++
>>>  8 files changed, 620 insertions(+), 8 deletions(-)
>>>  create mode 100644 include/linux/page_hinting.h
>>>  create mode 100644 mm/page_hinting.c
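
[Editor's note] To make the "boundary" scheme in the commit message above
easier to follow, here is a minimal, self-contained C sketch of the idea.
None of this is code from the series; the demo_* names and the simplified
list type are invented for illustration. The point is that freed (unhinted)
pages are always inserted in front of the boundary, so hinted pages stay
contiguous at the tail, and a list is fully processed once the boundary
occupies the first entry.

#include <stdbool.h>

/* A node stands in for a free page on a buddy free_list. */
struct demo_node {
	struct demo_node *prev, *next;
	bool hinted;
};

/* A free list plus the boundary: the first hinted node, or &head. */
struct demo_list {
	struct demo_node head;		/* circular sentinel */
	struct demo_node *boundary;
};

static void demo_list_init(struct demo_list *l)
{
	l->head.prev = l->head.next = &l->head;
	l->boundary = &l->head;		/* no pages hinted yet */
}

/*
 * Add a freed (unhinted) page at the effective tail: just in front of
 * the boundary, so it never lands inside the hinted block.
 */
static void demo_add_tail(struct demo_list *l, struct demo_node *n)
{
	struct demo_node *next = l->boundary;

	n->hinted = false;
	n->next = next;
	n->prev = next->prev;
	next->prev->next = n;
	next->prev = n;
}

/*
 * Hint the last unhinted page and pull the boundary over it. Once the
 * boundary reaches the first entry there is nothing left to process.
 */
static bool demo_hint_one(struct demo_list *l)
{
	struct demo_node *n = l->boundary->prev;

	if (n == &l->head)
		return false;		/* everything is hinted */
	n->hinted = true;
	l->boundary = n;
	return true;
}

In the actual patch, get_unhinted_tail() (used in the diff below) plays the
role of looking up this boundary for the zone/order/migratetype currently
being hinted.
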
>>>
>>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>>> index f0c68b6b6154..42bdebb20484 100644
>>> --- a/include/linux/mmzone.h
>>> +++ b/include/linux/mmzone.h
>>> @@ -460,6 +460,14 @@ struct zone {
>>>  	seqlock_t	span_seqlock;
>>>  #endif
>>>
>>> +#ifdef CONFIG_PAGE_HINTING
>>> +	/*
>>> +	 * Pointer to hinted page tracking statistics array. The size of
>>> +	 * the array is MAX_ORDER - PAGE_HINTING_MIN_ORDER. NULL when
>>> +	 * page hinting is not present.
>>> +	 */
>>> +	unsigned long	*hinted_pages;
>>> +#endif
>>>  	int initialized;
>>>
>>>  	/* Write-intensive fields used from the page allocator */
>>> @@ -535,6 +543,14 @@ enum zone_flags {
>>>  	ZONE_BOOSTED_WATERMARK,		/* zone recently boosted watermarks.
>>>  					 * Cleared when kswapd is woken.
>>>  					 */
>>> +	ZONE_PAGE_HINTING_REQUESTED,	/* zone enabled page hinting and has
>>> +					 * requested flushing the data out of
>>> +					 * higher order pages.
>>> +					 */
>>> +	ZONE_PAGE_HINTING_ACTIVE,	/* zone enabled page hinting and is
>>> +					 * actively flushing the data out of
>>> +					 * higher order pages.
>>> +					 */
>>>  };
>>>
>>>  static inline unsigned long zone_managed_pages(struct zone *zone)
>>> @@ -755,6 +771,8 @@ static inline bool pgdat_is_empty(pg_data_t *pgdat)
>>>  	return !pgdat->node_start_pfn && !pgdat->node_spanned_pages;
>>>  }
>>>
>>> +#include <linux/page_hinting.h>
>>> +
>>>  /* Used for pages not on another list */
>>>  static inline void add_to_free_list(struct page *page, struct zone *zone,
>>>  				    unsigned int order, int migratetype)
>>> @@ -769,10 +787,16 @@ static inline void add_to_free_list(struct page *page, struct zone *zone,
>>>  static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
>>>  					 unsigned int order, int migratetype)
>>>  {
>>> -	struct free_area *area = &zone->free_area[order];
>>> +	struct list_head *tail = get_unhinted_tail(zone, order, migratetype);
>>>
>>> -	list_add_tail(&page->lru, &area->free_list[migratetype]);
>>> -	area->nr_free++;
>>> +	/*
>>> +	 * To prevent the unhinted pages from being interleaved with the
>>> +	 * hinted ones while we are actively processing pages we will use
>>> +	 * the head of the hinted pages to determine the tail of the free
>>> +	 * list.
>>> +	 */
>>> +	list_add_tail(&page->lru, tail);
>>> +	zone->free_area[order].nr_free++;
>>>  }
>>>
>>>  /* Used for pages which are on another list */
>>> @@ -781,12 +805,22 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
>>>  {
>>>  	struct free_area *area = &zone->free_area[order];
>>>
>>> +	/*
>>> +	 * Clear Hinted flag, if present, to avoid placing hinted pages
>>> +	 * at the top of the free_list. It is cheaper to just process this
>>> +	 * page again than have to walk around a page that is already hinted.
>>> +	 */
>>> +	clear_page_hinted(page, zone);
>>> +
>>>  	list_move(&page->lru, &area->free_list[migratetype]);
>>>  }
>>>
>>>  static inline void del_page_from_free_list(struct page *page, struct zone *zone,
>>>  					   unsigned int order)
>>>  {
>>> +	/* Clear Hinted flag, if present, before clearing the Buddy flag */
>>> +	clear_page_hinted(page, zone);
>>> +
>>>  	list_del(&page->lru);
>>>  	__ClearPageBuddy(page);
>>>  	set_page_private(page, 0);
>>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
>>> index b848517da64c..b753dbf673cb 100644
>>> --- a/include/linux/page-flags.h
>>> +++ b/include/linux/page-flags.h
>>> @@ -745,6 +745,14 @@ static inline int page_has_type(struct page *page)
>>>  PAGE_TYPE_OPS(Offline, offline)
>>>
>>>  /*
>>> + * PageHinted() is an alias for Offline, however it is not meant to be an
>>> + * exclusive value. It should be combined with PageBuddy() when seen as it
>>> + * is meant to indicate that the page has been scrubbed while waiting in
>>> + * the buddy system.
>>> + */
>>> +PAGE_TYPE_OPS(Hinted, offline)
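
[Editor's note] Background for the page-type discussion that follows: page
types in page-flags.h are encoded as bits of page->page_type, and a type is
marked by *clearing* its bit. That is what allows one page to carry two
types at once (here Buddy plus the Hinted alias of Offline). A rough model,
with invented DEMO_* names and constants that only loosely mirror the
kernel's, not the real macros:

#include <stdbool.h>

#define DEMO_PAGE_TYPE_BASE	0xf0000000u
#define DEMO_PG_BUDDY		0x00000080u
#define DEMO_PG_OFFLINE		0x00000100u	/* PageHinted() aliases this */

/* A page "has" a type when the base pattern is intact and its bit is clear. */
static bool demo_page_type(unsigned int page_type, unsigned int flag)
{
	return (page_type & (DEMO_PAGE_TYPE_BASE | flag)) == DEMO_PAGE_TYPE_BASE;
}

/* Setting a type means clearing its bit, so types can be combined. */
static unsigned int demo_set_type(unsigned int page_type, unsigned int flag)
{
	return page_type & ~flag;
}

/*
 * Example: start from "no type" (all bits set, as with a fresh page),
 * set Buddy and then Offline/Hinted; both checks now succeed at once.
 *
 *	unsigned int t = 0xffffffffu;
 *	t = demo_set_type(t, DEMO_PG_BUDDY);
 *	t = demo_set_type(t, DEMO_PG_OFFLINE);
 *	// demo_page_type(t, DEMO_PG_BUDDY) and
 *	// demo_page_type(t, DEMO_PG_OFFLINE) are both true
 */

David's first concern below is that a future switch back to value-based type
detection (one value per type, as with the old mapcount magic values) could
no longer represent such combinations.
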
>>
>> CCing Matthew
>>
>> I am still not sure if I like the idea of having two page types at a
>> time.
>>
>> 1. Once we run out of page type bits (which can happen easily, looking
>> at how they keep gaining more users - e.g., maybe for vmemmap pages
>> soon), we might want to convert back to value-based, rather than
>> bit-based, type detection. This change will certainly make that switch
>> harder.
>>
>> 2. It will complicate the kexec/kdump handling. I assume it can be
>> fixed somehow - e.g., by making the elf interface aware of the exact
>> notion of page type bits, as opposed to the mapcount values we have
>> right now (e.g., PAGE_BUDDY_MAPCOUNT_VALUE). Not addressed in this
>> series yet.
>>
>>
>> Can't we reuse one of the traditional page flags for that, not used
>> along with buddy pages? E.g., PG_dirty: pages that were not hinted yet
>> are dirty.
>
> Will it not conflict with the regular use case of the PG_dirty bit somehow?

AFAIK it is primarily used for pagecache pages only, so never with pages
in the buddy. Unfortunately, page-flags.h lacks proper documentation.

-- 

Thanks,

David / dhildenb
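
[Editor's note] A rough sketch of the PG_dirty alternative David floats
above, under the assumption he states (PG_dirty is never used for pages
sitting in the buddy). The demo_* helpers are invented for illustration and
are not part of the series:

#include <linux/mm_types.h>
#include <linux/page-flags.h>

/* Freshly freed page: in the buddy and "dirty", i.e. not hinted yet. */
static inline void demo_mark_freed(struct page *page)
{
	__SetPageBuddy(page);
	SetPageDirty(page);
}

/* A hinting pass would look for buddy pages that are still dirty... */
static inline bool demo_needs_hint(struct page *page)
{
	return PageBuddy(page) && PageDirty(page);
}

/* ...and clear the flag once the hypervisor has been told. */
static inline void demo_mark_hinted(struct page *page)
{
	ClearPageDirty(page);
}

One caveat with this approach: the page allocator's sanity checks would
likely need to clear or tolerate PG_dirty on pages coming out of the free
lists, since a leftover dirty bit on a newly allocated page is normally
treated as a bad-page condition.
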