On 07.03.19 20:23, Nitesh Narayan Lal wrote:
>
> On 3/7/19 1:30 PM, Alexander Duyck wrote:
>> On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@xxxxxxxxxx> wrote:
>>> This patch enables the kernel to scan the per-cpu array
>>> which carries head pages from the buddy free list of order
>>> FREE_PAGE_HINTING_MIN_ORDER (MAX_ORDER - 1) by
>>> guest_free_page_hinting().
>>> guest_free_page_hinting() scans the entire per-cpu array by
>>> acquiring a zone lock corresponding to the pages which are
>>> being scanned. If the page is still free and present in the
>>> buddy it tries to isolate the page and adds it to a
>>> dynamically allocated array.
>>>
>>> Once this scanning process is complete and if there are any
>>> isolated pages added to the dynamically allocated array,
>>> guest_free_page_report() is invoked. However, before this the
>>> per-cpu array index is reset so that it can continue capturing
>>> the pages from the buddy free list.
>>>
>>> In this patch guest_free_page_report() simply releases the pages back
>>> to the buddy by using __free_one_page().
>>>
>>> Signed-off-by: Nitesh Narayan Lal <nitesh@xxxxxxxxxx>
>> I'm pretty sure this code is not thread safe and has a few issues.
>>
>>> ---
>>>  include/linux/page_hinting.h |   5 ++
>>>  mm/page_alloc.c              |   2 +-
>>>  virt/kvm/page_hinting.c      | 154 +++++++++++++++++++++++++++++++++++
>>>  3 files changed, 160 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/include/linux/page_hinting.h b/include/linux/page_hinting.h
>>> index 90254c582789..d554a2581826 100644
>>> --- a/include/linux/page_hinting.h
>>> +++ b/include/linux/page_hinting.h
>>> @@ -13,3 +13,8 @@
>>>
>>>  void guest_free_page_enqueue(struct page *page, int order);
>>>  void guest_free_page_try_hinting(void);
>>> +extern int __isolate_free_page(struct page *page, unsigned int order);
>>> +extern void __free_one_page(struct page *page, unsigned long pfn,
>>> +			    struct zone *zone, unsigned int order,
>>> +			    int migratetype);
>>> +void release_buddy_pages(void *obj_to_free, int entries);
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index 684d047f33ee..d38b7eea207b 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -814,7 +814,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
>>>   * -- nyc
>>>   */
>>>
>>> -static inline void __free_one_page(struct page *page,
>>> +inline void __free_one_page(struct page *page,
>>>  		unsigned long pfn,
>>>  		struct zone *zone, unsigned int order,
>>>  		int migratetype)
>>> diff --git a/virt/kvm/page_hinting.c b/virt/kvm/page_hinting.c
>>> index 48b4b5e796b0..9885b372b5a9 100644
>>> --- a/virt/kvm/page_hinting.c
>>> +++ b/virt/kvm/page_hinting.c
>>> @@ -1,5 +1,9 @@
>>>  #include <linux/mm.h>
>>>  #include <linux/page_hinting.h>
>>> +#include <linux/page_ref.h>
>>> +#include <linux/kvm_host.h>
>>> +#include <linux/kernel.h>
>>> +#include <linux/sort.h>
>>>
>>>  /*
>>>   * struct guest_free_pages- holds array of guest freed PFN's along with an
>>> @@ -16,6 +20,54 @@ struct guest_free_pages {
>>>
>>>  DEFINE_PER_CPU(struct guest_free_pages, free_pages_obj);
>>>
>>> +/*
>>> + * struct guest_isolated_pages- holds the buddy isolated pages which are
>>> + * supposed to be freed by the host.
>>> + * @pfn: page frame number for the isolated page.
>>> + * @order: order of the isolated page.
>>> + */
>>> +struct guest_isolated_pages {
>>> +	unsigned long pfn;
>>> +	unsigned int order;
>>> +};
>>> +
>>> +void release_buddy_pages(void *obj_to_free, int entries)
>>> +{
>>> +	int i = 0;
>>> +	int mt = 0;
>>> +	struct guest_isolated_pages *isolated_pages_obj = obj_to_free;
>>> +
>>> +	while (i < entries) {
>>> +		struct page *page = pfn_to_page(isolated_pages_obj[i].pfn);
>>> +
>>> +		mt = get_pageblock_migratetype(page);
>>> +		__free_one_page(page, page_to_pfn(page), page_zone(page),
>>> +				isolated_pages_obj[i].order, mt);
>>> +		i++;
>>> +	}
>>> +	kfree(isolated_pages_obj);
>>> +}
>> You shouldn't be accessing __free_one_page without holding the zone
>> lock for the page. You might consider confining yourself to one zone
>> worth of hints at a time. Then you can acquire the lock once, and then
>> return the memory you have freed.
> That is correct.
>>
>> This is one of the reasons why I am thinking maybe a bit in the page
>> and then spinning on that bit in arch_alloc_page might be a nice way
>> to get around this. Then you only have to take the zone lock when you
>> are finding the pages you want to hint on and setting the bit
>> indicating they are mid hint. Otherwise you have to take the zone lock
>> to pull pages out, and to put them back in and the likelihood of a
>> lock collision is much higher.
> Do you think adding a new flag to the page structure will be acceptable?

My lesson learned: forget it. If (at all) reuse some other one that
might be safe in that context. Hard to tell if that is even possible
and will be accepted upstream.

Spinning is not the solution. What you would want is the buddy to
actually skip over these pages and only try to use them (-> spin) when
OOM. Core mm changes (see my other reply).

This all sounds like future work which can be built on top of this work.

-- 

Thanks,

David / dhildenb
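[For illustration, a minimal sketch of what Alexander's suggestion could
look like: batch all isolated entries from a single zone, take zone->lock
once, and free the whole batch under it. The single-zone assumption and
the use of spin_lock_irqsave() are assumptions of this sketch, not part
of the posted patch; it reuses the same includes, struct, and helpers as
the patch above and is untested.]

/*
 * Sketch only: assumes the hinting path batches isolated pages per
 * zone, so a single zone->lock acquisition covers the whole array.
 * spin_lock_irqsave() is used defensively since the calling context
 * is not pinned down here.
 */
void release_buddy_pages(void *obj_to_free, int entries)
{
	struct guest_isolated_pages *isolated_pages_obj = obj_to_free;
	struct zone *zone;
	unsigned long flags;
	int i;

	if (entries) {
		/* Assumption: every entry belongs to the same zone. */
		zone = page_zone(pfn_to_page(isolated_pages_obj[0].pfn));

		spin_lock_irqsave(&zone->lock, flags);
		for (i = 0; i < entries; i++) {
			struct page *page =
				pfn_to_page(isolated_pages_obj[i].pfn);
			int mt = get_pageblock_migratetype(page);

			/* __free_one_page() expects zone->lock held. */
			__free_one_page(page, page_to_pfn(page), zone,
					isolated_pages_obj[i].order, mt);
		}
		spin_unlock_irqrestore(&zone->lock, flags);
	}
	kfree(isolated_pages_obj);
}

The point is only that the lock is taken once per batch rather than not
at all; grouping hints per zone while isolating is what would make the
single acquisition possible.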