On 10/2/19 10:25 AM, Alexander Duyck wrote:

[...]

>>> My suggestion would be to look at reworking the patch set and
>>> post numbers for my patch set versus the bitmap approach and we can
>>> look at them then.
>> Agreed. However, in order to fix an issue I have to reproduce it first.
> With the tweak I have suggested above it should make it much easier to
> reproduce. Basically all you need is to have the allocation competing
> against hinting. Currently the hinting isn't doing this because the
> allocations are mostly coming out of 4K pages instead of higher order
> ones.
>
> Alternatively you could just make the suggestion I had proposed about
> using spin_lock/unlock_irq in your worker thread and that resolved it
> for me.
>
>>> I would prefer not to spend my time fixing and
>>> tuning a patch set that I am still not convinced is viable.
>> You don't have to, I can fix the issues in my patch-set. :)
> Sounds good. Hopefully the stuff I pointed out above helps you to get
> a reproduction and resolve the issues.

So I did observe a significant drop when running my v12 patch-set [1] with
the suggested test setup. However, after making certain changes the
performance improved significantly.

I used my v12 patch-set, which I have posted earlier, and made the
following changes:

1. Started reporting only (MAX_ORDER - 1) pages and increased the number
   of pages that can be reported at a time from 16 to 32. The intent of
   making these changes was to bring my configuration closer to what
   Alexander is using.

2. Made an additional change in my bitmap scanning logic to avoid
   acquiring the spinlock if the page is already allocated.

Setup:
On a 16 vCPU, 30 GB, single-NUMA guest affined to a single host NUMA node,
I ran the modified will-it-scale/page_fault a number of times and averaged
the number of processes and threads launched on the 16th core to compare
the impact of my patch-set against an unmodified kernel.

Conclusion:
%Drop in number of processes launched on 16th vCPU = 1-2%
%Drop in number of threads launched on 16th vCPU   = 5-6%

Other observations:
- I also tried running Alexander's latest v11 page-reporting patch-set and
  observed a similar average degradation in the number of processes and
  threads.
- I didn't include the linear component recorded by will-it-scale because
  for some reason it was fluctuating too much, even when I was using an
  unmodified kernel. If required, I can investigate this further.

Note: If there is a better way to analyze the will-it-scale/page_fault
results then please do let me know.

Other setup details:
Following are the configurations which I enabled to run my tests:
- Enabled CONFIG_SLAB_FREELIST_RANDOM & CONFIG_SHUFFLE_PAGE_ALLOCATOR
- Set host THP to always
- Set guest THP to madvise
- Added the suggested madvise call in the page_fault source code (a sketch
  of that tweak is included below)

@Alexander please let me know if I missed something.

The current state of my v13:
I still have to look into Michal's suggestion of using the page-isolation
APIs instead of isolating the page myself. However, I believe that at this
moment our objective is to decide which approach we can proceed with, and
that's why I decided to post the numbers by making small required changes
in v12 instead of posting a new series.
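For reference, the madvise tweak mentioned above is a one-liner against
will-it-scale's tests/page_fault1.c. The hunk below is a rough sketch
rather than an exact copy of my diff (the context lines are as in the
upstream test, which already includes sys/mman.h for mmap/madvise; exact
placement may differ slightly):

@@ inside testcase(), right after the mmap() succeeds
 		char *c = mmap(NULL, MEMSIZE, PROT_READ|PROT_WRITE,
 			       MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
 		assert(c != MAP_FAILED);
+		/* Back the mapping with THPs so faults allocate
+		 * higher-order pages that compete with hinting. */
+		madvise(c, MEMSIZE, MADV_HUGEPAGE);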
Following are the changes which I have made on top of my v12:

page_reporting.h change:
-#define PAGE_REPORTING_MIN_ORDER	(MAX_ORDER - 2)
-#define PAGE_REPORTING_MAX_PAGES	16
+#define PAGE_REPORTING_MIN_ORDER	(MAX_ORDER - 1)
+#define PAGE_REPORTING_MAX_PAGES	32

page_reporting.c change:
@@ -101,8 +101,12 @@ static void scan_zone_bitmap(struct page_reporting_config *phconf,
 		/* Process only if the page is still online */
 		page = pfn_to_online_page((setbit << PAGE_REPORTING_MIN_ORDER) +
 					  zone->base_pfn);
-		if (!page)
+		if (!page || !PageBuddy(page)) {
+			clear_bit(setbit, zone->bitmap);
+			atomic_dec(&zone->free_pages);
 			continue;
+		}

@Alexander in case you decide to give it a try and find different results,
please do let me know.

[1] https://lore.kernel.org/lkml/20190812131235.27244-1-nitesh@xxxxxxxxxx/

-- 
Thanks
Nitesh