Re: + mm-introduce-reported-pages.patch added to -mm tree

On 11/12/19 11:18 AM, Alexander Duyck wrote:
> On Tue, 2019-11-12 at 10:19 -0500, Nitesh Narayan Lal wrote:
>> On 11/11/19 5:00 PM, Alexander Duyck wrote:
>>> On Mon, Nov 11, 2019 at 10:52 AM Nitesh Narayan Lal <nitesh@xxxxxxxxxx> wrote:
>>>> On 11/6/19 7:16 AM, Michal Hocko wrote:
>>>>> I didn't have time to read through newer versions of this patch series
>>>>> but I remember there were concerns about this functionality being pulled
>>>>> into the page allocator previously both by me and Mel [1][2]. Have those been
>>>>> addressed? I do not see an ack from Mel or any other MM people. Is there
>>>>> really a consensus that we want something like that living in the
>>>>> allocator?
>>>>>
>>>>> There has also been a different approach discussed and from [3]
>>>>> (referenced by the cover letter) I can only see
>>>>>
>>>>> : Then Nitesh's solution had changed to the bitmap approach[7]. However it
>>>>> : has been pointed out that this solution doesn't deal with sparse memory,
>>>>> : hotplug, and various other issues.
>>>>>
>>>>> which looks more like something to be done than a fundamental
>>>>> roadblocks.
>>>>>
>>>>> [1] http://lkml.kernel.org/r/20190912163525.GV2739@xxxxxxxxxxxxxxxxxxx
>>>>> [2] http://lkml.kernel.org/r/20190912091925.GM4023@xxxxxxxxxxxxxx
>>>>> [3] http://lkml.kernel.org/r/29f43d5796feed0dec8e8bb98b187d9dac03b900.camel@xxxxxxxxxxxxxxx
>>>>>
>>>> [...]
>>>>
>>>> Hi,
>>>>
>>>> I performed some experiments to find the root cause for the performance
>>>> degradation Alexander reported with my v12 patch-set. [1]
>>>>
>>>> I will try to give a brief background of the previous discussion
>>>> under v12 (Alexander can correct me if I am missing something).
>>>> Alexander pointed out two issues with my v12 posting [2]
>>>> (excluding the sparse zone and memory hotplug/hot-remove support):
>>>>
>>>> - A crash caused because I was not using spin_lock_irqsave()
>>>>   (the fix suggestion came from Alexander).
>>>>
>>>> - Performance degradation with Alexander's suggested setup, where we use a
>>>>   modified will-it-scale/page_fault with THP, CONFIG_SLAB_FREELIST_RANDOM &
>>>>   CONFIG_SHUFFLE_PAGE_ALLOCATOR. When I was using (MAX_ORDER - 2) as the
>>>>   PAGE_REPORTING_MIN_ORDER, I also observed significant performance degradation
>>>>   (around 20% in the number of threads launched on the 16th vCPU). However, on
>>>>   switching the PAGE_REPORTING_MIN_ORDER to (MAX_ORDER - 1), I was able to get
>>>>   performance similar to what Alexander is reporting.
>>>>
>>>> PAGE_REPORTING_MIN_ORDER is the minimum order of a page to be captured in the
>>>> bitmap and reported to the hypervisor.
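>>>>
>>>> As a rough sketch of what I mean (the helper name and the hook point
>>>> are made up here, this is not the patch itself), the capture path
>>>> simply ignores anything below that order:
>>>>
>>>> 	/* Record a freed page for reporting only once it has reached
>>>> 	 * the minimum reporting order; smaller pages stay in the buddy
>>>> 	 * where they can still merge. */
>>>> 	static inline void report_notify_free(struct zone *zone,
>>>> 					      struct page *page,
>>>> 					      unsigned int order)
>>>> 	{
>>>> 		if (order < PAGE_REPORTING_MIN_ORDER)
>>>> 			return;
>>>>
>>>> 		bitmap_capture_page(zone, page);	/* hypothetical helper */
>>>> 	}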
>>>>
>>>> For the discussion comparing the two series, the performance aspect is the
>>>> more relevant and important one.
>>>> It turns out that with the current implementation the number of vmexits with
>>>> PAGE_REPORTING_MIN_ORDER as pageblock_order, i.e. (MAX_ORDER - 2), is
>>>> significantly larger when compared to (MAX_ORDER - 1).
>>>>
>>>> One of the reasons could be that the lower-order pages are not getting
>>>> sufficient time to merge with each other, and as a result they end up being
>>>> reported in two separate reporting requests, generating more vmexits. Whereas
>>>> with (MAX_ORDER - 1) we don't have that kind of situation, as I never try
>>>> to report any page which has order < (MAX_ORDER - 1).
>>>>
>>>> To fix this, I might have to further limit the reporting, which would allow
>>>> the lower-order pages to merge further and hence reduce the vmexits. I will
>>>> try to do some experiments to see if I can fix this. In any case, if anyone
>>>> has a suggestion I would be more than happy to look in that direction.
>>> That doesn't make any sense. My setup using MAX_ORDER - 2, aka
>>> pageblock_order, as the limit doesn't experience the same performance
>>> issues the bitmap solution does. That leads me to believe the issue
>>> isn't that the pages have not had a chance to be merged.
>>>
>> So, I did run your series as well, with a few sysfs variables to see how many
>> pages of order (MAX_ORDER - 1) or (MAX_ORDER - 2) are reported at the end of
>> the will-it-scale/page_fault4 test.
>> What I observed is that the number of (MAX_ORDER - 2) pages getting reported
>> in your case was smaller than what has been reported in mine with
>> pageblock_order.
>> As you mention below, putting pages in a certain part of the
>> free list might also have an impact.
> Another thing you may want to check is how often your notifier is
> triggering. One thing I did was to intentionally put a fairly significant
> delay from the time the notification is scheduled to when it will start. I
> did this because when an application is freeing memory it will take some
> time to completely free it, and if it is going to reallocate it anyway
> there is no need to rush since it would just invalidate the pages you
> reported anyway.

Yes, I agree with this. This could have an impact on the performance.
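
Something along the lines of the following is what I understand you
mean (a minimal sketch; the 2 second delay and the function names are
made up, not taken from either series):

	#define PAGE_REPORTING_DELAY	(2 * HZ)	/* arbitrary value */

	static void page_reporting_process(struct work_struct *work);
	static DECLARE_DELAYED_WORK(report_work, page_reporting_process);

	/* Kick the reporting worker only after a grace period, so that
	 * memory which is about to be reallocated anyway does not get
	 * reported just to be invalidated again. */
	static void page_reporting_request(void)
	{
		schedule_delayed_work(&report_work, PAGE_REPORTING_DELAY);
	}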

>
>>>> Following are the numbers I gathered on a 30GB single NUMA, 16 vCPU guest
>>>> affined to a single host-NUMA:
>>>>
>>>> On 16th vCPU:
>>>> With PAGE_REPORTING_MIN_ORDER as (MAX_ORDER - 1):
>>>> % Dip in the number of Processes = 1.3 %
>>>> % Dip in the number of Threads   = 5.7 %
>>>>
>>>> With PAGE_REPORTING_MIN_ORDER as (pageblock_order):
>>>> % Dip in the number of Processes = 5 %
>>>> % Dip in the number of Threads   = 20 %
>>> So I don't hold much faith in the threads numbers. I have seen the
>>> variability be as high as 14% between runs.
>> That's interesting. Do you see the variability even with an unmodified kernel?
>> Somehow, for me it seems pretty consistent. However, if you are running with
>> multiple NUMA nodes it might have a significant impact on the numbers.
>>
>> For now, I am only running a single NUMA guest affined to a single NUMA
>> of host.
> My guest should be running in a single node, and yes I saw it with just
> the unmodified kernel. I am running on the linux-next 20191031 kernel.

I am using Linus' tree, working on top of Linux 5.4-rc5.
I am not sure how much difference that will make.

>  It
> did occur to me that it seems like the performance for the threads number
> recently increased. There might be a guest config option impacting things
> as well since I know I have changed a number of variables since then.

This is quite interesting, because if I remember correctly you reported a
huge degradation of over 30% with my patch-set.
So far, I have been able to reproduce a significant degradation in the number
of threads launched on the 16th vCPU, but not in the number of processes,
which you are observing.
I am wondering if I am still missing something in my test setup.

>
>>>> Michal's suggestion:
>>>> I was able to get the prototype which could use page-isolation API:
>>>> start_isolate_page_range()/undo_isolate_page_range() to work.
>>>> But the issue mentioned above was also evident with it.
>>>>
>>>> Hence, I think that before deciding whether I want to use
>>>> __isolate_free_page(), which isolates pages from the buddy, or
>>>> start/undo_isolate_page_range(), which just marks the pages as
>>>> MIGRATE_ISOLATE, it is important for me to resolve the above-mentioned issue.
>>> I'd be curious how you are avoiding causing memory starvation if you
>>> are isolating ranges of memory that have been recently freed.
>> I would still be marking only 32 pages as MIGRATE_ISOLATE at a time. It is
>> exactly the same as isolating a limited chunk of pages from the buddy.
>> For example, if I have a pfn x of order y, then I pass
>> start_isolate_page_range(x, x + (1 << y), mt, 0). So at the end we
>> will have 32 such entries marked as MIGRATE_ISOLATE.
> I get that you are isolating the same amount of memory. What I was getting
> at is that __isolate_free_page has a check in it to make certain you are
> not pulling memory that would put you below the minimum watermark. As far
> as I know there isn't anything like that for the page isolation framework
> since it is normally used for offlining memory before it is hotplugged
> away.

Yes, that is correct. I will have to take care of that explicitly.
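
Something like the check in __isolate_free_page() should work here as
well (a sketch, untested; min_wmark_pages() and zone_watermark_ok()
are existing helpers, the wrapper name is mine):

	/* Refuse to isolate a chunk if pulling it out of the free list
	 * would drop the zone below its min watermark. */
	static bool report_can_isolate(struct zone *zone, unsigned int order)
	{
		unsigned long mark = min_wmark_pages(zone) + (1UL << order);

		return zone_watermark_ok(zone, order, mark, 0, 0);
	}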

>
>>>> Previous discussions:
>>>> More about how we ended up with these two approaches could be found at [3] &
>>>> [4] explained by Alexander & David.
>>>>
>>>> [1] https://lore.kernel.org/lkml/20190812131235.27244-1-nitesh@xxxxxxxxxx/
>>>> [2] https://lkml.org/lkml/2019/10/2/425
>>>> [3] https://lkml.org/lkml/2019/10/23/1166
>>>> [4] https://lkml.org/lkml/2019/9/12/48
>>>>
>>> So one thing you may want to consider would be how placement of the
>>> buffers will impact your performance.
>>>
>>> One thing I realized I was doing wrong with my approach was scanning
>>> for pages starting at the tail and then working up. It greatly hurt
>>> the efficiency of my search since in the standard case most of the
>>> free memory will be placed at the head and only with shuffling enabled
>>> do I really need to worry about things getting mixed up with the tail.
>>>
>>> I suspect you may be similarly making things more difficult for
>>> yourself by placing the reported pages back on the head of the list
>>> instead of placing them at the tail where they will not be reallocated
>>> immediately.
>> Hmm, I see. I will try to explore this.
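>>
>> If I understand you correctly, it would be something like this on the
>> freeing side (a sketch against the 5.4 free-list helpers; the
>> PageReported() test stands in for however the reported state ends up
>> being tracked in your series):
>>
>> 	/* Return a reported page to the tail of the free list so it is
>> 	 * not immediately reallocated and invalidated again. */
>> 	if (PageReported(page))
>> 		add_to_free_area_tail(page, &zone->free_area[order], mt);
>> 	else
>> 		add_to_free_area(page, &zone->free_area[order], mt);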
>>
>
-- 
Thanks
Nitesh





