Re: [PATCH] mm: put the page into the correct list when shrink_page_list fails to reclaim.

On 2019/10/29 2:47, Yang Shi wrote:
> On Mon, Oct 28, 2019 at 6:37 AM zhong jiang <zhongjiang@xxxxxxxxxx> wrote:
>> Recently, I noticed a race between the mlock syscall and shrink_page_list.
>>
>> One cpu runs the mlock syscall to lock a range of a vma in memory, which
>> should move the affected pages from the evictable list to the unevictable
>> one. Meanwhile, another cpu scans and isolates those same pages for reclaim.
>> shrink_page_list holds the page lock while shrinking the page, so
>> follow_page_pte fails to get the page lock, and hence we fail to mlock the
>> page and mark it unevictable.
>>
>> If shrink_page_list then fails to reclaim the page for whatever reason, it
>> puts the page back on the evictable lru. But the page actually belongs to a
>> locked range of the vma, so that is wrong: the page should go to the
>> unevictable lru instead.
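>>
>> A simplified sketch of the interleaving (illustrative only, not a trace):
>>
>>     CPU 0 (mlock syscall)            CPU 1 (reclaim)
>>     ---------------------            ---------------
>>                                      isolate page from the LRU
>>                                      shrink_page_list: trylock_page() succeeds
>>     follow_page_pte:
>>       trylock_page() fails
>>       -> page never marked Mlocked
>>                                      reclaim fails for some reason
>>                                      putback to the *evictable* lru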
> Yes, there is definitely a race between mlock() and vmscan, and in the
> above case the page might stay on the evictable LRUs for one more round,
> but that should not be harmful since try_to_unmap() would move the page
> to the unevictable list eventually.
The key is how to make sure try_to_unmap will always be called before the page
is freed. It is possible that page_mapped(page) is false under some condition.
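
Reclaim only moves such a page to the unevictable list when it actually tries
to unmap it. Roughly, paraphrasing mm/vmscan.c and mm/rmap.c of this era (a
simplified sketch; details vary by kernel version):

	/* mm/vmscan.c, shrink_page_list(), simplified */
	if (page_mapped(page)) {
		if (!try_to_unmap(page, flags))
			goto activate_locked;	/* VM_LOCKED pages are caught here */
	}

	/* mm/rmap.c, try_to_unmap_one(), simplified */
	if (!(flags & TTU_IGNORE_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
		mlock_vma_page(page);	/* routes the page to the unevictable lru */
		return false;
	}

So if the last mapping goes away before shrink_page_list looks at the page,
nothing ever calls mlock_vma_page for it.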

Thanks,
zhong jiang
>> The patch sets PageMlocked when mlock fails to get the page lock. If
>> shrink_page_list then fails to reclaim the page, it will put the page back
>> on the correct (unevictable) list. If it does succeed in reclaiming the
>> page, we must ClearPageMlocked in time to prevent the warning from
>> free_pages_prepare.
>>
>> Signed-off-by: zhong jiang <zhongjiang@xxxxxxxxxx>
>> ---
>>  mm/gup.c    | 28 ++++++++++++++++++----------
>>  mm/vmscan.c |  9 ++++++++-
>>  2 files changed, 26 insertions(+), 11 deletions(-)
>>
>> diff --git a/mm/gup.c b/mm/gup.c
>> index c2b3e11..c26d28c 100644
>> --- a/mm/gup.c
>> +++ b/mm/gup.c
>> @@ -283,16 +283,24 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
>>                  * handle it now - vmscan will handle it later if and
>>                  * when it attempts to reclaim the page.
>>                  */
>> -               if (page->mapping && trylock_page(page)) {
>> -                       lru_add_drain();  /* push cached pages to LRU */
>> -                       /*
>> -                        * Because we lock page here, and migration is
>> -                        * blocked by the pte's page reference, and we
>> -                        * know the page is still mapped, we don't even
>> -                        * need to check for file-cache page truncation.
>> -                        */
>> -                       mlock_vma_page(page);
>> -                       unlock_page(page);
>> +               if (page->mapping) {
>> +                       if (trylock_page(page)) {
>> +                               lru_add_drain();  /* push cached pages to LRU */
>> +                               /*
>> +                                * Because we lock page here, and migration is
>> +                                * blocked by the pte's page reference, and we
>> +                                * know the page is still mapped, we don't even
>> +                                * need to check for file-cache page truncation.
>> +                                */
>> +                               mlock_vma_page(page);
>> +                               unlock_page(page);
>> +                       } else {
>> +                               /*
>> +                                * Avoid putting the page back on the evictable
>> +                                * list when it belongs to a locked vma.
>> +                                */
>> +                               SetPageMlocked(page);
>> +                       }
>>                 }
>>         }
>>  out:
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 1154b3a..f7d1301 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1488,8 +1488,15 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>>                  */
>>                 if (unlikely(PageTransHuge(page)))
>>                         (*get_compound_page_dtor(page))(page);
>> -               else
>> +               else {
>> +                       /*
>> +                        * There is a race between mlock and shrink_page_list
>> +                        * when mlock fails to get the page lock.
>> +                        */
>> +                       if (unlikely(PageMlocked(page)))
>> +                               ClearPageMlocked(page);
>>                         list_add(&page->lru, &free_pages);
>> +               }
>>                 continue;
>>
>>  activate_locked_split:
>> --
>> 1.7.12.4