Re: [PATCH v8 05/10] mm: thp: enable thp migration in generic path

On 11 Jul 2017, at 2:47, Naoya Horiguchi wrote:

> On Sat, Jul 01, 2017 at 09:40:03AM -0400, Zi Yan wrote:
>> From: Zi Yan <zi.yan@xxxxxxxxxxxxxx>
>>
>> This patch adds thp migration's core code, including conversions
>> between a PMD entry and a swap entry, setting PMD migration entry,
>> removing PMD migration entry, and waiting on PMD migration entries.
>>
>> This patch makes it possible to support thp migration.
>> If you fail to allocate a destination page as a thp, you just split
>> the source thp as we do now, and then enter normal page migration.
>> If you succeed in allocating a destination thp, you enter thp migration.
>> Subsequent patches actually enable thp migration for each caller of
>> page migration by allowing its get_new_page() callback to
>> allocate thps.
>>
>> ChangeLog v1 -> v2:
>> - support pte-mapped thp, doubly-mapped thp
>>
>> Signed-off-by: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
>>
>> ChangeLog v2 -> v3:
>> - use page_vma_mapped_walk()
>> - use pmdp_huge_clear_flush() instead of pmdp_huge_get_and_clear() in
>>   set_pmd_migration_entry()
>>
>> ChangeLog v3 -> v4:
>> - factor out the code of removing pte pgtable page in zap_huge_pmd()
>>
>> ChangeLog v4 -> v5:
>> - remove unnecessary PTE-mapped THP code in remove_migration_pmd()
>>   and set_pmd_migration_entry()
>> - restructure the code in zap_huge_pmd() to avoid factoring out
>>   the pte pgtable page code
>> - in zap_huge_pmd(), check that PMD swap entries are migration entries
>> - change author information
>>
>> ChangeLog v5 -> v7:
>> - use macro to disable the code when thp migration is not enabled
>>
>> ChangeLog v7 -> v8:
>> - use IS_ENABLED instead of macro to make code look clean in
>>   zap_huge_pmd() and page_vma_mapped_walk()
>> - remove BUILD_BUG() in pmd_to_swp_entry() and swp_entry_to_pmd() to
>>   avoid compilation error
>> - rename variable 'migration' to 'flush_needed' and invert the logic in
>>   zap_huge_pmd() to make code more descriptive
>> - use pmdp_invalidate() in set_pmd_migration_entry() to avoid race
>>   with MADV_DONTNEED
>> - remove unnecessary tlb flush in remove_migration_pmd()
>> - add the missing migration flag check in page_vma_mapped_walk()
>>
>> Signed-off-by: Zi Yan <zi.yan@xxxxxxxxxxxxxx>
>> Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
>> ---
>>  arch/x86/include/asm/pgtable_64.h |  2 +
>>  include/linux/swapops.h           | 67 ++++++++++++++++++++++++++++++-
>>  mm/huge_memory.c                  | 84 ++++++++++++++++++++++++++++++++++++---
>>  mm/migrate.c                      | 32 ++++++++++++++-
>>  mm/page_vma_mapped.c              | 18 +++++++--
>>  mm/pgtable-generic.c              |  3 +-
>>  mm/rmap.c                         | 13 ++++++
>>  7 files changed, 207 insertions(+), 12 deletions(-)
>>
> ...
>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 91948fbbb0bb..b28f633cd569 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1302,6 +1302,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>>  	bool ret = true;
>>  	enum ttu_flags flags = (enum ttu_flags)arg;
>>
>> +
>>  	/* munlock has nothing to gain from examining un-locked vmas */
>>  	if ((flags & TTU_MUNLOCK) && !(vma->vm_flags & VM_LOCKED))
>>  		return true;
>> @@ -1312,6 +1313,18 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>>  	}
>>
>>  	while (page_vma_mapped_walk(&pvmw)) {
>> +#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>> +		/* PMD-mapped THP migration entry */
>> +		if (flags & TTU_MIGRATION) {
>
> My testing based on mmotm-2017-07-06-16-18 showed that migrating shmem thp
> caused a kernel crash. I don't think this is critical because that case is
> simply not supported yet. So, in order to avoid the crash, please add a
> PageAnon(page) check here. This makes shmem thp migration just fail.
>
> +			if (!PageAnon(page))
> +				continue;
>

Thanks for your testing. I will add this check in my next version.
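
For illustration, here is roughly how I expect the branch to look with the
check added (just a sketch against the hunk quoted above, with the rest of
the branch body paraphrased from the patch rather than copied from v8):

	while (page_vma_mapped_walk(&pvmw)) {
#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
		/* PMD-mapped THP migration entry */
		if (flags & TTU_MIGRATION) {
			/*
			 * shmem THP migration is not supported yet, so bail
			 * out and let migration of file-backed THPs simply
			 * fail instead of crashing.
			 */
			if (!PageAnon(page))
				continue;

			if (!pvmw.pte && page) {
				/* replace the PMD mapping with a migration entry */
				set_pmd_migration_entry(&pvmw, page);
				continue;
			}
		}
#endif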


>> +			if (!pvmw.pte && page) {
>
> Just out of curiosity, do we really need this page check?
> try_to_unmap() always passes the 'page' parameter down to try_to_unmap_one()
> via the rmap_walk_* family, so I think we can assume page is always non-NULL.

You are right. The page check is not necessary here. I will remove it in my
next version.
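
With the 'page' test dropped, the inner check in the sketch above reduces to
something like:

			/* 'page' comes from rmap_walk_*() and is never NULL here */
			if (!pvmw.pte) {
				set_pmd_migration_entry(&pvmw, page);
				continue;
			}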



--
Best Regards
Yan Zi


