Re: [PATCH] mm: Reuse only-pte-mapped KSM page in do_wp_page()

On 30.12.2018 00:40, Andrew Morton wrote:
> On Thu, 13 Dec 2018 18:29:08 +0300 Kirill Tkhai <ktkhai@xxxxxxxxxxxxx> wrote:
> 
>> This patch adds an optimization for KSM pages, almost
>> the same as the one we have for ordinary anonymous
>> pages: if there is a write fault in a page which is
>> mapped by only one pte, and the page is not in the
>> swap cache, the page may be reused without copying
>> its content.
>>
>> [Note that we do not consider PageSwapCache() pages,
>>  at least for now, since we don't want to complicate
>>  __get_ksm_page(), which has a nice optimization based
>>  on this (for the migration case). Currently it spins
>>  on PageSwapCache() pages, waiting for their counters
>>  to be unfrozen (i.e., for migration to finish). We
>>  don't want it to also spin on swap cache pages we
>>  are trying to reuse, since the probability of
>>  reusing them is not very high. So, for now, we do
>>  not consider PageSwapCache() pages at all.]
>>
>> So, in reuse_ksm_page() we check 1) PageSwapCache()
>> and 2) page_stable_node(), to skip a page which KSM
>> is currently trying to link to the stable tree. Then
>> we do page_ref_freeze() to prohibit KSM from merging
>> one more page into the page we are reusing. After
>> that, nobody else can refer to the page being reused:
>> KSM skips !PageSwapCache() pages with a zero refcount,
>> and the protection against all other participants is
>> the same as for reused ordinary anon pages: pte lock,
>> page lock and mmap_sem.
>>
>> ...
>>
>> +bool reuse_ksm_page(struct page *page,
>> +		    struct vm_area_struct *vma,
>> +		    unsigned long address)
>> +{
>> +	VM_BUG_ON_PAGE(is_zero_pfn(page_to_pfn(page)), page);
>> +	VM_BUG_ON_PAGE(!page_mapped(page), page);
>> +	VM_BUG_ON_PAGE(!PageLocked(page), page);
>> +
>> +	if (PageSwapCache(page) || !page_stable_node(page))
>> +		return false;
>> +	/* Prohibit parallel get_ksm_page() */
>> +	if (!page_ref_freeze(page, 1))
>> +		return false;
>> +
>> +	page_move_anon_rmap(page, vma);
>> +	page->index = linear_page_index(vma, address);
>> +	page_ref_unfreeze(page, 1);
>> +
>> +	return true;
>> +}
> 
> Can we avoid those BUG_ON()s?
> 
> Something like this:
> 
> --- a/mm/ksm.c~mm-reuse-only-pte-mapped-ksm-page-in-do_wp_page-fix
> +++ a/mm/ksm.c
> @@ -2649,9 +2649,14 @@ bool reuse_ksm_page(struct page *page,
>  		    struct vm_area_struct *vma,
>  		    unsigned long address)
>  {
> -	VM_BUG_ON_PAGE(is_zero_pfn(page_to_pfn(page)), page);
> -	VM_BUG_ON_PAGE(!page_mapped(page), page);
> -	VM_BUG_ON_PAGE(!PageLocked(page), page);
> +#ifdef CONFIG_DEBUG_VM
> +	if (WARN_ON(is_zero_pfn(page_to_pfn(page))) ||
> +			WARN_ON(!page_mapped(page)) ||
> +			WARN_ON(!PageLocked(page))) {
> +		dump_page(page, "reuse_ksm_page");
> +		return false;
> +	}
> +#endif

Looks good!
  
>  	if (PageSwapCache(page) || !page_stable_node(page))
>  		return false;
> 
> We don't have a VM_WARN_ON_PAGE() and we can't provide one because the
> VM_foo() macros don't return a value.  It's irritating and I keep
> forgetting why we ended up doing them this way.
Thanks!



