Re: [PATCH v3] ksm: Assist buddy allocator to assemble 1-order pages

On Fri, 19 Oct 2018 15:33:39 +0300 Kirill Tkhai <ktkhai@xxxxxxxxxxxxx> wrote:

> v3: Comment improvements.
> v2: Style improvements.
> 
> try_to_merge_two_pages() merges two pages: one of them
> is a page of the currently scanned mm, the second is a
> page with an identical hash from the unstable tree.
> Currently, we merge the page from the unstable tree
> into the first one, and then free it.
> 
> The idea of this patch is to prefer freeing whichever
> of the two pages has a free neighbour (i.e., a neighbour
> with zero page_count()). This allows the buddy allocator
> to assemble at least a 1-order set from the freed page
> and its neighbour; this is a kind of cheap passive
> compaction.
> 
> AFAIK, a 1-order page set consists of pages with PFNs
> [2n, 2n+1] (even, odd), so the neighbour's pfn is
> calculated via XOR with 1. We check that the resulting
> pfn is valid, check its page_count(), and prefer merging
> into @tree_page if the neighbour's usage count is zero.
> 
> There is a small difference from the current behaviour
> on the error path: if the second
> try_to_merge_with_ksm_page() fails, we return from
> try_to_merge_two_pages() with @tree_page removed from
> the unstable tree. It does not seem to matter, but if
> we do not want any change at all, it's not a problem to
> move remove_rmap_item_from_tree() from
> try_to_merge_with_ksm_page() to its callers.
>

Seems sensible.
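
(For other readers: an order-1 buddy pair is two pages aligned on a
two-page boundary, so the two PFNs differ only in bit 0 and pfn ^ 1
yields the candidate buddy, e.g.

	0x1234 ^ 1 == 0x1235
	0x1235 ^ 1 == 0x1234

assuming, of course, that the neighbouring pfn is valid at all, which
is what the pfn_valid() check below is for.)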

> 
> ...
>
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1321,6 +1321,23 @@ static struct page *try_to_merge_two_pages(struct rmap_item *rmap_item,
>  {
>  	int err;
>  
> +	if (IS_ENABLED(CONFIG_COMPACTION)) {
> +		unsigned long pfn;
> +
> +		/*
> +		 * Find the neighbour of @page that completes a 1-order pair
> +		 * in the buddy allocator, and check whether its count is 0.
> +		 * If so, we consider the neighbour to be free (this is more
> +		 * probable than it being frozen via page_ref_freeze()), and
> +		 * we try to use @tree_page as the ksm page and to free @page.
> +		 */
> +		pfn = page_to_pfn(page) ^ 1;
> +		if (pfn_valid(pfn) && page_count(pfn_to_page(pfn)) == 0) {
> +			swap(rmap_item, tree_rmap_item);
> +			swap(page, tree_page);
> +		}
> +	}
> +

A few thoughts

- if tree_page's neighbor is unused as well, there was no point in
  doing this swapping?  (a rough sketch of that check is below the
  list)

- if both *page and *tree_page have unused neighbors we could go
  further and look for an opportunity to create an order-2 page,
  etcetera.  This may be excessive ;)

- are we really sure that this optimization produces desirable results?
  If we always merge in the same direction, we maximise the
  opportunities for page coalescing in the long term.  But if we
  sometimes merge one way and sometimes the other, we might end up with
  less higher-order page coalescing?  Or am I confusing myself?
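
Roughly what I had in mind for the first point, as an untested sketch
(buddy_pfn_is_free() is a made-up name, not a helper that exists in
the tree):

	/*
	 * The order-1 buddy of @page is considered free if its PFN is
	 * valid and its refcount is zero.
	 */
	static bool buddy_pfn_is_free(struct page *page)
	{
		unsigned long pfn = page_to_pfn(page) ^ 1;

		return pfn_valid(pfn) && page_count(pfn_to_page(pfn)) == 0;
	}

	...

	/*
	 * Only prefer freeing @page when its buddy is free and
	 * @tree_page's buddy is not; if @tree_page's buddy is already
	 * free, the current behaviour produces an order-1 block anyway.
	 */
	if (IS_ENABLED(CONFIG_COMPACTION) &&
	    buddy_pfn_is_free(page) && !buddy_pfn_is_free(tree_page)) {
		swap(rmap_item, tree_rmap_item);
		swap(page, tree_page);
	}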



