Re: [RFC PATCH 07/14] migrate: Add copy_page_lists_mthread() function.

On Fri, Feb 17, 2017 at 10:05:44AM -0500, Zi Yan wrote:
> From: Zi Yan <ziy@xxxxxxxxxx>
> 
> It supports copying a list of pages with multiple threads.
> It evenly distributes the pages in the list across a group of
> threads and uses the same worker subroutine as copy_pages_mthread().
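
For reference, the scheme described above (split a page list evenly over a fixed set of worker threads, all running one copy routine) can be modeled in userspace roughly like this; pthreads stand in for the kernel workqueue, and all names here are made up for illustration:

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>

#define PAGE_SZ 4096

/* Mirrors struct copy_info in the patch: one item per page. */
struct copy_item {
	char *to;
	char *from;
	size_t chunk_size;
};

struct worker_arg {
	struct copy_item *items;
	int nr_items;
	int tid;	/* this worker's index */
	int cthreads;	/* total number of workers */
};

/* Each worker copies the items assigned to it round-robin,
 * i.e. item i goes to worker i % cthreads, as in the patch. */
static void *copy_worker(void *arg)
{
	struct worker_arg *w = arg;
	int i;

	for (i = w->tid; i < w->nr_items; i += w->cthreads)
		memcpy(w->items[i].to, w->items[i].from,
		       w->items[i].chunk_size);
	return NULL;
}

int copy_page_lists_model(char **to, char **from, int nr_pages, int cthreads)
{
	pthread_t th[cthreads];
	struct worker_arg args[cthreads];
	struct copy_item *items;
	int t, i;

	items = calloc(nr_pages, sizeof(*items));
	if (!items)
		return -1;

	for (i = 0; i < nr_pages; i++) {
		items[i].to = to[i];
		items[i].from = from[i];
		items[i].chunk_size = PAGE_SZ;
	}

	for (t = 0; t < cthreads; t++) {
		args[t] = (struct worker_arg){ items, nr_pages, t, cthreads };
		pthread_create(&th[t], NULL, copy_worker, &args[t]);
	}

	/* Wait for every worker before freeing the work items. */
	for (t = 0; t < cthreads; t++)
		pthread_join(th[t], NULL);

	free(items);
	return 0;
}
```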

The new function duplicates many lines of copy_pages_mthread(),
so please consider factoring them out into a common routine.
That would make the code more readable and maintainable.

Thanks,
Naoya Horiguchi

> 
> Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
> ---
>  mm/copy_pages.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/internal.h   |  3 +++
>  2 files changed, 65 insertions(+)
> 
> diff --git a/mm/copy_pages.c b/mm/copy_pages.c
> index c357e7b01042..516c0a1a57f3 100644
> --- a/mm/copy_pages.c
> +++ b/mm/copy_pages.c
> @@ -84,3 +84,65 @@ int copy_pages_mthread(struct page *to, struct page *from, int nr_pages)
>  	kfree(work_items);
>  	return 0;
>  }
> +
> +int copy_page_lists_mthread(struct page **to, struct page **from, int nr_pages) 
> +{
> +	int err = 0;
> +	unsigned int cthreads, node = page_to_nid(*to);
> +	int i;
> +	struct copy_info *work_items;
> +	int nr_pages_per_page = hpage_nr_pages(*from);
> +	const struct cpumask *cpumask = cpumask_of_node(node);
> +	int cpu_id_list[32] = {0};
> +	int cpu;
> +
> +	cthreads = nr_copythreads;
> +	cthreads = min_t(unsigned int, cthreads, cpumask_weight(cpumask));
> +	cthreads = (cthreads / 2) * 2;
> +	cthreads = min_t(unsigned int, nr_pages, cthreads);
> +
> +	work_items = kzalloc(sizeof(struct copy_info)*nr_pages,
> +						 GFP_KERNEL);
> +	if (!work_items)
> +		return -ENOMEM;
> +
> +	i = 0;
> +	for_each_cpu(cpu, cpumask) {
> +		if (i >= cthreads)
> +			break;
> +		cpu_id_list[i] = cpu;
> +		++i;
> +	}
> +
> +	for (i = 0; i < nr_pages; ++i) {
> +		int thread_idx = i % cthreads;
> +
> +		INIT_WORK((struct work_struct *)&work_items[i], 
> +				  copythread);
> +
> +		work_items[i].to = kmap(to[i]);
> +		work_items[i].from = kmap(from[i]);
> +		work_items[i].chunk_size = PAGE_SIZE * hpage_nr_pages(from[i]);
> +
> +		BUG_ON(nr_pages_per_page != hpage_nr_pages(from[i]));
> +		BUG_ON(nr_pages_per_page != hpage_nr_pages(to[i]));
> +
> +
> +		queue_work_on(cpu_id_list[thread_idx], 
> +					  system_highpri_wq, 
> +					  (struct work_struct *)&work_items[i]);
> +	}
> +
> +	/* Wait until all queued work items finish; every one of the
> +	 * nr_pages items was queued, so every one must be flushed
> +	 * before work_items is freed.
> +	 */
> +	for (i = 0; i < nr_pages; ++i)
> +		flush_work((struct work_struct *)&work_items[i]);
> +
> +	for (i = 0; i < nr_pages; ++i) {
> +		kunmap(to[i]);
> +		kunmap(from[i]);
> +	}
> +
> +	kfree(work_items);
> +
> +	return err;
> +}
> diff --git a/mm/internal.h b/mm/internal.h
> index ccfc2a2969f4..175e08ed524a 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -498,4 +498,7 @@ extern const struct trace_print_flags pageflag_names[];
>  extern const struct trace_print_flags vmaflag_names[];
>  extern const struct trace_print_flags gfpflag_names[];
>  
> +extern int copy_page_lists_mthread(struct page **to,
> +			struct page **from, int nr_pages);
> +
>  #endif	/* __MM_INTERNAL_H */
> -- 
> 2.11.0
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: email@xxxxxxxxx