Re: [PATCH v2] mm: Optimized hugepage zeroing & copying from user

"Huang, Ying" <ying.huang@xxxxxxxxx> writes:

> Prathu Baronia <prathu.baronia@xxxxxxxxxxx> writes:
>
>> In !HIGHMEM configurations, especially on 64-bit architectures, pages do
>> not need a temporary kernel mapping, so k(map|unmap)_atomic() amounts to
>> nothing more than a series of barrier() calls. For a 2MB hugepage,
>> clear_huge_page() invokes the pair 512 times, once to map and once to
>> unmap each subpage, which adds up to 2048 barrier calls in total. This
>> called for optimization: simply deriving the VADDR from the page does
>> the job for us. The same applies to copy_user_huge_page().
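(For context, with CONFIG_HIGHMEM=n the atomic kmap pair does no mapping
work at all; a rough sketch of what it reduces to, not the exact kernel
source, with names suffixed _sketch to mark them as illustrative:)

  /* Sketch: in !HIGHMEM builds kmap_atomic() just disables preemption
   * and pagefaults (each little more than a barrier()) and returns the
   * page's permanent direct-map address. */
  static inline void *kmap_atomic_sketch(struct page *page)
  {
          preempt_disable();
          pagefault_disable();
          return page_address(page);
  }

  static inline void kunmap_atomic_sketch(void *addr)
  {
          pagefault_enable();
          preempt_enable();
  }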
>>
>> With kmap_atomic() out of the picture we can use memset and memcpy on
>> chunks larger than 4K. In a simple experiment, replacing the current
>> left-right walk over subpages with a single memset on the VADDR obtained
>> from the page gave us a 64% improvement in time over the current
>> approach.
>>
>> With this (v2) patch we observe a 65.85% improvement (under controlled
>> conditions) over the current approach.
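(A minimal sketch of the proposed fast path, assuming !HIGHMEM so the
whole hugepage sits in the kernel direct map; the function name is
illustrative and not taken from the patch:)

  /* Clear a hugepage with one memset on its direct-map VADDR instead of
   * 512 kmap_atomic()/memset()/kunmap_atomic() round trips per 2MB page. */
  static void clear_huge_page_oneshot(struct page *page,
                                      unsigned int pages_per_huge_page)
  {
          void *addr = page_address(page);     /* VADDR from the page */

          memset(addr, 0, (size_t)pages_per_huge_page * PAGE_SIZE);
  }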
>
> Can you describe your test?
>
>> Currently process_huge_page() iterates over the subpages in a left-right
>> manner, processing the subpage that was actually accessed last so that
>> the cache stays hot around the faulting address. This causes a latency
>> issue: on ARM64 we observed that, because of the pre-fetcher behaviour,
>> reverse access is much slower than forward access, and both are far
>> slower than one-shot access. The following simple userspace experiment,
>> which allocates 100MB (total_size) of pages and writes to it with a
>> single memset (one-shot), a forward-order loop, and a reverse-order
>> loop, gave us a good insight:
>>
>> --------------------------------------------------------------------------------
>> Test code snippet:
>> --------------------------------------------------------------------------------
>>   /* One shot memset */
>>   memset (r, 0xd, total_size);
>>
>>   /* traverse in forward order */
>>   for (j = 0; j < total_pages; j++)
>>     {
>>       memset (q + (j * SZ_4K), 0xc, SZ_4K);
>>     }
>>
>>   /* traverse in reverse order */
>>   for (i = 0; i < total_pages; i++)
>>     {
>>       memset (p + total_size - (i + 1) * SZ_4K, 0xb, SZ_4K);
>>     }
>
> You have tested the chunk sizes 4KB and 2MB; can you test some values in
> between, for example 32KB or 64KB?  Maybe there's a sweet spot with some
> smaller granularity but still good performance.
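(One way to run such a sweep in userspace; the buffer setup, chunk list
and timing below are illustrative assumptions, not part of the original
test:)

  /* Hypothetical sweep of memset chunk sizes between 4K and 2M over a
   * 100MB buffer, timing each pass with CLOCK_MONOTONIC. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  #define TOTAL_SIZE (100UL * 1024 * 1024)     /* 100MB, as above */

  int main(void)
  {
          static const size_t chunks[] = {
                  4096, 32768, 65536, 262144, 2097152
          };
          char *buf = aligned_alloc(2097152, TOTAL_SIZE);
          struct timespec t0, t1;
          size_t i, off;

          if (!buf)
                  return 1;

          for (i = 0; i < sizeof(chunks) / sizeof(chunks[0]); i++) {
                  clock_gettime(CLOCK_MONOTONIC, &t0);
                  for (off = 0; off < TOTAL_SIZE; off += chunks[i])
                          memset(buf + off, 0xc, chunks[i]);
                  clock_gettime(CLOCK_MONOTONIC, &t1);
                  printf("chunk %7zu: %9ld ns\n", chunks[i],
                         (t1.tv_sec - t0.tv_sec) * 1000000000L +
                         (t1.tv_nsec - t0.tv_nsec));
          }
          free(buf);
          return 0;
  }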

And if you test in user space, please make sure you copy the kernel's
memset implementation, because the libc memset implementation may be
quite different.  For example, it may use AVX instructions on x86, while
the kernel's memset doesn't use them.
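(If one wants the userspace timing to reflect the kernel path more
closely, a hypothetical x86-64 stand-in for the kernel's rep-stos based
memset; the helper name is an assumption:)

  #include <stddef.h>

  /* Plain 'rep stosb', no AVX/SSE, roughly approximating what the
   * kernel's x86-64 memset does. */
  static void *kernel_like_memset(void *s, int c, size_t n)
  {
          void *d = s;

          asm volatile("rep stosb"
                       : "+D" (d), "+c" (n)
                       : "a" (c)
                       : "memory");
          return s;
  }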

Best Regards,
Huang, Ying



