Re: [PATCH -mm -v3] mm, swap: Sort swap entries before free

"Huang, Ying" <ying.huang@xxxxxxxxx> writes:

> Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> writes:
>
>> On Fri,  7 Apr 2017 14:49:01 +0800 "Huang, Ying" <ying.huang@xxxxxxxxx> wrote:
>>
>>> This patch reduces lock contention on swap_info_struct->lock when
>>> freeing swap entries.  Freed swap entries are first collected in a
>>> per-CPU buffer and then actually freed later in a batch.  During the
>>> batch free, if consecutive swap entries in the per-CPU buffer belong
>>> to the same swap device, swap_info_struct->lock needs to be
>>> acquired/released only once, so lock contention is greatly reduced.
>>> But if there are multiple swap devices, the lock may be unnecessarily
>>> released/acquired because the swap entries belonging to the same swap
>>> device are non-consecutive in the per-CPU buffer.
>>> 
>>> To solve this, the per-CPU buffer is sorted by swap device before the
>>> swap entries are freed.  Tests show that the time spent in
>>> swapcache_free_entries() is reduced after the patch.
>>> 
>>> The patch was tested by measuring the run time of
>>> swapcache_free_entries() during the exit phase of applications that
>>> use much swap space.  The results show that the average run time of
>>> swapcache_free_entries() is reduced by about 20% after applying the
>>> patch.
>>
>> "20%" is useful info, but it is much better to present the absolute
>> numbers, please.  If it's "20% of one nanosecond" then the patch isn't
>> very interesting.  If it's "20% of 35 seconds" then we know we have
>> more work to do.
>
> I added a memory-freeing timing capability to the vm-scalability test
> suite.  The result shows that the memory freeing time is reduced from
> 2.64s to 2.31s (about -12.5%).

The memory space to free is 96G (including swap).  The machine has 144
CPUs, 32G RAM, and 96G swap.  16 processes are used in the test.
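
For reference, here is a minimal sketch of the sorting step described
above (not the exact patch: swp_entry_cmp is an illustrative name, and
swp_type() plus sort() from <linux/sort.h> are the assumed kernel
helpers):

	#include <linux/sort.h>
	#include <linux/swap.h>

	/* Order entries by swap device (swp_type) so that all entries
	 * belonging to one device are consecutive in the buffer. */
	static int swp_entry_cmp(const void *ent1, const void *ent2)
	{
		const swp_entry_t *e1 = ent1, *e2 = ent2;

		return (int)swp_type(*e1) - (int)swp_type(*e2);
	}

	/* In the batch-free path, before walking the per-CPU buffer: */
	sort(entries, n, sizeof(entries[0]), swp_entry_cmp, NULL);

With the buffer sorted, the free loop only needs to drop and re-take
swap_info_struct->lock when the device changes between consecutive
entries.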

Best Regards,
Huang, Ying

> Best Regards,
> Huang, Ying
>
>> If there is indeed still a significant problem here then perhaps it
>> would be better to move the percpu swp_entry_t buffer into the
>> per-device structure swap_info_struct, so it becomes "per cpu, per
>> device".  That way we should be able to reduce contention further.
>>
>> Or maybe we do something else - it all depends upon the significance of
>> this problem, which is why a full description of your measurements is
>> useful.
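
For illustration, a "per cpu, per device" layout along the lines
suggested above might look roughly like this (all names here are
invented, not an actual kernel API):

	#define FREE_BATCH 64	/* assumed batch size */

	/* Hypothetical per-CPU free buffer, one instance per device. */
	struct swap_free_cache {
		swp_entry_t	entries[FREE_BATCH];
		int		nr;
	};

	/* Hypothetically added to struct swap_info_struct: */
	struct swap_free_cache __percpu *free_cache;

Because each buffer would then belong to exactly one device, a batch
never mixes devices and no sorting is needed; whether that is worth the
extra per-device memory depends on how significant the contention turns
out to be.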
