Re: [PATCH 0/7 v1] Speed up page cache truncation

Jan Kara <jack@xxxxxxx> writes:

> when rebasing our enterprise distro to a newer kernel (from 4.4 to 4.12) we
> noticed a regression in the bonnie++ benchmark when deleting files.
> Eventually we tracked this down to the fact that page cache truncation got
> slower by about 10%. There were both gains and losses across that range of
> kernels, but we were able to identify that commit 83929372f629 "filemap:
> prepare find and delete operations for huge pages" caused about a 10%
> regression on its own.

It's odd that just checking whether some pages are huge should be that
expensive, but OK...
>
> Patch 1 is an easy speedup of cancel_dirty_page(). Patches 2-6 refactor page
> cache truncation code so that it is easier to batch radix tree operations.
> Patch 7 implements batching of deletes from the radix tree which more than
> makes up for the original regression.
>
> What do people think about this series?

Batching locks is always a good idea. You'll likely see far more benefits
under lock contention on larger systems.
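The win from batching can be sketched in miniature: instead of one lock
round-trip per deleted page, gather a batch of pages and take the lock once
for all of them. The sketch below is a hypothetical stand-in (the names
`struct cache`, `delete_one`, `delete_batch`, and the slot array are
illustrative only; the real patches operate on the mapping's radix tree under
the mapping lock), just to show the locking pattern being discussed:

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical stand-in for the page cache's radix tree: a slot array. */
#define NSLOTS 64
#define BATCH  16

struct cache {
	pthread_mutex_t lock;	/* plays the role of the mapping's tree lock */
	void *slots[NSLOTS];	/* non-NULL == page present */
	int nr;			/* number of populated slots */
};

/* Unbatched path: one lock acquire/release per deleted page. */
static void delete_one(struct cache *c, int idx)
{
	pthread_mutex_lock(&c->lock);
	if (c->slots[idx]) {
		c->slots[idx] = NULL;
		c->nr--;
	}
	pthread_mutex_unlock(&c->lock);
}

/* Batched path: take the lock once for up to BATCH deletions. */
static void delete_batch(struct cache *c, const int *idxs, int n)
{
	int i;

	pthread_mutex_lock(&c->lock);
	for (i = 0; i < n && i < BATCH; i++) {
		int idx = idxs[i];

		if (c->slots[idx]) {
			c->slots[idx] = NULL;
			c->nr--;
		}
	}
	pthread_mutex_unlock(&c->lock);
}
```

Under contention, the batched path amortizes the cache-line bouncing of the
lock word over BATCH deletions instead of paying it per page, which is why
larger systems should see a bigger improvement.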

From a quick read it looks good to me.

Reviewed-by: Andi Kleen <ak@xxxxxxxxxxxxxxx>


-Andi


