Re: [PATCH 0/7 v1] Speed up page cache truncation

Jan Kara <jack@xxxxxxx> writes:

> while rebasing our enterprise distro to a newer kernel (from 4.4 to 4.12), we
> noticed a regression in the bonnie++ benchmark when deleting files.
> Eventually we tracked this down to the fact that page cache truncation got
> slower by about 10%. There were both gains and losses in the above interval of
> kernels, but we were able to identify that commit 83929372f629 "filemap:
> prepare find and delete operations for huge pages" caused about a 10%
> regression on its own.

It's odd that just checking whether some pages are huge should be that
expensive, but OK...
>
> Patch 1 is an easy speedup of cancel_dirty_page(). Patches 2-6 refactor the
> page cache truncation code so that it is easier to batch radix tree
> operations. Patch 7 implements batching of deletes from the radix tree, which
> more than makes up for the original regression.
>
> What do people think about this series?

Batching locks is always a good idea. You'll likely see far more benefits
under lock contention on larger systems.

From a quick read it looks good to me.

Reviewed-by: Andi Kleen <ak@xxxxxxxxxxxxxxx>


-Andi

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .


