On Tue 10-10-17 10:25:13, Andi Kleen wrote:
> Jan Kara <jack@xxxxxxx> writes:
>
> > when rebasing our enterprise distro to a newer kernel (from 4.4 to 4.12) we
> > have noticed a regression in the bonnie++ benchmark when deleting files.
> > Eventually we tracked this down to the fact that page cache truncation got
> > slower by about 10%. There were both gains and losses in the above interval
> > of kernels, but we have been able to identify that commit 83929372f629
> > "filemap: prepare find and delete operations for huge pages" caused about
> > a 10% regression on its own.
>
> It's odd that just checking if some pages are huge should be that
> expensive, but ok ..

Yeah, I was surprised as well, but profiles were pretty clear on this - part
of the slowdown was caused by loads of page->_compound_head (PageTail() and
page_compound() use that), which we previously didn't have to load at all;
part was in the hpage_nr_pages() function and its use.

> > Patch 1 is an easy speedup of cancel_dirty_page(). Patches 2-6 refactor the
> > page cache truncation code so that it is easier to batch radix tree
> > operations. Patch 7 implements batching of deletes from the radix tree,
> > which more than makes up for the original regression.
> >
> > What do people think about this series?
>
> Batching locks is always a good idea. You'll likely see far more benefits
> under lock contention on larger systems.
>
> From a quick read it looks good to me.
>
> Reviewed-by: Andi Kleen <ak@xxxxxxxxxxxxxxx>

Thanks for having a look!

								Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
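
For context on the helpers Jan mentions: below is a minimal, self-contained
toy model in userspace C - not the actual kernel code; the struct and helper
names are simplified stand-ins - illustrating the extra per-page work that
the huge-page checks add to the truncation loop. Each page now pays for a
load of a compound_head-style field (the PageTail() check) plus a branch in
an hpage_nr_pages()-like helper, where the old code only touched page flags.

/*
 * Toy userspace model (not kernel code) of the per-page cost discussed
 * above: the truncation loop now reads a compound_head-style field for
 * the tail-page check and branches in a simplified hpage_nr_pages().
 * Names mirror the kernel helpers but are stand-ins, prefixed toy_.
 */
#include <stdio.h>
#include <stdlib.h>

#define HPAGE_PMD_NR 512		/* small pages per 2MB huge page on x86-64 */

struct toy_page {
	unsigned long flags;		/* bit 0 models a "head page" flag */
	unsigned long compound_head;	/* bit 0 set => tail page */
};

static inline int toy_PageTail(const struct toy_page *page)
{
	/* Extra load per page that the pre-huge-page code did not need. */
	return page->compound_head & 1;
}

static inline int toy_PageHead(const struct toy_page *page)
{
	return page->flags & 1;
}

/* Simplified hpage_nr_pages(): 1 for a normal page, 512 for a THP head. */
static inline int toy_hpage_nr_pages(const struct toy_page *page)
{
	if (toy_PageHead(page))
		return HPAGE_PMD_NR;
	return 1;
}

int main(void)
{
	enum { NPAGES = 1 << 20 };
	struct toy_page *pages = calloc(NPAGES, sizeof(*pages));
	long total = 0;

	if (!pages)
		return 1;

	/* Model of the truncation loop: every page pays for the huge-page checks. */
	for (int i = 0; i < NPAGES; i++) {
		if (toy_PageTail(&pages[i]))	/* dependent load of compound_head */
			continue;
		total += toy_hpage_nr_pages(&pages[i]);
	}

	printf("truncated %ld small-page equivalents\n", total);
	free(pages);
	return 0;
}

The point of the sketch is only that these loads and branches run once per
page in a hot loop, which is why the series recovers the loss elsewhere by
batching the radix tree deletions rather than by removing the checks.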