On Mon, Mar 23, 2020 at 02:52:12PM +0900, js1304@xxxxxxxxx wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
>
> reclaim_stat's rotate is used for controlling the ratio of scanning pages
> between the file and anonymous LRUs. Before this patch, all new anonymous
> pages were counted for rotate, protecting anonymous pages on the active
> LRU, so reclaim happened less often on the anonymous LRU than on the
> file LRU.
>
> Now the situation has changed: new anonymous pages are no longer added
> to the active LRU, so rotate would be far lower than before. Reclaim on
> the anonymous LRU would therefore happen more often, which could hurt
> systems tuned for the previous behavior.
>
> Therefore, this patch counts a new anonymous page toward reclaim_stat's
> rotate. Although it is not logical to add this count to reclaim_stat's
> rotate in the current algorithm, reducing the regression is more
> important.
>
> I found this regression in a kernel-build test, where it causes roughly
> 2~5% performance degradation. With this workaround, performance is
> completely restored.
>
> v2: fix a bug that reused the rotate value for the previous page

I agree with the rationale, but the magic bit in the page->lru list
pointers seems pretty ugly.

I wrote a patch a few years ago that split lru_add_pvecs into an add
and a putback component. This was to avoid unintentional balancing
effects of LRU isolations, but I think you can benefit from that
cleanup here as well.

Would you mind taking a look at it and maybe taking it up into your
series?

https://lore.kernel.org/patchwork/patch/685708/
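
For context, here is a minimal sketch of the idea in the quoted
changelog, not the actual patch: bumping recent_rotated[0] (the anon
slot in struct zone_reclaim_stat, as laid out in kernels of that era)
when a new anonymous page is faulted in. In get_scan_count(), anon scan
pressure is computed roughly as anon_prio * (recent_scanned[0] + 1) /
(recent_rotated[0] + 1), so a higher anon rotate count lowers pressure
on the anon LRU, much like starting new anon pages on the active list
used to. The helper name and call site below are assumptions for
illustration only:

	#include <linux/mmzone.h>

	/*
	 * Hypothetical helper, for illustration only -- not the patch
	 * under review. Counting a newly faulted anonymous page as
	 * "rotated" increments recent_rotated[0] (index 0 == anon),
	 * which lowers the anon scan pressure computed in
	 * get_scan_count() and thereby protects the anon LRU.
	 *
	 * A real caller would need the appropriate lru_lock held, as
	 * other updaters of reclaim_stat do.
	 */
	static void count_new_anon_as_rotated(struct lruvec *lruvec)
	{
		struct zone_reclaim_stat *rstat = &lruvec->reclaim_stat;

		rstat->recent_rotated[0]++;
	}

The actual series encodes this signal differently (hence the "magic bit
in the page->lru list pointers" objection above); the sketch only shows
which counter the workaround is ultimately trying to influence.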