On Fri, Feb 28, 2020 at 01:03:03PM +0900, Joonsoo Kim wrote:
> Hello,
>
> On Fri, Feb 28, 2020 at 11:23:58AM +0800, Aaron Lu wrote:
> > On Thu, Feb 27, 2020 at 08:48:06AM -0500, Johannes Weiner wrote:
> > > On Wed, Feb 26, 2020 at 07:39:42PM -0800, Andrew Morton wrote:
> > > > It sounds like the above simple aging changes provide most of
> > > > the improvement, and that the workingset changes are less
> > > > beneficial and a bit more risky/speculative?
> > > >
> > > > If so, would it be best for us to concentrate on the aging
> > > > changes first, let that settle in and spread out and then turn
> > > > attention to the workingset changes?
> > >
> > > Those two patches work well for some workloads (like the
> > > benchmark), but not for others. The full patchset makes sure both
> > > types work well.
> > >
> > > Specifically, the existing aging strategy for anon assumes that
> > > most anon pages allocated are hot. That's why they all start
> > > active and we then do second-chance with the small inactive LRU
> > > to filter out the few cold ones to swap out. This is true for
> > > many common workloads.
> > >
> > > The benchmark creates a larger-than-memory set of anon pages with
> > > a flat access profile - to the VM a flood of one-off pages.
> > > Joonsoo's
> >
> > The test is swap-w-rand-mt, a multi-threaded, swap-write-intensive
> > workload, so there will be both swap outs and swap ins.
> >
> > > first two patches allow the VM to usher those pages in and out of
> >
> > The weird part is, the robot says the performance gain comes from
> > the 1st patch only, which adjusts the ratio, not including the 2nd
> > patch which makes anon pages start on the inactive list.
> >
> > I find the performance gain hard to explain...
>
> Let me explain the reason for the performance gain.
>
> The 1st patch provides more second chances to the anonymous pages.

By "second chance", do I understand correctly that this refers to
pages on the inactive list being moved back to the active list? (A
small sketch of my understanding is at the end of this mail.)

> In the swap-w-rand-mt test, memory used by all threads is greater
> than the amount of system memory, but memory used by each thread
> would not be much. So, although it is a rand test, there is locality
> in each thread's job. More second chances help to exploit this
> locality, so performance could be improved.

Does this mean there should be fewer vmstat.pswpout and vmstat.pswpin
events with patch1 compared to vanilla? (There is also a small snippet
for checking those counters at the end of this mail.)

Thanks.
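
P.S. To make sure we mean the same thing by "second chance", below is
how I picture the decision. It is only an illustrative sketch with
made-up names, not the real reclaim code in mm/vmscan.c:

/*
 * Sketch only: when reclaim scans an anon page on the inactive list
 * and finds it has been referenced since it was deactivated, the page
 * is rotated back to the active list instead of being swapped out, so
 * pages with some locality survive another round.
 */
enum reclaim_decision {
	DECISION_SWAP_OUT,	/* cold: reclaim and write to swap */
	DECISION_ACTIVATE,	/* referenced: back to the active list */
};

static enum reclaim_decision second_chance(int referenced)
{
	if (referenced)
		return DECISION_ACTIVATE;	/* the "second chance" */
	return DECISION_SWAP_OUT;
}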
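
For the counter question, here is a minimal user-space sketch (my own,
not part of the patchset) that just prints pswpin/pswpout from
/proc/vmstat; running it before and after a benchmark run and diffing
the values should show the difference in swap traffic between vanilla
and patch1:

/* Minimal sketch: print pswpin/pswpout from /proc/vmstat. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char line[128];
	unsigned long long val;

	if (!f) {
		perror("fopen /proc/vmstat");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "pswpin %llu", &val) == 1)
			printf("pswpin  %llu\n", val);
		else if (sscanf(line, "pswpout %llu", &val) == 1)
			printf("pswpout %llu\n", val);
	}
	fclose(f);
	return 0;
}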