On Tue, Apr 26, 2022 at 5:42 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Tue, 26 Apr 2022 16:39:07 -0600 Yu Zhao <yuzhao@xxxxxxxxxx> wrote:
>
> > On Mon, Apr 11, 2022 at 8:16 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> > >
> > > On Wed, 6 Apr 2022 21:15:17 -0600 Yu Zhao <yuzhao@xxxxxxxxxx> wrote:
> > >
> > > > Evictable pages are divided into multiple generations for each lruvec.
> > > > The youngest generation number is stored in lrugen->max_seq for both
> > > > anon and file types as they are aged on an equal footing. The oldest
> > > > generation numbers are stored in lrugen->min_seq[] separately for anon
> > > > and file types as clean file pages can be evicted regardless of swap
> > > > constraints. These three variables are monotonically increasing.
> > > >
> > > > ...
> > > >
> > > > +static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
> > >
> > > There's a lot of function inlining here. Fortunately the compiler will
> > > ignore it all, because some of it looks wrong. Please review (and
> > > remeasure!). If inlining is really justified, use __always_inline, and
> > > document the reasons for doing so.
> >
> > I totally expect modern compilers to make better decisions than I do.
> > And personally, I'd never use __always_inline; instead, I'd strongly
> > recommend FDO/LTO.
>
> My (badly expressed) point is that there's a lot of inlining of large
> functions here.
>
> For example, lru_gen_add_folio() is huge and has 4(?) call sites. This
> may well produce slower code due to the icache footprint.
>
> Experiment: moving lru_gen_del_folio() into mm/vmscan.c shrinks that
> file's .text from 80612 bytes to 78956.
>
> I tend to think that out-of-line regular old C functions should be the
> default and that the code should be inlined only when a clear benefit
> is demonstrable, or has at least been seriously thought about.

I can move those functions to vmscan.c if you think it would improve
performance. I don't have a strong opinion here -- I was able to
measure the bloat but not the performance impact.
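To be concrete, the change would be mechanical for each of them. A rough
sketch only, not the actual diff -- I'm assuming the declaration stays in
the header where the inline definitions live today and the body moves to
mm/vmscan.c unchanged:

/* header: keep only the declaration */
bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming);

/* mm/vmscan.c: the former "static inline" becomes a regular function */
bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
{
	/* body as in the original patch; elided here */
}

The same pattern would apply to lru_gen_add_folio() if we decide the
icache savings are worth it.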
> > > > --- a/mm/Kconfig
> > > > +++ b/mm/Kconfig
> > > > @@ -909,6 +909,14 @@ config ANON_VMA_NAME
> > > >  	  area from being merged with adjacent virtual memory areas due to the
> > > >  	  difference in their name.
> > > >
> > > > +config LRU_GEN
> > > > +	bool "Multi-Gen LRU"
> > > > +	depends on MMU
> > > > +	# the following options can use up the spare bits in page flags
> > > > +	depends on !MAXSMP && (64BIT || !SPARSEMEM || SPARSEMEM_VMEMMAP)
> > > > +	help
> > > > +	  A high performance LRU implementation to overcommit memory.
> > > > +
> > > >  source "mm/damon/Kconfig"
> > >
> > > This is a problem. I had to jump through hoops just to be able to
> > > compile-test this. Turns out I had to figure out how to disable
> > > MAXSMP.
> > >
> > > Can we please figure out a way to ensure that more testers are at least
> > > compile testing this? Allnoconfig, defconfig, allyesconfig, allmodconfig.
> > >
> > > Also, I suggest that we actually make MGLRU the default while in linux-next.
> >
> > The !MAXSMP is to work around [1], which I haven't had the time to
> > fix. That BUILD_BUG_ON() shouldn't assert sizeof(struct page) == 64
> > since the true size depends on WANT_PAGE_VIRTUAL as well as
> > LAST_CPUPID_NOT_IN_PAGE_FLAGS. My plan is here [2].
> >
> > [1] https://lore.kernel.org/r/20190905154603.10349-4-aneesh.kumar@xxxxxxxxxxxxx/
> > [2] https://lore.kernel.org/r/Ygl1Gf+ATBuI%2Fm2q@xxxxxxxxxx/
>
> OK, thanks. This is fairly urgent for -next and -rc inclusion. If
> practically nobody is compiling the feature then practically nobody is
> testing it. Let's come up with a way to improve the expected coverage
> by a lot.

Let me just remove !MAXSMP, since I wasn't able to reproduce this build
error [1] anymore.

[1] https://lore.kernel.org/r/1792f0b2e29.d72f70c9807100.8179330337708563324@xxxxxxxxxx/
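In other words, the entry would then read something like this (a sketch
for the next version; only the second "depends on" line changes,
everything else stays as quoted above):

config LRU_GEN
	bool "Multi-Gen LRU"
	depends on MMU
	# the following options can use up the spare bits in page flags
	depends on 64BIT || !SPARSEMEM || SPARSEMEM_VMEMMAP
	help
	  A high performance LRU implementation to overcommit memory.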