On Tue, Jan 11, 2022 at 01:46:24PM +0530, Aneesh Kumar K.V wrote:
> Yu Zhao <yuzhao@xxxxxxxxxx> writes:
> > .....
> > +
> > +/*
> > + * Evictable pages are divided into multiple generations. The youngest and the
> > + * oldest generation numbers, max_seq and min_seq, are monotonically increasing.
> > + * They form a sliding window of a variable size [MIN_NR_GENS, MAX_NR_GENS]. An
> > + * offset within MAX_NR_GENS, gen, indexes the lru list of the corresponding
> > + * generation. The gen counter in folio->flags stores gen+1 while a page is on
> > + * lrugen->lists[]. Otherwise, it stores 0.
> > + *
> > + * A page is added to the youngest generation on faulting. The aging needs to
> > + * check the accessed bit at least twice before handing this page over to the
> > + * eviction. The first check takes care of the accessed bit set on the initial
> > + * fault; the second check makes sure this page hasn't been used since then.
> > + * This process, AKA second chance, requires a minimum of two generations,
> > + * hence MIN_NR_GENS. And to be compatible with the active/inactive lru, these
> > + * two generations are mapped to the active; the rest of generations, if they
> > + * exist, are mapped to the inactive. PG_active is always cleared while a page
> > + * is on lrugen->lists[] so that demotion, which happens consequently when the
> > + * aging creates a new generation, needs not to worry about it.
> > + */
>
> Where do we clear PG_active in the code? Is this the reason we end up
> with

We clear PG_active when we add a page (folio) to the MGLRU lists:

  include/linux/mm_inline.h
  lru_gen_add_folio()

  do {
          new_flags = old_flags = READ_ONCE(folio->flags);
          ...
          new_flags &= ~(LRU_GEN_MASK | BIT(PG_active));
                                            ^^^^^^^^^
          ...
  } while (cmpxchg(&folio->flags, old_flags, new_flags) != old_flags);

We set it again when we delete a page from those lists, e.g., when
isolating it for page migration. Note that it's restored only if the
page was in one of the two youngest ("active") generations:

  include/linux/mm_inline.h
  lru_gen_del_folio()

  do {
          new_flags = old_flags = READ_ONCE(folio->flags);
          ...
          else if (lru_gen_is_active(lruvec, gen))
                  new_flags |= BIT(PG_active);
                                   ^^^^^^^^^
  } while (cmpxchg(&folio->flags, old_flags, new_flags) != old_flags);

>  void deactivate_page(struct page *page)
>  {
> -	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
> +	if (PageLRU(page) && !PageUnevictable(page) && (PageActive(page) || lru_gen_enabled())) {

That's correct.

> > +#define MIN_NR_GENS		2U
> > +#define MAX_NR_GENS		((unsigned int)CONFIG_NR_LRU_GENS)
> > +
> > +struct lru_gen_struct {
> > +	/* the aging increments the youngest generation number */
> > +	unsigned long max_seq;
> > +	/* the eviction increments the oldest generation numbers */
> > +	unsigned long min_seq[ANON_AND_FILE];
> > +	/* the birth time of each generation in jiffies */
> > +	unsigned long timestamps[MAX_NR_GENS];
> > +	/* the multigenerational lru lists */
> > +	struct list_head lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
> > +	/* the sizes of the above lists */
> > +	unsigned long nr_pages[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
> > +	/* whether the multigenerational lru is enabled */
> > +	bool enabled;
> > +};
> > +
> > ....
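As an aside, since the gen counter in the comment block above is easy
to misread, here is a stand-alone userspace sketch of the encoding. The
bit layout and the set_gen()/get_gen() helpers are made up for
illustration; only the seq % MAX_NR_GENS mapping mirrors the patch. It
shows why storing gen+1 lets 0 double as "not on lrugen->lists[]":

#include <assert.h>
#include <stdio.h>

/* hypothetical layout: the gen counter lives in the low bits of flags */
#define MAX_NR_GENS	4U
#define LRU_GEN_WIDTH	3	/* wide enough for MAX_NR_GENS + 1 values */
#define LRU_GEN_MASK	((1UL << LRU_GEN_WIDTH) - 1)

struct folio {
	unsigned long flags;
};

/* map a monotonically increasing seq to an index into lrugen->lists[] */
static unsigned int lru_gen_from_seq(unsigned long seq)
{
	return seq % MAX_NR_GENS;
}

/* store gen+1 while the folio is on lrugen->lists[] */
static void set_gen(struct folio *folio, unsigned long seq)
{
	folio->flags = (folio->flags & ~LRU_GEN_MASK) |
		       (lru_gen_from_seq(seq) + 1);
}

/* -1 means the folio is not on lrugen->lists[] */
static int get_gen(const struct folio *folio)
{
	return (int)(folio->flags & LRU_GEN_MASK) - 1;
}

int main(void)
{
	struct folio f = { .flags = 0 };

	assert(get_gen(&f) == -1);	/* not on lrugen->lists[] */
	set_gen(&f, 5);			/* added by the generation of seq 5 */
	assert(get_gen(&f) == 5 % MAX_NR_GENS);
	printf("gen %d\n", get_gen(&f));
	return 0;
}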
>
> >  static void __meminit zone_init_internals(struct zone *zone, enum zone_type idx, int nid,
> > diff --git a/mm/swap.c b/mm/swap.c
> > index e8c9dc6d0377..d7dde3b7d4b5 100644
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -462,6 +462,11 @@ void folio_add_lru(struct folio *folio)
> >  	VM_BUG_ON_FOLIO(folio_test_active(folio) && folio_test_unevictable(folio), folio);
> >  	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
> >
> > +	/* see the comment in lru_gen_add_folio() */
> > +	if (lru_gen_enabled() && !folio_test_unevictable(folio) &&
> > +	    task_in_lru_fault() && !(current->flags & PF_MEMALLOC))
> > +		folio_set_active(folio);
> > +
>
> Can you explain this better? What is the significance of marking the
> folio active here? Do we need to differentiate parallel page faults (across
> different vmas) w.r.t task_in_lru_fault()?

All pages faulted in need to be added to the youngest generation. But
without PG_active, lru_gen_add_folio() can't tell whether a page was
faulted in or added for some other reason, e.g., page cache readahead.
This is because pages aren't handed to lru_gen_add_folio() immediately;
they are batched by lru_pvecs first:

/**
 * folio_add_lru - Add a folio to an LRU list.
 * @folio: The folio to be added to the LRU.
 *
 * Queue the folio for addition to the LRU. The decision on whether
 * to add the page to the [in]active [file|anon] list is deferred until the
 * pagevec is drained. This gives a chance for the caller of folio_add_lru()
 * to have the folio added to the active list using folio_mark_accessed().
 */
void folio_add_lru(struct folio *folio)
{
	struct pagevec *pvec;

	VM_BUG_ON_FOLIO(folio_test_active(folio) && folio_test_unevictable(folio), folio);
	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);

	/* see the comment in lru_gen_add_folio() */
	if (lru_gen_enabled() && !folio_test_unevictable(folio) &&
	    lru_gen_in_pgfault() && !(current->flags & PF_MEMALLOC))
		folio_set_active(folio);

	folio_get(folio);
	local_lock(&lru_pvecs.lock);
	pvec = this_cpu_ptr(&lru_pvecs.lru_add);
	if (pagevec_add_and_need_flush(pvec, &folio->page))
		__pagevec_lru_add(pvec);
	local_unlock(&lru_pvecs.lock);
}
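To spell out why the hint has to live on the folio itself: by the time
the pagevec is drained, the context that queued the folio (a page
fault, readahead, etc.) is long gone, so a bit in folio->flags is the
only channel left for telling lru_gen_add_folio() how the folio
arrived. Here is a toy userspace model of that handoff; every name in
it (PG_ACTIVE, add_lru(), drain()) is invented, and the youngest/oldest
split is a simplification of what lru_gen_add_folio() actually does:

#include <stdbool.h>
#include <stdio.h>

#define PG_ACTIVE	(1UL << 0)	/* "faulted in; wants the youngest gen" */
#define BATCH_SIZE	15		/* a pagevec also holds 15 pages */

struct folio {
	unsigned long flags;
};

static struct folio *batch[BATCH_SIZE];
static int nr_batched;

/* runs later, when the batch is drained: the caller's context is gone,
 * so the folio's own flags are the only record of how it was added */
static void drain(void)
{
	for (int i = 0; i < nr_batched; i++) {
		bool faulted = batch[i]->flags & PG_ACTIVE;

		printf("folio %d -> %s generation\n", i,
		       faulted ? "youngest" : "oldest");
		batch[i]->flags &= ~PG_ACTIVE;	/* always clear while on the lists */
	}
	nr_batched = 0;
}

static void add_lru(struct folio *folio, bool in_page_fault)
{
	if (in_page_fault)
		folio->flags |= PG_ACTIVE;	/* the message to the drain side */

	batch[nr_batched++] = folio;
	if (nr_batched == BATCH_SIZE)
		drain();
}

int main(void)
{
	struct folio faulted = { 0 }, readahead = { 0 };

	add_lru(&faulted, true);	/* page fault path */
	add_lru(&readahead, false);	/* e.g., page cache readahead */
	drain();
	return 0;
}

Reusing PG_active for this, rather than adding a new flag, presumably
works because page flags are scarce and, per the comment in the first
hunk, PG_active is otherwise always clear while a page is on
lrugen->lists[].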