loop yu.zhao

On Wed, Jun 26, 2024 at 4:46 PM zhaoyang.huang <zhaoyang.huang@xxxxxxxxxx> wrote:
>
> From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
>
> madvise_cold_and_pageout could break the LRU's balance by deactivating
> or moving folios without taking their activity into account. This
> commit introduces per-VMA folio generation ("gen") information, via
> which userspace can query a VA range's activity before calling madvise.
>
> E.g. the VMA (56c00000-56e14000) below, which has a large Rss/Gen
> ratio, holds a larger proportion of active folios than the VMA
> (70dd7000-71090000) does, and is therefore a poor candidate for
> madvise.
>
> 56c00000-56e14000 rw-p 00000000 00:00 0   [anon:dalvik-/system/framework/oat/arm64/services.art]
> Size:               2128 kB
> KernelPageSize:        4 kB
> MMUPageSize:           4 kB
> Rss:                2128 kB
> Pss:                2128 kB
> Pss_Dirty:          2128 kB
> Shared_Clean:          0 kB
> Shared_Dirty:          0 kB
> Private_Clean:         0 kB
> Private_Dirty:      2128 kB
> Referenced:         2128 kB
> Anonymous:          2128 kB
> KSM:                   0 kB
> LazyFree:              0 kB
> AnonHugePages:         0 kB
> ShmemPmdMapped:        0 kB
> FilePmdMapped:         0 kB
> Shared_Hugetlb:        0 kB
> Private_Hugetlb:       0 kB
> Swap:                  0 kB
> SwapPss:               0 kB
> Locked:                0 kB
> Gen:      664
> THPeligible:    0
> VmFlags: rd wr mr mw me ac
> 70dd7000-71090000 rw-p 00000000 00:00 0   [anon:dalvik-/system/framework/boot.art]
> Size:               2788 kB
> KernelPageSize:        4 kB
> MMUPageSize:           4 kB
> Rss:                2788 kB
> Pss:                 275 kB
> Pss_Dirty:           275 kB
> Shared_Clean:          0 kB
> Shared_Dirty:       2584 kB
> Private_Clean:         0 kB
> Private_Dirty:       204 kB
> Referenced:         2716 kB
> Anonymous:          2788 kB
> KSM:                   0 kB
> LazyFree:              0 kB
> AnonHugePages:         0 kB
> ShmemPmdMapped:        0 kB
> FilePmdMapped:         0 kB
> Shared_Hugetlb:        0 kB
> Private_Hugetlb:       0 kB
> Swap:                  0 kB
> SwapPss:               0 kB
> Locked:                0 kB
> Gen:     1394
> THPeligible:    0
>
> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> ---
>  fs/proc/task_mmu.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index f8d35f993fe5..9731f43aa639 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -408,12 +408,23 @@ struct mem_size_stats {
>  	u64 pss_dirty;
>  	u64 pss_locked;
>  	u64 swap_pss;
> +#ifdef CONFIG_LRU_GEN
> +	u64 gen;
> +#endif
>  };
>
>  static void smaps_page_accumulate(struct mem_size_stats *mss,
>  		struct folio *folio, unsigned long size, unsigned long pss,
>  		bool dirty, bool locked, bool private)
>  {
> +#ifdef CONFIG_LRU_GEN
> +	int gen = folio_lru_gen(folio);
> +	struct lru_gen_folio *lrugen = &folio_lruvec(folio)->lrugen;
> +
> +	if (gen >= 0)
> +		mss->gen += (lru_gen_from_seq(lrugen->max_seq) - gen + MAX_NR_GENS) % MAX_NR_GENS;
> +#endif
> +
>  	mss->pss += pss;
>
>  	if (folio_test_anon(folio))
> @@ -852,6 +863,10 @@ static void __show_smap(struct seq_file *m, const struct mem_size_stats *mss,
>  	SEQ_PUT_DEC(" kB\nLocked: ",
>  			mss->pss_locked >> PSS_SHIFT);
>  	seq_puts(m, " kB\n");
> +#ifdef CONFIG_LRU_GEN
> +	seq_put_decimal_ull_width(m, "Gen: ", mss->gen, 8);
> +	seq_puts(m, "\n");
> +#endif
>  }
>
>  static int show_smap(struct seq_file *m, void *v)
> --
> 2.25.1
>
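
Since each folio's contribution above is its generation distance from max_seq, a large Gen relative to Rss means mostly old (cold) folios, while a small Gen relative to Rss means the range is mostly active. A rough userspace sketch of the intended usage follows; the parsing helper, the 2 MB test mapping, and the Rss/Gen threshold of 3 are illustrative assumptions on my side, not part of the patch:

/* Illustrative only: read Rss and Gen for one VMA from /proc/self/smaps
 * and skip MADV_PAGEOUT when the Rss/Gen ratio suggests mostly-active
 * folios.  Error handling is minimal; needs MADV_PAGEOUT (v5.4+) and
 * a kernel with the Gen field proposed above. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static int vma_rss_gen(unsigned long start, unsigned long *rss,
		       unsigned long *gen)
{
	FILE *f = fopen("/proc/self/smaps", "r");
	char line[256];
	int in_vma = 0, found = 0;

	if (!f)
		return -1;

	while (fgets(line, sizeof(line), f)) {
		unsigned long lo, hi;

		/* A new "start-end perms ..." header switches the current VMA. */
		if (sscanf(line, "%lx-%lx ", &lo, &hi) == 2) {
			in_vma = (lo == start);
			continue;
		}
		if (!in_vma)
			continue;
		if (sscanf(line, "Rss: %lu", rss) == 1)
			found |= 1;
		else if (sscanf(line, "Gen: %lu", gen) == 1) {
			found |= 2;
			break;
		}
	}
	fclose(f);
	return found == 3 ? 0 : -1;
}

int main(void)
{
	size_t len = 2UL << 20;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	unsigned long rss = 0, gen = 0;

	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 1, len);	/* fault the range in */

	/*
	 * Large Rss/Gen => mostly young folios => leave them alone.
	 * Small Rss/Gen => mostly old folios   => reclaim is cheap.
	 * The threshold of 3 is purely an example; it also assumes the
	 * mapping was not merged with a neighbouring VMA.
	 */
	if (vma_rss_gen((unsigned long)buf, &rss, &gen) == 0 &&
	    gen && rss / gen < 3)
		madvise(buf, len, MADV_PAGEOUT);

	return 0;
}

The threshold would of course need tuning per workload; the point is only that Rss and Gen can be read in a single pass over smaps before deciding whether to issue MADV_PAGEOUT on a range.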