On Tue, Feb 08, 2022 at 01:18:57AM -0700, Yu Zhao wrote:
> To avoid confusion, the term "iteration" specifically means the
> traversal of an entire mm_struct list; the term "walk" will be applied
> to page tables and the rmap, as usual.
>
> To further exploit spatial locality, the aging prefers to walk page
> tables to search for young PTEs and promote hot pages. A runtime
> switch will be added in the next patch to enable or disable this
> feature. Without it, the aging relies on the rmap only.

Clarified that page table scanning is optional, as requested here:
https://lore.kernel.org/linux-mm/YdxEqFPLDf+wI0xX@xxxxxxxxxxxxxx/

> NB: this feature has nothing in common with the page table scanning in
> the 2.4 kernel [1], which searches page tables for old PTEs, adds cold
> pages to the swapcache and unmaps them.
>
> An mm_struct list is maintained for each memcg, and an mm_struct
> follows its owner task to the new memcg when this task is migrated.
> Given an lruvec, the aging iterates lruvec_memcg()->mm_list and calls
> walk_page_range() with each mm_struct on this list to promote hot
> pages before it increments max_seq.
>
> When multiple page table walkers (threads) iterate the same list, each
> of them gets a unique mm_struct; therefore they can run concurrently.
> Page table walkers ignore any misplaced pages, e.g., if an mm_struct
> was migrated, pages it left in the previous memcg won't be promoted
> when its current memcg is under reclaim. Similarly, page table walkers
> won't promote pages from nodes other than the one under reclaim.

Clarified the interaction between task migration and reclaim as
requested here:
https://lore.kernel.org/linux-mm/YdxPEdsfl771Z7IX@xxxxxxxxxxxxxx/

<snipped>

> Server benchmark results:
>   Single workload:
>     fio (buffered I/O): no change
>
>   Single workload:
>     memcached (anon): +[5.5, 7.5]%
>                 Ops/sec      KB/sec
>       patch1-6: 1015292.83   39490.38
>       patch1-7: 1080856.82   42040.53
>
>   Configurations:
>     no change
>
> Client benchmark results:
>   kswapd profiles:
>     patch1-6
>       45.49%  lzo1x_1_do_compress (real work)
>        7.38%  page_vma_mapped_walk
>        7.24%  _raw_spin_unlock_irq
>        2.64%  ptep_clear_flush
>        2.31%  __zram_bvec_write
>        2.13%  do_raw_spin_lock
>        2.09%  lru_gen_look_around
>        1.89%  free_unref_page_list
>        1.85%  memmove
>        1.74%  obj_malloc
>
>     patch1-7
>       47.73%  lzo1x_1_do_compress (real work)
>        6.84%  page_vma_mapped_walk
>        6.14%  _raw_spin_unlock_irq
>        2.86%  walk_pte_range
>        2.79%  ptep_clear_flush
>        2.24%  __zram_bvec_write
>        2.10%  do_raw_spin_lock
>        1.94%  free_unref_page_list
>        1.80%  memmove
>        1.75%  obj_malloc
>
>   Configurations:
>     no change

Added benchmark results to show the difference between page table
scanning and no page table scanning, as requested here:
https://lore.kernel.org/linux-mm/Ye6xS6xUD1SORdHJ@xxxxxxxxxxxxxx/

<snipped>

> +static void walk_mm(struct lruvec *lruvec, struct mm_struct *mm, struct lru_gen_mm_walk *walk)
> +{
> +        static const struct mm_walk_ops mm_walk_ops = {
> +                .test_walk = should_skip_vma,
> +                .p4d_entry = walk_pud_range,
> +        };
> +
> +        int err;
> +        struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> +
> +        walk->next_addr = FIRST_USER_ADDRESS;
> +
> +        do {
> +                err = -EBUSY;
> +
> +                /* folio_update_gen() requires stable folio_memcg() */
> +                if (!mem_cgroup_trylock_pages(memcg))
> +                        break;

Added a comment on the stable folio_memcg() requirement as requested
here:
https://lore.kernel.org/linux-mm/Yd6q0QdLVTS53vu4@xxxxxxxxxxxxxx/

<snipped>
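As an aside, the trylock pattern above can be modelled in plain
userspace C. The rough sketch below is only an illustration (pthreads;
every name is invented and is not the kernel API): the walker touches
"pages" only while it holds a lock that keeps the memcg binding stable,
and it reports -EBUSY instead of blocking when that lock is contended,
mirroring the "folio_update_gen() requires stable folio_memcg()" case.

/* rough userspace model; not kernel code, names invented */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t memcg_pages_lock = PTHREAD_RWLOCK_INITIALIZER;

/* stand-in for one batch of walk_pte_range() work */
static void scan_one_batch(void)
{
        /* ... look for young PTEs, update generation counters ... */
}

static int walk_one_mm(void)
{
        /* like the trylock above: never sleep, just report busy */
        if (pthread_rwlock_tryrdlock(&memcg_pages_lock))
                return -EBUSY;

        scan_one_batch();

        pthread_rwlock_unlock(&memcg_pages_lock);
        return 0;
}

int main(void)
{
        int err = walk_one_mm();

        printf("walk %s\n", err ? "deferred (-EBUSY)" : "completed");
        return 0;
}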
> +static struct lru_gen_mm_walk *alloc_mm_walk(void)
> +{
> +        if (current->reclaim_state && current->reclaim_state->mm_walk)
> +                return current->reclaim_state->mm_walk;
> +
> +        return kzalloc(sizeof(struct lru_gen_mm_walk),
> +                       __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN);
> +}

Replaced kvzalloc() with kzalloc() as requested here:
https://lore.kernel.org/linux-mm/Yd6tafG3CS7BoRYn@xxxxxxxxxxxxxx/

Replaced GFP_KERNEL with __GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN as
requested here:
https://lore.kernel.org/linux-mm/YefddYm8FAfJalNa@xxxxxxxxxxxxxx/

<snipped>
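Since alloc_mm_walk() changed twice during review, here is a
self-contained userspace model of the intended behaviour (plain C; the
struct and helper names are invented for illustration and do not match
the kernel types): reuse the buffer cached in the caller's reclaim
state when there is one, otherwise allocate a fresh buffer that may
fail quietly under memory pressure, which is roughly what
__GFP_NOMEMALLOC (stay out of the emergency reserves) and __GFP_NOWARN
(no failure warning) convey in the kernel version.

/* rough userspace model; not kernel code, names invented */
#include <stdio.h>
#include <stdlib.h>

struct walk_buffer {            /* stand-in for struct lru_gen_mm_walk */
        unsigned long next_addr;
};

struct reclaim_ctx {            /* stand-in for current->reclaim_state */
        struct walk_buffer *mm_walk;
};

static struct walk_buffer *get_walk_buffer(struct reclaim_ctx *ctx)
{
        /* reuse the preallocated buffer when the caller brought one */
        if (ctx && ctx->mm_walk)
                return ctx->mm_walk;

        /* otherwise allocate; returning NULL is an accepted outcome */
        return calloc(1, sizeof(struct walk_buffer));
}

int main(void)
{
        struct walk_buffer preallocated = { 0 };
        struct reclaim_ctx with_cache = { .mm_walk = &preallocated };
        struct reclaim_ctx without_cache = { .mm_walk = NULL };
        struct walk_buffer *wb;

        printf("cached buffer reused: %d\n",
               get_walk_buffer(&with_cache) == &preallocated);

        wb = get_walk_buffer(&without_cache);
        printf("fresh buffer allocated: %d\n", wb != NULL);
        free(wb);
        return 0;
}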