On Tue, May 7, 2024 at 1:52 AM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> On 06.05.24 23:13, Yosry Ahmed wrote:
> > A lot of intricacies go into updating the stats when adding or removing
> > mappings: which stat index to use and which function. Abstract this away
> > into a new static helper in rmap.c, __folio_mod_stat().
> >
> > This adds an unnecessary call to folio_test_anon() in
> > __folio_add_anon_rmap() and __folio_add_file_rmap(). However, the folio
> > struct should already be in the cache at this point, so it shouldn't
> > cause any noticeable overhead.
>
> Depending on the inlining, we might have more branches that could be avoided
> (especially in folio_add_new_anon_rmap()).
>
> [the rmap code is more performance-sensitive and relevant than you might think]

I thought about making the helper __always_inline. Would that be better?

> >
> > No functional change intended.
> >
> > Signed-off-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> > ---
> >
> > This applies on top of "mm: do not update memcg stats for
> > NR_{FILE/SHMEM}_PMDMAPPED":
> > https://lore.kernel.org/lkml/20240506192924.271999-1-yosryahmed@xxxxxxxxxx/
> >
> > David, I was on the fence about adding a Suggested-by here. You did
> > suggest adding a helper, but the one with the extra folio_test_anon()
> > was my idea and I didn't want to blame it on you. So I'll leave this up
> > to you :)
>
> :) fair enough! It's a clear improvement to readability.
>
> [...]
>
> > -	if (nr_pmdmapped) {
> > -		/* NR_{FILE/SHMEM}_PMDMAPPED are not maintained per-memcg */
> > -		if (folio_test_anon(folio))
> > -			__lruvec_stat_mod_folio(folio, NR_ANON_THPS, -nr_pmdmapped);
> > -		else
> > -			__mod_node_page_state(pgdat,
> > -					      folio_test_swapbacked(folio) ?
> > -					      NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED,
> > -					      -nr_pmdmapped);
> > -	}
> >  	if (nr) {
> > -		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
> > -		__lruvec_stat_mod_folio(folio, idx, -nr);
> > -
> >
> We can now even do:
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 9ed995da4709..7a147195e512 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1555,18 +1555,17 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>  		break;
>  	}
>
> -	if (nr) {
> -		/*
> -		 * Queue anon large folio for deferred split if at least one
> -		 * page of the folio is unmapped and at least one page
> -		 * is still mapped.
> -		 *
> -		 * Check partially_mapped first to ensure it is a large folio.
> -		 */
> -		if (folio_test_anon(folio) && partially_mapped &&
> -		    list_empty(&folio->_deferred_list))
> -			deferred_split_folio(folio);
> -	}
> +	/*
> +	 * Queue anon large folio for deferred split if at least one
> +	 * page of the folio is unmapped and at least one page
> +	 * is still mapped.
> +	 *
> +	 * Check partially_mapped first to ensure it is a large folio.
> +	 */
> +	if (folio_test_anon(folio) && partially_mapped &&
> +	    list_empty(&folio->_deferred_list))
> +		deferred_split_folio(folio);
> +

Dumb question: why is it okay to remove the 'if (nr)' condition here?
Looking at the RMAP_LEVEL_PMD case, it seems possible for
partially_mapped to be true while nr == 0. Is this practically
impossible for some reason, or is adding the folio to the deferred
split queue okay either way?

>  	__folio_mod_stat(folio, nr, nr_pmdmapped);
>
>  	/*
>
>
> Which will help some of my upcoming patches.
>
> Feel free to include that in a v2, otherwise I'll include it in an upcoming
> patch series.
>
>
> Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>

Thanks!
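
For reference, a minimal sketch of what the consolidated helper could look
like, pieced together from the open-coded stat updates removed above. Only
the name __folio_mod_stat and the (folio, nr, nr_pmdmapped) arguments appear
in the quoted hunks; the body, the __always_inline marking, and the
assumption that the counts arrive already signed (negative on unmap) are
illustrative guesses, not the actual patch:

/*
 * Sketch only: body reconstructed from the removed open-coded stat updates
 * quoted above. Callers are assumed to pass negative nr/nr_pmdmapped when
 * unmapping; whether the callers or the helper handle that negation is not
 * visible in the quoted hunks.
 */
static __always_inline void __folio_mod_stat(struct folio *folio, int nr,
					     int nr_pmdmapped)
{
	int idx;

	if (nr) {
		/* Mapped-page counters are maintained per-memcg for anon and file. */
		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
		__lruvec_stat_mod_folio(folio, idx, nr);
	}

	if (nr_pmdmapped) {
		if (folio_test_anon(folio)) {
			__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr_pmdmapped);
		} else {
			/* NR_{FILE/SHMEM}_PMDMAPPED are not maintained per-memcg */
			idx = folio_test_swapbacked(folio) ?
				NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED;
			__mod_node_page_state(folio_pgdat(folio), idx,
					      nr_pmdmapped);
		}
	}
}

The anon/file distinction and the node-only accounting for
NR_{FILE/SHMEM}_PMDMAPPED mirror what the remove path used to open-code; the
extra folio_test_anon() mentioned in the changelog comes from the add paths
going through this same branch instead of knowing the answer up front.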