On Thu, 15 Jul 2021 20:59:59 +0100, Matthew Wilcox wrote:
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 989d8f78c256..c7a4c0a5863d 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -352,7 +352,8 @@ extern void lru_note_cost(struct lruvec *lruvec, bool file,
> 			unsigned int nr_pages);
> extern void lru_note_cost_page(struct page *);
> extern void lru_cache_add(struct page *);
> -extern void mark_page_accessed(struct page *);
> +void mark_page_accessed(struct page *);
> +void folio_mark_accessed(struct folio *);
>
> extern atomic_t lru_disable_count;
>
> diff --git a/mm/folio-compat.c b/mm/folio-compat.c
> index 7044fcc8a8aa..a374747ae1c6 100644
> --- a/mm/folio-compat.c
> +++ b/mm/folio-compat.c
> @@ -5,6 +5,7 @@
>  */
>
> #include <linux/pagemap.h>
> +#include <linux/swap.h>
>
> struct address_space *page_mapping(struct page *page)
> {
> @@ -41,3 +42,9 @@ bool page_mapped(struct page *page)
> 	return folio_mapped(page_folio(page));
> }
> EXPORT_SYMBOL(page_mapped);
> +
> +void mark_page_accessed(struct page *page)
> +{
> +	folio_mark_accessed(page_folio(page));
> +}
> +EXPORT_SYMBOL(mark_page_accessed);

... snip ...

>
> @@ -430,36 +430,34 @@ static void __lru_cache_activate_page(struct page *page)
>  * When a newly allocated page is not yet visible, so safe for non-atomic ops,
>  * __SetPageReferenced(page) may be substituted for mark_page_accessed(page).
>  */
> -void mark_page_accessed(struct page *page)
> +void folio_mark_accessed(struct folio *folio)
> {
> -	page = compound_head(page);
> -
> -	if (!PageReferenced(page)) {
> -		SetPageReferenced(page);
> -	} else if (PageUnevictable(page)) {
> +	if (!folio_test_referenced(folio)) {
> +		folio_set_referenced(folio);
> +	} else if (folio_test_unevictable(folio)) {

Hi Matthew,

My colleagues and I have been testing some page-tracking algorithms that use the referenced bit, driving it through /proc/pid/clear_refs, /proc/pid/pagemap, and /proc/kpageflags.
I'm not 100% sure of the issue, but we're finding some inconsistencies when tracking page reference bits, and this seems to be at least one of the patches that might be in the mix.

From reading up on folios, it seems like this change prevents each individual page in a folio from being marked referenced, and instead simply marks the folio containing the page as accessed. The /proc interface, however, still appears to read the flags of the individual page rather than resolving the page to its folio (see below). Our current suspicion is that since only the folio head gets marked referenced, any non-head pages inside the folio will essentially never be marked referenced, leading to an inconsistent view.

I'm curious for your thoughts on whether we should:

a) update kpageflags_read() to use page_folio() and report the folio (head page) flags, or
b) have the change to mark_page_accessed() above mark the individual page referenced as well as the folio head,

or neither.

Thanks for your time.

Gregory Price

(relevant fs/proc/page.c code:)

static ssize_t kpageflags_read(struct file *file, char __user *buf,
			       size_t count, loff_t *ppos)
{
	... snip ...
	ppage = pfn_to_online_page(pfn);

	if (put_user(stable_page_flags(ppage), out)) {
		ret = -EFAULT;
		break;
	}
	... snip ...
}

u64 stable_page_flags(struct page *page)
{
	... snip ...
	k = page->flags;
	... snip ...
}