On Mon, 17 Feb 2020 11:25:32 +0100 SeongJae Park <sjpark@xxxxxxxxxx> wrote:

> From: SeongJae Park <sjpark@xxxxxxxxx>
>
> This commit implements DAMON's basic access check and region based
> sampling mechanisms. This change would seems make no sense, mainly
> because it is only a part of the DAMON's logics. Following two commits
> will make more sense.
> [...]
> +/*
> + * Check whether the given region has accessed since the last check
> + *
> + * mm	'mm_struct' for the given virtual address space
> + * r	the region to be checked
> + */
> +static void kdamond_check_access(struct damon_ctx *ctx,
> +			struct mm_struct *mm, struct damon_region *r)
> +{
> +	pte_t *pte = NULL;
> +	pmd_t *pmd = NULL;
> +	spinlock_t *ptl;
> +
> +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> +		goto mkold;
> +
> +	/* Read the page table access bit of the page */
> +	if (pte && pte_young(*pte))
> +		r->nr_accesses++;
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +	else if (pmd && pmd_young(*pmd))
> +		r->nr_accesses++;
> +#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
> +
> +	spin_unlock(ptl);
> +
> +mkold:
> +	/* mkold next target */
> +	r->sampling_addr = damon_rand(ctx, r->vm_start, r->vm_end);
> +
> +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> +		return;
> +
> +	if (pte) {
> +		if (pte_young(*pte)) {
> +			clear_page_idle(pte_page(*pte));
> +			set_page_young(pte_page(*pte));
> +		}
> +		*pte = pte_mkold(*pte);
> +	}
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +	else if (pmd) {
> +		if (pmd_young(*pmd)) {
> +			clear_page_idle(pmd_page(*pmd));
> +			set_page_young(pte_page(*pte));

Oops, this should be `set_page_young(pmd_page(*pmd))`. Will fix in the next spin.


Thanks,
SeongJae Park
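
For reference, a minimal sketch of how that PMD branch should read once
pmd_page() replaces the mistaken pte_page() call (just the hunk in
question, not the final v+1 patch; the surrounding mkold path stays as
posted):

	if (pmd_young(*pmd)) {
		/* operate on the huge page mapped by the PMD, not a PTE */
		clear_page_idle(pmd_page(*pmd));
		set_page_young(pmd_page(*pmd));	/* was: pte_page(*pte) */
	}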