Re: [PATCH v25 05/13] mm/damon: Implement primitives for the virtual memory address spaces

From: SeongJae Park <sjpark@xxxxxxxxx>

On Thu, 18 Mar 2021 10:08:48 +0000 sj38.park@xxxxxxxxx wrote:

> From: SeongJae Park <sjpark@xxxxxxxxx>
> 
> This commit introduces a reference implementation of the
> address-space-specific low-level primitives for the virtual address
> space, so that users of DAMON can easily monitor data accesses in the
> virtual address spaces of specific processes, simply by configuring
> DAMON to use this implementation.
> 
> The low level primitives for the fundamental access monitoring are
> defined in two parts:
> 
> 1. Identification of the monitoring target address range for the address
>    space.
> 2. Access check of a specific address range in the target space.
> 
> The reference implementation for the virtual address space handles
> these as described below.
> 
> PTE Accessed-bit Based Access Check
> -----------------------------------
> 
> The implementation uses the PTE Accessed bit for the basic access
> checks.  That is, it clears the bit for the next sampling target page
> and checks whether it is set again after one sampling period.  Since
> clearing the bit could disturb the reclaim logic, which also depends on
> it, DAMON uses the ``PG_idle`` and ``PG_young`` page flags to solve the
> conflict, as Idle page tracking does.
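
The check half of this scheme lives in hunks elided from this quote.
For illustration only, it boils down to roughly the sketch below; the
function name and shape are mine, not the patch's:

	/*
	 * Illustrative sketch: has the page been accessed since the
	 * last damon_ptep_mkold() on it?  The Accessed bit alone is
	 * not reliable, since the reclaim logic may consume it; the
	 * PG_idle flag, which any accessor clears, preserves the
	 * answer, as in Idle page tracking.
	 */
	static bool sketch_pte_accessed(pte_t *pte, struct page *page,
					struct mm_struct *mm,
					unsigned long addr)
	{
		if (pte_young(*pte) || !page_is_idle(page))
			return true;
	#ifdef CONFIG_MMU_NOTIFIER
		if (mmu_notifier_test_young(mm, addr))
			return true;
	#endif
		return false;
	}
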
> 
> VMA-based Target Address Range Construction
> -------------------------------------------
> 
> Only small parts of the super-huge virtual address space of a process
> are mapped to physical memory and accessed, so tracking the unmapped
> address regions is just wasteful.  However, because DAMON can deal with
> some level of noise using its adaptive regions adjustment mechanism,
> tracking every mapping is not strictly required; doing so could even
> incur high overhead in some cases.  That said, excessively huge
> unmapped areas inside the monitoring target should still be removed, so
> that the adaptive mechanism does not waste time on them.
> 
> For this reason, the implementation converts the complex mappings into
> three distinct regions that together cover every mapped area of the
> address space.  The two gaps between the three regions are the two
> biggest unmapped areas in the given address space: in most cases, the
> gap between the heap and the uppermost mmap()-ed region, and the gap
> between the lowermost mmap()-ed region and the stack.  Because these
> gaps are exceptionally huge in usual address spaces, excluding them is
> sufficient to make a reasonable trade-off.  The layout below shows this
> in detail::
> 
>     <heap>
>     <BIG UNMAPPED REGION 1>
>     <uppermost mmap()-ed region>
>     (small mmap()-ed regions and munmap()-ed regions)
>     <lowermost mmap()-ed region>
>     <BIG UNMAPPED REGION 2>
>     <stack>
> 
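
As a concrete illustration of this construction, finding the two
biggest gaps and emitting the three covering regions can be sketched in
plain C as below.  The names are illustrative only; the patch's
damon_va_three_regions() does the equivalent over the real VMA list:

	/*
	 * Illustrative sketch: given nr_maps mapped ranges sorted by
	 * address, find the two biggest gaps between neighbors and
	 * return the three regions separated by them.
	 * Assumes nr_maps >= 3.
	 */
	struct sketch_range { unsigned long start, end; };

	static void sketch_three_regions(struct sketch_range *maps,
					 int nr_maps,
					 struct sketch_range out[3])
	{
		int i, t, g1 = 0, g2 = 0;	/* indices of the two biggest gaps */
		unsigned long sz, sz1 = 0, sz2 = 0;

		for (i = 0; i < nr_maps - 1; i++) {
			sz = maps[i + 1].start - maps[i].end;
			if (sz >= sz1) {
				sz2 = sz1;
				g2 = g1;
				sz1 = sz;
				g1 = i;
			} else if (sz >= sz2) {
				sz2 = sz;
				g2 = i;
			}
		}
		if (g1 > g2) {	/* order the two gaps by address */
			t = g1;
			g1 = g2;
			g2 = t;
		}

		out[0].start = maps[0].start;
		out[0].end = maps[g1].end;
		out[1].start = maps[g1 + 1].start;
		out[1].end = maps[g2].end;
		out[2].start = maps[g2 + 1].start;
		out[2].end = maps[nr_maps - 1].end;
	}
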
> Signed-off-by: SeongJae Park <sjpark@xxxxxxxxx>
> Reviewed-by: Leonard Foerster <foersleo@xxxxxxxxx>
> ---
>  include/linux/damon.h |  13 +
>  mm/damon/Kconfig      |   9 +
>  mm/damon/Makefile     |   1 +
>  mm/damon/vaddr.c      | 579 ++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 602 insertions(+)
>  create mode 100644 mm/damon/vaddr.c
> 
[...]
> +
> +/*
> + * Update regions for current memory mappings
> + */
> +void damon_va_update(struct damon_ctx *ctx)
> +{
> +	struct damon_addr_range three_regions[3];
> +	struct damon_target *t;
> +
> +	damon_for_each_target(t, ctx) {
> +		if (damon_va_three_regions(t, three_regions))
> +			continue;
> +		damon_va_apply_three_regions(ctx, t, three_regions);
> +	}
> +}
> +
> +static void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm,
> +			     unsigned long addr)
> +{
> +	bool referenced = false;
> +	struct page *page = pte_page(*pte);

The 'pte' could belong to a special mapping that has no associated
'struct page'.  In that case, 'page' would be invalid.  Guoju from
Alibaba found the problem on his GPU setup and reported it via
Github[1].  I made a fix[2] and am waiting for his test results.  I
will squash the fix into the next version of this patch.

[1] https://github.com/sjp38/linux/pull/3/commits/12eeebc6ffc8b5d2a6aba7a2ec9fb85d3c1663af
[2] https://github.com/sjp38/linux/commit/f1fa22b6375ceb9ae53e9370452de0d62efd4df5
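
For reference, the shape of such a fix would roughly be as below.  This
is an illustrative sketch of the idea (validate the PFN and pin the
page before touching its flags), not the exact code in [1]/[2]:

	/*
	 * Illustrative sketch, not the actual fix: resolve the PFN to
	 * a struct page only when one really exists and is online, and
	 * take a reference so the page cannot be freed while its flags
	 * are being updated.
	 */
	static struct page *sketch_damon_get_page(unsigned long pfn)
	{
		struct page *page = pfn_to_online_page(pfn);

		if (!page || !get_page_unless_zero(page))
			return NULL;	/* e.g., a special mapping */
		return page;
	}

The caller (damon_ptep_mkold() above) would then bail out on NULL and
put_page() once the flag updates are done.
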


Thanks,
SeongJae Park

> +
> +	if (pte_young(*pte)) {
> +		referenced = true;
> +		*pte = pte_mkold(*pte);
> +	}
> +
> +#ifdef CONFIG_MMU_NOTIFIER
> +	if (mmu_notifier_clear_young(mm, addr, addr + PAGE_SIZE))
> +		referenced = true;
> +#endif /* CONFIG_MMU_NOTIFIER */
> +
> +	if (referenced)
> +		set_page_young(page);
> +
> +	set_page_idle(page);
> +}
> +
[...]
> +
> +static void damon_va_mkold(struct mm_struct *mm, unsigned long addr)
> +{
> +	pte_t *pte = NULL;
> +	pmd_t *pmd = NULL;
> +	spinlock_t *ptl;
> +
> +	if (follow_invalidate_pte(mm, addr, NULL, &pte, &pmd, &ptl))
> +		return;
> +
> +	if (pte) {
> +		damon_ptep_mkold(pte, mm, addr);
> +		pte_unmap_unlock(pte, ptl);
> +	} else {
> +		damon_pmdp_mkold(pmd, mm, addr);
> +		spin_unlock(ptl);
> +	}
> +}
> +
[...]



