Re: [PATCH] mm/damon: Validate if the pmd entry is present before accessing

> On Aug 17, 2022, at 14:21, Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx> wrote:
> 
> pmd_huge() is used to validate whether the pmd entry is mapped by a huge
> page, which also includes the case of a non-present (migration or
> hwpoisoned) pmd entry on arm64 or x86 architectures. Thus we should
> validate that the pmd entry is present before making it old or checking
> its young state; otherwise we cannot get the correct corresponding page.
> 
> Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
> ---
> mm/damon/vaddr.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
> 
> diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
> index 3c7b9d6..1d16c6c 100644
> --- a/mm/damon/vaddr.c
> +++ b/mm/damon/vaddr.c
> @@ -304,6 +304,11 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
> 
> 	if (pmd_huge(*pmd)) {
> 		ptl = pmd_lock(walk->mm, pmd);
> +		if (!pmd_present(*pmd)) {

Unluckily, we should use pte_present() here; see commit c9d398fa23788. We can
use huge_ptep_get() to get a hugetlb pte, so it's better to put the check after
the pmd_huge() test.
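Something along these lines for the mkold path (an untested sketch only, to
show what I mean; it mirrors how follow_huge_pmd() reads the entry in that
commit, and whether the (pte_t *) cast is appropriate for this walker would
need double checking):

	if (pmd_huge(*pmd)) {
		ptl = pmd_lock(walk->mm, pmd);
		if (pmd_huge(*pmd)) {
			/* Read the entry as a hugetlb pte, like follow_huge_pmd() does. */
			pte_t entry = huge_ptep_get((pte_t *)pmd);

			/* Skip non-present (migration or hwpoison) entries. */
			if (!pte_present(entry)) {
				spin_unlock(ptl);
				return 0;
			}
			damon_pmdp_mkold(pmd, walk->mm, addr);
			spin_unlock(ptl);
			return 0;
		}
		spin_unlock(ptl);
	}

The damon_young_pmd_entry() hunk below would need the same treatment.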

Cc Mike to make sure I am not missing something.

Thanks,
Muchun

> +			spin_unlock(ptl);
> +			return 0;
> +		}
> +
> 		if (pmd_huge(*pmd)) {
> 			damon_pmdp_mkold(pmd, walk->mm, addr);
> 			spin_unlock(ptl);
> @@ -431,6 +436,11 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> 	if (pmd_huge(*pmd)) {
> 		ptl = pmd_lock(walk->mm, pmd);
> +		if (!pmd_present(*pmd)) {
> +			spin_unlock(ptl);
> +			return 0;
> +		}
> +
> 		if (!pmd_huge(*pmd)) {
> 			spin_unlock(ptl);
> 			goto regular_page;
> -- 
> 1.8.3.1
> 
> 