Re: [PATCH v3] mm/hugetlb: fix races when looking up a CONT-PTE/PMD size hugetlb page

On 9/2/2022 5:06 AM, Mike Kravetz wrote:
On 09/01/22 18:41, Baolin Wang wrote:
On some architectures (like ARM64), CONT-PTE/PMD size hugetlb is supported,
which means that not only PMD/PUD size hugetlb (2M and 1G) but also
CONT-PTE/PMD size hugetlb (64K and 32M) can be supported if a 4K base page
size is specified.

So when looking up a CONT-PTE size hugetlb page by follow_page(), it
will use pte_offset_map_lock() to get the pte entry lock for the CONT-PTE
size hugetlb in follow_page_pte(). However, this pte entry lock is incorrect
for the CONT-PTE size hugetlb, since we should use huge_pte_lock() to
get the correct lock, which is mm->page_table_lock.
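
For reference, the lock selection in huge_pte_lockptr() makes this visible;
roughly (a paraphrased sketch of include/linux/hugetlb.h from that era, not
the exact upstream code):

/* Paraphrased sketch of huge_pte_lockptr(), not the exact upstream code. */
static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
					   struct mm_struct *mm, pte_t *pte)
{
	/* Only PMD size hugetlb uses the split PMD lock... */
	if (huge_page_size(h) == PMD_SIZE)
		return pmd_lockptr(mm, (pmd_t *)pte);
	/* ...every other size, including CONT-PTE, uses this lock. */
	return &mm->page_table_lock;
}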

That means the pte entry of the CONT-PTE size hugetlb is unstable under the
current pte lock in follow_page_pte(): another path can still migrate or
poison the pte entry of the CONT-PTE size hugetlb, which can cause potential
race issues even though the lookup is done under the 'pte lock'.

For example, suppose thread A is trying to look up a CONT-PTE size hugetlb
page via the move_pages() syscall under the lock, while another thread B
migrates the CONT-PTE hugetlb page at the same time. This can cause thread A
to get an incorrect page; if thread A then also tries to migrate that page,
a data inconsistency error occurs.

Moreover, we have the same issue for CONT-PMD size hugetlb in
follow_huge_pmd().

To fix the above issues, rename follow_huge_pmd() to follow_huge_pmd_pte(),
which handles both PMD and PTE level hugetlb and uses huge_pte_lock() to take
the correct pte entry lock so that the pte entry stays stable.
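
To illustrate the shape of the change, a sketch of the combined helper
(only a sketch, not the exact patch; page reference counting and the
FOLL_* flag handling are elided):

#include <linux/hugetlb.h>
#include <linux/mm.h>

struct page *follow_huge_pmd_pte(struct vm_area_struct *vma,
				 unsigned long address, int flags)
{
	struct hstate *h = hstate_vma(vma);
	struct mm_struct *mm = vma->vm_mm;
	struct page *page = NULL;
	spinlock_t *ptl;
	pte_t *ptep, pte;

	ptep = huge_pte_offset(mm, address, huge_page_size(h));
	if (!ptep)
		return NULL;

	/*
	 * huge_pte_lock() picks the right lock for the hugetlb size:
	 * the split PMD lock for PMD size pages, mm->page_table_lock
	 * for CONT-PTE size pages.  The entry is stable until unlock.
	 */
	ptl = huge_pte_lock(h, mm, ptep);
	pte = huge_ptep_get(ptep);
	if (pte_present(pte))
		page = pte_page(pte) +
		       ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
	/*
	 * The real patch also takes a page reference and waits on
	 * migration entries here, still under the same lock.
	 */
	spin_unlock(ptl);
	return page;
}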

Cc: <stable@xxxxxxxxxxxxxxx>
Suggested-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
---
Changes from v2:
  - Combine PMD and PTE level hugetlb handling into one function.
  - Drop unnecessary patches.
  - Update the commit message.

Baolin, were you able to at least exercise the new code paths?  Especially the
path for CONT_PTE.  Code looks fine to me.

Yes, I've tested CONT-PTE, CONT-PMD and PMD size hugetlb with the
move_pages() syscall; all of them work well and the lock taken is the
expected one.
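
For reference, a test along these lines could look roughly like the sketch
below. It is only a sketch of the idea, not the exact test I ran: it assumes
an arm64 kernel with a 4K base page size, a reserved hugepages-64kB pool, at
least two NUMA nodes, and libnuma for move_pages() (build with -lnuma
-lpthread); the node numbers and the loop count are arbitrary.

#define _GNU_SOURCE
#include <numaif.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_64KB
#define MAP_HUGE_64KB	(16 << 26)	/* 64K == 2^16, MAP_HUGE_SHIFT == 26 */
#endif

#define HPAGE_SZ	(64UL * 1024)
#define LOOPS		1000

static void *hpage;

static void *mover(void *arg)
{
	int node = (int)(long)arg;
	void *pages[1] = { hpage };
	int nodes[1] = { node };
	int status[1];
	int i;

	/* Each thread repeatedly asks to migrate the same hugetlb page. */
	for (i = 0; i < LOOPS; i++)
		move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	/* Map one 64K (CONT-PTE size on arm64 with 4K pages) hugetlb page. */
	hpage = mmap(NULL, HPAGE_SZ, PROT_READ | PROT_WRITE,
		     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_64KB,
		     -1, 0);
	if (hpage == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(hpage, 0, HPAGE_SZ);		/* fault the page in */

	/* Two threads race: a lookup in one can overlap migration by the other. */
	pthread_create(&t1, NULL, mover, (void *)0L);
	pthread_create(&t2, NULL, mover, (void *)1L);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}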


Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>

It is a little hackish, but this is only for backports, so I think it is OK.
We may want to point out that a code cleanup and simplification is going
upstream that will address these issues in a more elegant manner.


Mike, please fold this patch into your series. Thanks.

If I understand Andrew, this can go in as a separate patch for backport to
address potential bugs.  I will provide a cleanup/simplification that will
remove this going forward.

Andrew also asked for a Fixes tag.
Support for CONT_PMD/_PTE was added with bb9dd3df8ee9 ("arm64: hugetlb:
refactor find_num_contig()"), from the patch series "Support for contiguous
pte hugepages", v4. However, I do not believe these code paths were exercised
until migration support was added with 5480280d3f2d ("arm64/mm: enable
HugeTLB migration for contiguous bit HugeTLB pages").
I would go with 5480280d3f2d.
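
That is, something like:

Fixes: 5480280d3f2d ("arm64/mm: enable HugeTLB migration for contiguous bit HugeTLB pages")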

Makes sense. And I see Andrew has already added a Fixes tag per your
suggestion. Thanks Mike and Andrew.



