Re: [v5 PATCH 1/2] hugetlb: arm64: add mte support

On 9/17/24 3:36 AM, David Hildenbrand wrote:
On 13.09.24 20:34, Yang Shi wrote:
Enable MTE support for hugetlb.

The MTE page flags will be set on the folio only.  When copying a
hugetlb folio (for example, for CoW), the tags for all subpages will be
copied when the first subpage is copied.

When freeing a hugetlb folio, the MTE flags will be cleared.

Signed-off-by: Yang Shi <yang@xxxxxxxxxxxxxxxxxxxxxx>
---
  arch/arm64/include/asm/hugetlb.h |  8 ++++
  arch/arm64/include/asm/mman.h    |  3 +-
  arch/arm64/include/asm/mte.h     | 67 ++++++++++++++++++++++++++++++++
  arch/arm64/kernel/hibernate.c    |  6 +++
  arch/arm64/kernel/mte.c          | 27 ++++++++++++-
  arch/arm64/kvm/guest.c           | 16 ++++++--
  arch/arm64/kvm/mmu.c             | 11 ++++++
  arch/arm64/mm/copypage.c         | 27 ++++++++++++-
  fs/hugetlbfs/inode.c             |  2 +-
  9 files changed, 159 insertions(+), 8 deletions(-)

v5: * Indentation fix and renaming per Catalin.
v4: * Addressed David's comment.
v3: * Fixed the build error when !CONFIG_ARM64_MTE.
     * Incorporated David's suggestion to add hugetlb-folio-specific
       APIs for manipulating the page flags.
     * Don't assume the first page is the head page, since a huge page
       copy can start from any subpage.
v2: * Reimplemented the patch to address Catalin's comments.
     * Added test cases (patch #2) per Catalin.
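
The copypage.c hunk that implements the "copy all tags when the first
subpage is copied" behaviour described above is not quoted in this
message, so the following is only a rough, hypothetical sketch of the
idea; the helper name and structure are assumptions, not the patch's
actual code, and the PG_mte_lock handshake is omitted:

/*
 * Hypothetical sketch only: copy the MTE tags of every subpage of a
 * hugetlb folio in one pass, then mark the destination folio tagged.
 */
static void sketch_copy_hugetlb_mte_tags(struct folio *dst, struct folio *src)
{
	unsigned long i, nr = folio_nr_pages(src);

	if (!system_supports_mte() || !folio_test_hugetlb_mte_tagged(src))
		return;

	/* Copy tags for all subpages, not just the one being copied now. */
	for (i = 0; i < nr; i++)
		mte_copy_page_tags(page_address(folio_page(dst, i)),
				   page_address(folio_page(src, i)));

	folio_set_hugetlb_mte_tagged(dst);
}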

diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 293f880865e8..c6dff3e69539 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -11,6 +11,7 @@
  #define __ASM_HUGETLB_H
    #include <asm/cacheflush.h>
+#include <asm/mte.h>
  #include <asm/page.h>
    #ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
@@ -21,6 +22,13 @@ extern bool arch_hugetlb_migration_supported(struct hstate *h);
  static inline void arch_clear_hugetlb_flags(struct folio *folio)
  {
      clear_bit(PG_dcache_clean, &folio->flags);
+
+#ifdef CONFIG_ARM64_MTE
+    if (system_supports_mte()) {
+        clear_bit(PG_mte_tagged, &folio->flags);
+        clear_bit(PG_mte_lock, &folio->flags);
+    }
+#endif
  }
  #define arch_clear_hugetlb_flags arch_clear_hugetlb_flags
diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
index 5966ee4a6154..304dfc499e68 100644
--- a/arch/arm64/include/asm/mman.h
+++ b/arch/arm64/include/asm/mman.h
@@ -28,7 +28,8 @@ static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
     * backed by tags-capable memory. The vm_flags may be overridden by a
       * filesystem supporting MTE (RAM-based).
       */
-    if (system_supports_mte() && (flags & MAP_ANONYMOUS))
+    if (system_supports_mte() &&
+        (flags & (MAP_ANONYMOUS | MAP_HUGETLB)))
          return VM_MTE_ALLOWED;
        return 0;
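
As an aside, the mman.h hunk above is the piece that marks MAP_HUGETLB
mappings as VM_MTE_ALLOWED. A minimal userspace sketch of the kind of
mapping the series enables might look like the following; this is not
the patch #2 selftest, and it assumes recent kernel headers, a 2MB
default hugepage size, and a PROT_MTE fallback define for older headers:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

#ifndef PROT_MTE
#define PROT_MTE 0x20			/* arm64 value from asm/mman.h */
#endif

int main(void)
{
	size_t len = 2UL * 1024 * 1024;	/* assumes 2MB default hugepage size */
	char *p;

	/* Enable tagged addressing with synchronous tag check faults. */
	if (prctl(PR_SET_TAGGED_ADDR_CTRL,
		  PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC, 0, 0, 0)) {
		perror("prctl");
		return 1;
	}

	/* Request MTE on a hugetlb mapping, as enabled by this series. */
	p = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_MTE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	p[0] = 1;			/* touch the mapping */
	munmap(p, len);
	return 0;
}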
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 0f84518632b4..03dc43636aba 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -41,6 +41,8 @@ void mte_free_tag_storage(char *storage);
    static inline void set_page_mte_tagged(struct page *page)
  {
+    VM_WARN_ON_ONCE(folio_test_hugetlb(page_folio(page)));
+
      /*
       * Ensure that the tags written prior to this function are visible
       * before the page flags update.
@@ -51,6 +53,8 @@ static inline void set_page_mte_tagged(struct page *page)
    static inline bool page_mte_tagged(struct page *page)
  {
+    VM_WARN_ON_ONCE(folio_test_hugetlb(page_folio(page)));
+
      bool ret = test_bit(PG_mte_tagged, &page->flags);
        /*
@@ -76,6 +80,8 @@ static inline bool page_mte_tagged(struct page *page)
   */
  static inline bool try_page_mte_tagging(struct page *page)
  {
+    VM_WARN_ON_ONCE(folio_test_hugetlb(page_folio(page)));
+
      if (!test_and_set_bit(PG_mte_lock, &page->flags))
          return true;
@@ -157,6 +163,67 @@ static inline int mte_ptrace_copy_tags(struct task_struct *child,
    #endif /* CONFIG_ARM64_MTE */
+#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_ARM64_MTE)
+static inline void folio_set_hugetlb_mte_tagged(struct folio *folio)
+{
+    VM_WARN_ON_ONCE(!folio_test_hugetlb(folio));
+
+    /*
+     * Ensure that the tags written prior to this function are visible
+     * before the folio flags update.
+     */
+    smp_wmb();
+    set_bit(PG_mte_tagged, &folio->flags);
+
+}
+
+static inline bool folio_test_hugetlb_mte_tagged(struct folio *folio)
+{
+    VM_WARN_ON_ONCE(!folio_test_hugetlb(folio));
+
+    bool ret = test_bit(PG_mte_tagged, &folio->flags);

Nit: VM_WARN_ should come after "bool ret" ...

+
+    /*
+     * If the folio is tagged, ensure ordering with a likely subsequent
+     * read of the tags.
+     */
+    if (ret)
+        smp_rmb();
+    return ret;
+}
+

Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>

Thanks. Will fix the nit when I rebase the patch after the merge window.
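
For reference, with the declaration moved ahead of the VM_WARN_ON_ONCE()
as David suggests, the helper would presumably read:

static inline bool folio_test_hugetlb_mte_tagged(struct folio *folio)
{
	bool ret = test_bit(PG_mte_tagged, &folio->flags);

	VM_WARN_ON_ONCE(!folio_test_hugetlb(folio));

	/*
	 * If the folio is tagged, ensure ordering with a likely subsequent
	 * read of the tags.
	 */
	if (ret)
		smp_rmb();
	return ret;
}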
