[PATCH v2] mm/mempolicy,hugetlb: Check hstate for hugetlbfs page in vma_migratable

Check hstate at the early phase, when isolating a page, instead of
during the unmap and move phase, so that hugetlb pages whose hstate
does not support migration are not uselessly isolated.

Signed-off-by: Li Xinhai <lixinhai.lxh@xxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
---
v1->v2:
A new function,
bool vm_hugepage_migration_supported(struct vm_area_struct *vma),
is introduced to simplify the interdependency between
include/linux/mempolicy.h and include/linux/hugetlb.h, and it could
also be useful for other callers.

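For reference, other callers could use the helper in the same way the
updated vma_migratable() does. A minimal sketch follows; it is not part
of this patch, can_try_to_migrate() is only an illustrative name, and
it assumes linux/mm.h and linux/hugetlb.h are included:

	/* Illustrative only, not part of this patch. */
	static bool can_try_to_migrate(struct vm_area_struct *vma)
	{
		/*
		 * For hugetlb VMAs, bail out early when the hstate of the
		 * mapping cannot be migrated, instead of isolating the
		 * pages and failing later in the unmap and move phase.
		 */
		if (is_vm_hugetlb_page(vma) &&
		    !vm_hugepage_migration_supported(vma))
			return false;

		return true;
	}
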
 include/linux/hugetlb.h   |  2 ++
 include/linux/mempolicy.h |  6 +++---
 mm/hugetlb.c              | 10 ++++++++++
 3 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 31d4920..52fc034 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -834,6 +834,8 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
 }
 #endif	/* CONFIG_HUGETLB_PAGE */
 
+extern bool vm_hugepage_migration_supported(struct vm_area_struct *vma);
+
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
 					struct mm_struct *mm, pte_t *pte)
 {
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 5228c62..6637166 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -172,6 +172,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
 
 extern void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol);
 
+extern bool vm_hugepage_migration_supported(struct vm_area_struct *vma);
 /* Check if a vma is migratable */
 static inline bool vma_migratable(struct vm_area_struct *vma)
 {
@@ -185,10 +186,9 @@ static inline bool vma_migratable(struct vm_area_struct *vma)
 	if (vma_is_dax(vma))
 		return false;
 
-#ifndef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
-	if (vma->vm_flags & VM_HUGETLB)
+	if (is_vm_hugetlb_page(vma) &&
+		!vm_hugepage_migration_supported(vma))
 		return false;
-#endif
 
 	/*
 	 * Migration allocates pages in the highest zone. If we cannot
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dd8737a..fce149c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1316,6 +1316,16 @@ int PageHeadHuge(struct page *page_head)
 	return get_compound_page_dtor(page_head) == free_huge_page;
 }
 
+bool vm_hugepage_migration_supported(struct vm_area_struct *vma)
+{
+#ifdef CONFIG_HUGETLB_PAGE
+	VM_BUG_ON(!is_vm_hugetlb_page(vma));
+	if (hugepage_migration_supported(hstate_vma(vma)))
+		return true;
+#endif
+	return false;
+}
+
 pgoff_t __basepage_index(struct page *page)
 {
 	struct page *page_head = compound_head(page);
-- 
1.8.3.1