Re: [bug report] hugetlb, mempolicy: fix the mbind hugetlb migration

[CC Mike and Naoya]

On Tue 09-01-18 23:05:39, Dan Carpenter wrote:
> Hello Michal Hocko,
> 
> This is a semi-automatic email about new static checker warnings.
> 
> The patch ef2fc869a863: "hugetlb, mempolicy: fix the mbind hugetlb 
> migration" from Jan 5, 2018, leads to the following Smatch complaint:
> 
>     mm/mempolicy.c:1100 new_page()
>     error: we previously assumed 'vma' could be null (see line 1092)
> 
> mm/mempolicy.c
>   1091		vma = find_vma(current->mm, start);
>   1092		while (vma) {
>                        ^^^
> There is a check for NULL here
> 
>   1093			address = page_address_in_vma(page, vma);
>   1094			if (address != -EFAULT)
>   1095				break;
>   1096			vma = vma->vm_next;
>   1097		}
>   1098	
>   1099		if (PageHuge(page)) {
>   1100			return alloc_huge_page_vma(vma, address);
>                                                    ^^^
> The patch adds a new unchecked dereference.  It might be OK?  I don't
> know.
> 
>   1101		} else if (PageTransHuge(page)) {
>   1102			struct page *thp;

Smatch is correct that the code is fishy. The patch you have outlined is
the last one to touch that area, but it hasn't changed the vma logic.
It only removed the BUG_ON which previously papered over the NULL vma
for your checker, I guess.

The THP path simply falls back to the default mempolicy if vma is NULL.
We should do the same here. The patch below should do the trick.

Thanks for the report!

From 7227218bd526cceb954a688727d78af0b5874e18 Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@xxxxxxxx>
Date: Wed, 10 Jan 2018 11:40:20 +0100
Subject: [PATCH] hugetlb, mbind: fall back to default policy if vma is NULL

Dan Carpenter has noticed that the mbind migration callback (new_page)
can get a NULL vma pointer and choke on it inside alloc_huge_page_vma,
which relies on the VMA to get the hstate. We used to BUG_ON this
case, but the BUG_ON has been removed recently by "hugetlb, mempolicy:
fix the mbind hugetlb migration".

The proper way to handle this is to get the hstate from the migrated
page and rely on huge_node (resp. get_vma_policy) to do the right thing
with a NULL vma. We currently fall back to the default mempolicy in
that case, which is in line with what the THP path does here.

Reported-by: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
Signed-off-by: Michal Hocko <mhocko@xxxxxxxx>
---
 include/linux/hugetlb.h | 5 +++--
 mm/hugetlb.c            | 5 ++---
 mm/mempolicy.c          | 3 ++-
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 612a29b7f6c6..36fa6a2a82e3 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -358,7 +358,8 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 struct page *alloc_huge_page_node(struct hstate *h, int nid);
 struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
 				nodemask_t *nmask);
-struct page *alloc_huge_page_vma(struct vm_area_struct *vma, unsigned long address);
+struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
+				unsigned long address);
 int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
 			pgoff_t idx);
 
@@ -536,7 +537,7 @@ struct hstate {};
 #define alloc_huge_page(v, a, r) NULL
 #define alloc_huge_page_node(h, nid) NULL
 #define alloc_huge_page_nodemask(h, preferred_nid, nmask) NULL
-#define alloc_huge_page_vma(vma, address) NULL
+#define alloc_huge_page_vma(h, vma, address) NULL
 #define alloc_bootmem_huge_page(h) NULL
 #define hstate_file(f) NULL
 #define hstate_sizelog(s) NULL
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ffcae114ceed..27872270ead7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1675,16 +1675,15 @@ struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
 }
 
 /* mempolicy aware migration callback */
-struct page *alloc_huge_page_vma(struct vm_area_struct *vma, unsigned long address)
+struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
+		unsigned long address)
 {
 	struct mempolicy *mpol;
 	nodemask_t *nodemask;
 	struct page *page;
-	struct hstate *h;
 	gfp_t gfp_mask;
 	int node;
 
-	h = hstate_vma(vma);
 	gfp_mask = htlb_alloc_mask(h);
 	node = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
 	page = alloc_huge_page_nodemask(h, node, nodemask);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 30e68da64873..a8b7d59002e8 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1097,7 +1097,8 @@ static struct page *new_page(struct page *page, unsigned long start)
 	}
 
 	if (PageHuge(page)) {
-		return alloc_huge_page_vma(vma, address);
+		return alloc_huge_page_vma(page_hstate(compound_head(page)),
+				vma, address);
 	} else if (PageTransHuge(page)) {
 		struct page *thp;
 
-- 
2.15.1

-- 
Michal Hocko
SUSE Labs
