On Thu 23-08-18 13:59:16, Mike Kravetz wrote:
> The page migration code employs try_to_unmap() to try and unmap the
> source page.  This is accomplished by using rmap_walk to find all
> vmas where the page is mapped.  This search stops when page mapcount
> is zero.  For shared PMD huge pages, the page map count is always 1
> no matter the number of mappings.  Shared mappings are tracked via
> the reference count of the PMD page.  Therefore, try_to_unmap stops
> prematurely and does not completely unmap all mappings of the source
> page.
>
> This problem can result in data corruption as writes to the original
> source page can happen after contents of the page are copied to the
> target page.  Hence, data is lost.
>
> This problem was originally seen as DB corruption of shared global
> areas after a huge page was soft offlined due to ECC memory errors.
> DB developers noticed they could reproduce the issue by (hotplug)
> offlining memory used to back huge pages.  A simple testcase can
> reproduce the problem by creating a shared PMD mapping (note that
> this must be at least PUD_SIZE in size and PUD_SIZE aligned (1GB on
> x86)), and using migrate_pages() to migrate process pages between
> nodes while continually writing to the huge pages being migrated.
>
> To fix, have the try_to_unmap_one routine check for huge PMD sharing
> by calling huge_pmd_unshare for hugetlbfs huge pages.  If it is a
> shared mapping it will be 'unshared', which removes the page table
> entry and drops the reference on the PMD page.  After this, flush
> caches and TLB.
>
> mmu notifiers are called before locking page tables, but we cannot
> be sure of PMD sharing until page tables are locked.  Therefore,
> check for the possibility of PMD sharing before locking so that
> notifiers can prepare for the worst possible case.
>
> Fixes: 39dde65c9940 ("shared page table for hugetlb page")
> Cc: stable@xxxxxxxxxxxxxxx
> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>

Acked-by: Michal Hocko <mhocko@xxxxxxxx>

One nit below.

[...]
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 3103099f64fd..a73c5728e961 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4548,6 +4548,9 @@ static unsigned long page_table_shareable(struct vm_area_struct *svma,
>  	return saddr;
>  }
>  
> +#define _range_in_vma(vma, start, end) \
> +	((vma)->vm_start <= (start) && (end) <= (vma)->vm_end)
> +

Please make this a static inline. Macros and potential side effects
on the given arguments are just not worth the risk. I also think this
is something for more general use; we have that pattern in many
places, so I would put it in linux/mm.h.
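Something along these lines should do (an untested sketch; dropping
the leading underscore once it lives in mm.h is just my suggestion):

	static inline bool range_in_vma(struct vm_area_struct *vma,
			unsigned long start, unsigned long end)
	{
		/*
		 * Same check as the macro, but each argument is
		 * evaluated exactly once and gets a proper type.
		 */
		return vma->vm_start <= start && end <= vma->vm_end;
	}

That way callers passing expressions with side effects stay safe, and
the compiler can actually typecheck the arguments.

Thanks!
--
Michal Hocko
SUSE Labs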