On Mon, Aug 13, 2018 at 05:30:58PM -0700, Mike Kravetz wrote:
> The page migration code employs try_to_unmap() to try and unmap the
> source page.  This is accomplished by using rmap_walk to find all
> vmas where the page is mapped.  This search stops when page mapcount
> is zero.  For shared PMD huge pages, the page map count is always 1
> no matter the number of mappings.  Shared mappings are tracked via
> the reference count of the PMD page.  Therefore, try_to_unmap stops
> prematurely and does not completely unmap all mappings of the source
> page.
>
> This problem can result in data corruption as writes to the original
> source page can happen after contents of the page are copied to the
> target page.  Hence, data is lost.
>
> This problem was originally seen as DB corruption of shared global
> areas after a huge page was soft offlined due to ECC memory errors.
> DB developers noticed they could reproduce the issue by (hotplug)
> offlining memory used to back huge pages.  A simple testcase can
> reproduce the problem by creating a shared PMD mapping (note that
> this must be at least PUD_SIZE in size and PUD_SIZE aligned (1GB on
> x86)), and using migrate_pages() to migrate process pages between
> nodes while continually writing to the huge pages being migrated.
>
> To fix, have the try_to_unmap_one routine check for huge PMD sharing
> by calling huge_pmd_unshare for hugetlbfs huge pages.  If it is a
> shared mapping it will be 'unshared' which removes the page table
> entry and drops the reference on the PMD page.  After this, flush
> caches and TLB.
>
> Fixes: 39dde65c9940 ("shared page table for hugetlb page")
> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> ---
> v2: Fixed build issue for !CONFIG_HUGETLB_PAGE and typos in comment

<formletter>

This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read:
    https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.

</formletter>
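
For readers who want to see the race, below is a minimal sketch of the
reproducer described in the changelog above.  It is an assumption of
what such a testcase could look like, not the actual test used by the
patch author: it assumes an x86_64 box with two NUMA nodes (0 and 1),
about 1GB of free 2MB huge pages, and libnuma installed for the
migrate_pages() wrapper (build with -lnuma).  The MAP_SIZE value, node
numbers, and iteration count are illustrative only.

/*
 * Hypothetical reproducer sketch for the shared-PMD migration race
 * described above; not the actual testcase from the patch submission.
 * Assumes x86_64, NUMA nodes 0 and 1, ~1GB of free 2MB huge pages,
 * and libnuma for the migrate_pages() wrapper (link with -lnuma).
 */
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <numaif.h>

#define MAP_SIZE	(1UL << 30)	/* PUD_SIZE on x86_64: 1GB */

int main(void)
{
	/*
	 * Shared hugetlb mapping.  PMD sharing requires the mapping to be
	 * at least PUD_SIZE in size and PUD aligned; a real test may need
	 * to pass a PUD-aligned hint address to guarantee the alignment.
	 */
	char *addr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
			  MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	pid_t child = fork();
	if (child == 0) {
		/*
		 * Child: second mapper of the same hugetlb object, so PMD
		 * sharing can kick in.  Keep writing while the parent
		 * migrates the pages out from under it.
		 */
		for (;;)
			memset(addr, 0x5a, MAP_SIZE);
	}

	/* Parent: fault the pages in, then bounce them between nodes. */
	memset(addr, 0xa5, MAP_SIZE);

	unsigned long from = 1UL << 0, to = 1UL << 1;	/* node 0 <-> node 1 */
	for (int i = 0; i < 100; i++) {
		if (migrate_pages(0, 8 * sizeof(from), &from, &to) < 0)
			perror("migrate_pages");
		unsigned long tmp = from;
		from = to;
		to = tmp;
	}

	kill(child, SIGKILL);
	waitpid(child, NULL, 0);
	return 0;
}

On an affected kernel the child's writes can land in the old source
page after its contents have already been copied to the target page, so
a stricter version of this sketch would also verify page contents after
each migrate_pages() call to detect the lost writes.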