On Thu, Oct 18, 2018 at 10:28:12AM -0700, Mike Kravetz wrote:
> On 10/10/18 11:04 PM, gregkh@xxxxxxxxxxxxxxxxxxx wrote:
> >
> > The patch below does not apply to the 4.9-stable tree.
> > If someone wants it applied there, or to any other stable or longterm
> > tree, then please email the backport, including the original git commit
> > id to <stable@xxxxxxxxxxxxxxx>.
>
> From: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
>
> mm: migration: fix migration of huge PMD shared pages
>
> commit 017b1660df89f5fb4bfe66c34e35f7d2031100c7 upstream
>
> The page migration code employs try_to_unmap() to try to unmap the
> source page.  This is accomplished by using rmap_walk to find all
> vmas where the page is mapped.  This search stops when the page
> mapcount is zero.  For shared PMD huge pages, the page map count is
> always 1 regardless of the number of mappings.  Shared mappings are
> tracked via the reference count of the PMD page.  Therefore,
> try_to_unmap stops prematurely and does not completely unmap all
> mappings of the source page.
>
> This problem can result in data corruption, as writes to the original
> source page can happen after the contents of the page are copied to
> the target page.  Hence, data is lost.
>
> This problem was originally seen as DB corruption of shared global
> areas after a huge page was soft offlined due to ECC memory errors.
> DB developers noticed they could reproduce the issue by (hotplug)
> offlining memory used to back huge pages.  A simple testcase can
> reproduce the problem by creating a shared PMD mapping (note that
> this must be at least PUD_SIZE in size and PUD_SIZE aligned (1GB on
> x86)), and using migrate_pages() to migrate process pages between
> nodes while continually writing to the huge pages being migrated.
>
> To fix, have the try_to_unmap_one routine check for huge PMD sharing
> by calling huge_pmd_unshare for hugetlbfs huge pages.  If it is a
> shared mapping, it will be 'unshared', which removes the page table
> entry and drops the reference on the PMD page.  After this, flush
> caches and TLB.
>
> mmu notifiers are called before locking page tables, but we cannot
> be sure of PMD sharing until page tables are locked.  Therefore,
> check for the possibility of PMD sharing before locking so that
> notifiers can prepare for the worst possible case.  The mmu notifier
> calls in this commit differ from upstream, because upstream moved to
> a different model here.  Instead of switching to the new model, we
> leave the existing model unchanged and use the mmu_*range* calls only
> in this special case.
>
> Fixes: 39dde65c9940 ("shared page table for hugetlb page")
> Cc: stable@xxxxxxxxxxxxxxx
> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> ---
>  include/linux/hugetlb.h | 14 +++++++++++
>  include/linux/mm.h      |  6 +++++
>  mm/hugetlb.c            | 37 +++++++++++++++++++++++++--
>  mm/rmap.c               | 56 +++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 111 insertions(+), 2 deletions(-)

Now queued up, thanks.

greg k-h
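
[Editor's note: for reference, below is a minimal userspace sketch of the
testcase described in the commit message, not code from the patch.  Two
processes map the same PUD_SIZE-sized, PUD_SIZE-aligned hugetlbfs file so
the kernel can share the PMD page, then one keeps writing while the other
migrates the writer's pages between nodes.  The hugetlbfs path, the mmap
hint address, and NUMA nodes 0/1 are assumptions; the box needs at least
two memory nodes and 512 reserved 2MB huge pages.  Build with
"cc -o pmdshare pmdshare.c -lnuma".]

#include <fcntl.h>
#include <numaif.h>		/* migrate_pages() */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define PUD_SZ	(1UL << 30)	/* 1GB on x86-64 */

int main(void)
{
	const char *path = "/dev/hugepages/pmdshare";	/* assumed mount */
	int fd = open(path, O_CREAT | O_RDWR, 0600);
	if (fd < 0) { perror("open"); return 1; }
	if (ftruncate(fd, PUD_SZ)) { perror("ftruncate"); return 1; }

	/* PUD_SIZE-aligned hint: PMD sharing needs the range PUD-aligned */
	char *map = mmap((void *)(16UL << 30), PUD_SZ,
			 PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) { perror("mmap"); return 1; }

	memset(map, 0, PUD_SZ);		/* fault in this mm's page table */

	pid_t child = fork();
	if (child == 0) {
		/* writer: faults the same range (sharing the parent's
		 * PMD page) and dirties it forever */
		for (;;)
			for (size_t off = 0; off < PUD_SZ; off += 4096)
				map[off]++;
	}

	/* migrator: bounce the writer's pages between nodes 0 and 1;
	 * with the bug, writes land in the old pages after the copy */
	unsigned long from = 1UL << 0, to = 1UL << 1;
	for (int i = 0; i < 100; i++) {
		if (migrate_pages(child, 8 * sizeof(from), &from, &to) < 0)
			perror("migrate_pages");
		unsigned long tmp = from; from = to; to = tmp;
	}
	kill(child, SIGKILL);
	waitpid(child, NULL, 0);
	return 0;
}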
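
[Editor's note: the core of the fix, in sketch form only.  This is
paraphrased from the commit text; identifier names follow the upstream
commit, and the 4.9 backport's locals and unlock label may differ.]

	if (PageHuge(page) && huge_pmd_unshare(mm, &address, pte)) {
		/*
		 * The PUD entry pointing at the shared PMD page was
		 * cleared and the PMD page's refcount dropped.  Flush
		 * caches and TLB over the whole range the shared PMD
		 * covered, then stop: there is no PTE left in this mm
		 * to unmap.  "start"/"end" are the worst-case range
		 * the mmu notifiers were told about before the page
		 * table lock was taken.
		 */
		flush_cache_range(vma, start, end);
		flush_tlb_range(vma, start, end);
		goto out_unmap;		/* hypothetical unlock path */
	}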
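
[Editor's note: the "prepare for the worst possible case" step looks
roughly like the sketch below.  adjust_range_if_pmd_sharing_possible()
is the helper the upstream commit adds; how it is wired into the older
mmu notifier model is the 4.9-specific part Mike mentions, and the
PAGE_SIZE default shown here is a simplification.]

	unsigned long start = address;
	unsigned long end = address + PAGE_SIZE;	/* simplified */

	if (PageHuge(page))
		/*
		 * Sharing cannot be confirmed until the page table
		 * lock is held, so widen [start, end) to the
		 * PUD-sized worst case whenever this VMA could
		 * possibly share the PMD.
		 */
		adjust_range_if_pmd_sharing_possible(vma, &start, &end);

	mmu_notifier_invalidate_range_start(mm, start, end);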