Re: dax pmd fault handler never returns to userspace

On Wed, 2015-11-18 at 11:23 -0700, Ross Zwisler wrote:
> On Wed, Nov 18, 2015 at 10:10:45AM -0800, Dan Williams wrote:
> > On Wed, Nov 18, 2015 at 9:43 AM, Jeff Moyer <jmoyer@xxxxxxxxxx> wrote:
> > > Ross Zwisler <ross.zwisler@xxxxxxxxxxxxxxx> writes:
> > > 
> > > > On Wed, Nov 18, 2015 at 08:52:59AM -0800, Dan Williams wrote:
> > > > > Sysrq-t or sysrq-w dump?  Also do you have the locking fix from Yigal?
> > > > > 
> > > > > https://lists.01.org/pipermail/linux-nvdimm/2015-November/002842.html
> > > > 
> > > > I was able to reproduce the issue in my setup with v4.3, and the patch
> > > > from
> > > > Yigal seems to solve it.  Jeff, can you confirm?
> > > 
> > > I applied the patch from Yigal and the symptoms persist.  Ross, what are
> > > you testing on?  I'm using an NVDIMM-N.
> > > 
> > > Dan, here's sysrq-l (which is what w used to look like, I think).  Only
> > > cpu 3 is interesting:
> > > 
> > > [  825.339264] NMI backtrace for cpu 3
> > > [  825.356347] CPU: 3 PID: 13555 Comm: blk_non_zero.st Not tainted 4.4.0-rc1+ #17
> > > [  825.392056] Hardware name: HP ProLiant DL380 Gen9, BIOS P89 06/09/2015
> > > [  825.424472] task: ffff880465bf6a40 ti: ffff88046133c000 task.ti: ffff88046133c000
> > > [  825.461480] RIP: 0010:[<ffffffff81329856>]  [<ffffffff81329856>] strcmp+0x6/0x30
> > > [  825.497916] RSP: 0000:ffff88046133fbc8  EFLAGS: 00000246
> > > [  825.524836] RAX: 0000000000000000 RBX: ffff880c7fffd7c0 RCX: 000000076c800000
> > > [  825.566847] RDX: 000000076c800fff RSI: ffffffff818ea1c8 RDI: ffffffff818ea1c8
> > > [  825.605265] RBP: ffff88046133fbc8 R08: 0000000000000001 R09: ffff8804652300c0
> > > [  825.643628] R10: 00007f1b4fe0b000 R11: ffff880465230228 R12: ffffffff818ea1bd
> > > [  825.681381] R13: 0000000000000001 R14: ffff88046133fc20 R15: 0000000080000200
> > > [  825.718607] FS:  00007f1b5102d880(0000) GS:ffff88046f8c0000(0000) knlGS:0000000000000000
> > > [  825.761663] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > > [  825.792213] CR2: 00007f1b4fe0b000 CR3: 000000046b225000 CR4: 00000000001406e0
> > > [  825.830906] Stack:
> > > [  825.841235]  ffff88046133fc10 ffffffff81084610 000000076c800000 000000076c800fff
> > > [  825.879533]  000000076c800fff 00000000ffffffff ffff88046133fc90 ffffffff8106d1d0
> > > [  825.916774]  000000000000000c ffff88046133fc80 ffffffff81084f0d 000000076c800000
> > > [  825.953220] Call Trace:
> > > [  825.965386]  [<ffffffff81084610>] find_next_iomem_res+0xd0/0x130
> > > [  825.996804]  [<ffffffff8106d1d0>] ? pat_enabled+0x20/0x20
> > > [  826.024773]  [<ffffffff81084f0d>] walk_system_ram_range+0x8d/0xf0
> > > [  826.055565]  [<ffffffff8106d2d8>] pat_pagerange_is_ram+0x78/0xa0
> > > [  826.088971]  [<ffffffff8106d475>] lookup_memtype+0x35/0xc0
> > > [  826.121385]  [<ffffffff8106e33b>] track_pfn_insert+0x2b/0x60
> > > [  826.154600]  [<ffffffff811e5523>] vmf_insert_pfn_pmd+0xb3/0x210
> > > [  826.187992]  [<ffffffff8124acab>] __dax_pmd_fault+0x3cb/0x610
> > > [  826.221337]  [<ffffffffa0769910>] ? ext4_dax_mkwrite+0x20/0x20 [ext4]
> > > [  826.259190]  [<ffffffffa0769a4d>] ext4_dax_pmd_fault+0xcd/0x100 [ext4]
> > > [  826.293414]  [<ffffffff811b0af7>] handle_mm_fault+0x3b7/0x510
> > > [  826.323763]  [<ffffffff81068f98>] __do_page_fault+0x188/0x3f0
> > > [  826.358186]  [<ffffffff81069230>] do_page_fault+0x30/0x80
> > > [  826.391212]  [<ffffffff8169c148>] page_fault+0x28/0x30
> > > [  826.420752] Code: 89 e5 74 09 48 83 c2 01 80 3a 00 75 f7 48 83 c6 01 0f b6 4e ff 48 83 c2 01 84 c9 88 4a ff 75 ed 5d c3 0f 1f 00 55 48 89 e5 eb 04 <84> c0 74 18 48 83 c7 01 0f b6 47 ff 48 83 c6 01 3a 46 ff 74 eb
> > 
> > Hmm, a loop in the resource sibling list?
> > 
> > What does /proc/iomem say?
> > 
> > Not related to this bug, but lookup_memtype() looks broken for pmd
> > mappings, as we only check PAGE_SIZE instead of HPAGE_SIZE, which
> > will cause problems if the mapping straddles the end of memory.
> > 
> > > The full output is large (48 cpus), so I'm going to be lazy and not
> > > cut-n-paste it here.
> > 
> > Thanks for that ;-)
> 
> Yea, my first round of testing was broken, sorry about that.
> 
> It looks like this test causes the PMD fault handler to be called repeatedly
> until you kill the userspace process.  This doesn't happen with XFS because
> on XFS this test only hits PTE faults, never PMD faults.
> 
> So, looks like a livelock as far as I can tell.
> 
> Still debugging.

I am seeing a similar/same problem in my test.  I think the problem is that in
the case of a WP fault, wp_huge_pmd() calls __dax_pmd_fault() ->
vmf_insert_pfn_pmd(), which is a no-op since the PMD is already mapped.  We
need WP handling for this PMD mapping.

If it helps, I have attached a change for follow_trans_huge_pmd().  I have not
tested it much, though.

Thanks,
-Toshi
---
 include/linux/mm.h |    2 ++
 mm/gup.c           |   24 ++++++++++++++++++++++++
 mm/huge_memory.c   |    8 ++++++++
 3 files changed, 34 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 00bad77..a427b88 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1084,6 +1084,8 @@ struct zap_details {
 
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 		pte_t pte);
+int follow_pfn_pmd(struct vm_area_struct *vma, unsigned long address,
+		pmd_t *pmd, unsigned int flags);
 
 int zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
 		unsigned long size);
diff --git a/mm/gup.c b/mm/gup.c
index deafa2c..15135ee 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -34,6 +34,30 @@ static struct page *no_page_table(struct vm_area_struct *vma,
 	return NULL;
 }
 
+int follow_pfn_pmd(struct vm_area_struct *vma, unsigned long address,
+		pmd_t *pmd, unsigned int flags)
+{
+	/* No page to get reference */
+	if (flags & FOLL_GET)
+		return -EFAULT;
+
+	if (flags & FOLL_TOUCH) {
+		pmd_t entry = *pmd;
+
+		if (flags & FOLL_WRITE)
+			entry = pmd_mkdirty(entry);
+		entry = pmd_mkyoung(entry);
+
+		if (!pmd_same(*pmd, entry)) {
+			set_pmd_at(vma->vm_mm, address, pmd, entry);
+			update_mmu_cache_pmd(vma, address, pmd);
+		}
+	}
+
+	/* Proper page table entry exists, but no corresponding struct page */
+	return -EEXIST;
+}
+
 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
 		pte_t *pte, unsigned int flags)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c29ddeb..41b277a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1276,6 +1276,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page *page = NULL;
+	int ret;
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
@@ -1290,6 +1291,13 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 	if ((flags & FOLL_NUMA) && pmd_protnone(*pmd))
 		goto out;
 
+	/* pfn map does not have struct page */
+	if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)) {
+		ret = follow_pfn_pmd(vma, addr, pmd, flags);
+		page = ERR_PTR(ret);
+		goto out;
+	}
+
 	page = pmd_page(*pmd);
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 	if (flags & FOLL_TOUCH) {
