This commit, which has not yet made it upstream but is in the -mm tree:

  dax: Fix race between colliding PMD & PTE entries

fixed a pair of race conditions where racing DAX PTE and PMD faults could
corrupt page tables.  That fix had two shortcomings, which are addressed
by this patch:

1) In the PTE fault handler we only checked for a collision using
pmd_devmap().  The pmd_devmap() check will trigger when we have raced with
a PMD that has real DAX storage, but to account for the case where we
collide with a huge zero page entry we also need to check for
pmd_trans_huge().

2) In the PMD fault handler we only continued with the fault if no PMD at
all was present (pmd_none()).  This is the case when we are faulting in a
PMD for the first time, but there are two other cases to consider.  The
first is that we are servicing a write fault over a PMD huge zero page,
which we detect with pmd_trans_huge().  The second is that we are
servicing a write fault over a DAX PMD with real storage, which we detect
with pmd_devmap().

Fix both of these.  Instead of manually triggering a fallback in the PMD
collision case, be consistent with the other collision detection code in
the fault handlers and just retry.

Signed-off-by: Ross Zwisler <ross.zwisler@xxxxxxxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
---
For both the -mm tree and for stable, feel free to squash this with the
original commit if you think that is appropriate.  This has passed
targeted testing and an xfstests run.
---
 fs/dax.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index fc62f36..2a6889b 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1160,7 +1160,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
 	 * the PTE we need to set up.  If so just return and the fault will be
 	 * retried.
 	 */
-	if (pmd_devmap(*vmf->pmd)) {
+	if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) {
 		vmf_ret = VM_FAULT_NOPAGE;
 		goto unlock_entry;
 	}
@@ -1411,11 +1411,14 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf,
 	/*
	 * It is possible, particularly with mixed reads & writes to private
	 * mappings, that we have raced with a PTE fault that overlaps with
-	 * the PMD we need to set up.  If so we just fall back to a PTE fault
-	 * ourselves.
+	 * the PMD we need to set up.  If so just return and the fault will be
+	 * retried.
	 */
-	if (!pmd_none(*vmf->pmd))
+	if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd) &&
+			!pmd_devmap(*vmf->pmd)) {
+		result = 0;
 		goto unlock_entry;
+	}
 
 	/*
	 * Note that we don't use iomap_apply here.  We aren't doing I/O, only
--
2.9.4
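
P.S. For reviewers less familiar with this path, below is a condensed,
annotated sketch of the collision logic as it stands after this patch.
It is lifted from the two hunks above and is illustrative only, not
standalone compilable: pmd_none(), pmd_trans_huge(), pmd_devmap() and
VM_FAULT_NOPAGE are the real kernel helpers, but the surrounding handler
code is abbreviated.

	/*
	 * PTE fault path: a racing PMD fault may have installed either a
	 * huge zero page (pmd_trans_huge()) or a DAX PMD with real
	 * storage (pmd_devmap()) in the PMD slot we would populate.  In
	 * both cases, back out and let the fault be retried.
	 */
	if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) {
		vmf_ret = VM_FAULT_NOPAGE;	/* fault will be retried */
		goto unlock_entry;
	}

	/*
	 * PMD fault path: proceed only if the slot is empty (pmd_none(),
	 * the first fault), holds a huge zero page that we are servicing
	 * a write fault over (pmd_trans_huge()), or holds a DAX PMD with
	 * real storage that we are servicing a write fault over
	 * (pmd_devmap()).  Anything else means we raced with a PTE
	 * fault, so return 0 and let the fault be retried instead of
	 * falling back to PTEs ourselves.
	 */
	if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd) &&
			!pmd_devmap(*vmf->pmd)) {
		result = 0;			/* fault will be retried */
		goto unlock_entry;
	}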