When a fault to a hole races with a write filling the hole, block zeroing
in __dax_fault() can overwrite the data copied by the write. Since the
filesystem is supposed to provide pre-zeroed blocks for faults anyway,
just remove the racy zeroing from the DAX code. The only catch is read
faults over an unwritten block, where __dax_fault() used to fill the
block into the page tables anyway; for that case we have to fall back to
using the hole page now.

Signed-off-by: Jan Kara <jack@xxxxxxx>
---
 fs/dax.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index d496466652cd..50d81172438b 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -582,11 +582,6 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
 		error = PTR_ERR(dax.addr);
 		goto out;
 	}
-
-	if (buffer_unwritten(bh) || buffer_new(bh)) {
-		clear_pmem(dax.addr, PAGE_SIZE);
-		wmb_pmem();
-	}
 	dax_unmap_atomic(bdev, &dax);
 
 	error = dax_radix_entry(mapping, vmf->pgoff, dax.sector, false,
@@ -665,7 +660,7 @@ int __dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 	if (error)
 		goto unlock_page;
 
-	if (!buffer_mapped(&bh) && !buffer_unwritten(&bh) && !vmf->cow_page) {
+	if (!buffer_mapped(&bh) && !vmf->cow_page) {
 		if (vmf->flags & FAULT_FLAG_WRITE) {
 			error = get_block(inode, block, &bh, 1);
 			count_vm_event(PGMAJFAULT);
@@ -950,8 +945,6 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 		}
 
 		if (buffer_unwritten(&bh) || buffer_new(&bh)) {
-			clear_pmem(dax.addr, PMD_SIZE);
-			wmb_pmem();
 			count_vm_event(PGMAJFAULT);
 			mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
 			result |= VM_FAULT_MAJOR;
-- 
2.6.2
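
P.S. For readers who want a compact mental model of the new behaviour, below
is a minimal userspace sketch of the decision the changelog describes. It is
illustrative only, not kernel code, and the toy_* names are invented for this
example: DAX no longer zeroes unwritten/new blocks itself, write faults rely
on the filesystem to allocate and pre-zero blocks, and read faults over holes
or unwritten blocks map the hole page instead.

/* toy_dax_fault.c - illustrative only, not kernel code.
 *
 * Models the fault-path decision described in the changelog: DAX no
 * longer zeroes unwritten/new blocks itself; write faults ask the
 * filesystem for (pre-zeroed) allocated blocks, and read faults over
 * holes or unwritten blocks fall back to the zero/hole page.
 */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical block states, loosely mirroring buffer_head flags. */
enum toy_block_state {
	TOY_HOLE,	/* no block allocated (!buffer_mapped)       */
	TOY_UNWRITTEN,	/* allocated but not initialized (unwritten) */
	TOY_WRITTEN,	/* allocated and initialized                 */
};

enum toy_fault_result {
	TOY_MAP_HOLE_PAGE,	/* map the shared zero/hole page           */
	TOY_ALLOCATE_AND_MAP,	/* ask the fs for a pre-zeroed block       */
	TOY_MAP_BLOCK,		/* map the existing block directly         */
};

/*
 * Decision logic after the patch: DAX itself never zeroes.  A read
 * fault that would previously have zeroed an unwritten block and
 * mapped it now uses the hole page instead.
 */
static enum toy_fault_result toy_handle_fault(enum toy_block_state st,
					      bool write_fault)
{
	if (st == TOY_WRITTEN)
		return TOY_MAP_BLOCK;

	if (write_fault)
		/* Filesystem allocates/converts and provides zeroed data. */
		return TOY_ALLOCATE_AND_MAP;

	/* Read fault over a hole or unwritten block: no zeroing in DAX. */
	return TOY_MAP_HOLE_PAGE;
}

int main(void)
{
	static const char *const result_name[] = {
		"map hole page", "allocate+map", "map block",
	};
	static const char *const state_name[] = {
		"hole", "unwritten", "written",
	};

	for (int st = TOY_HOLE; st <= TOY_WRITTEN; st++)
		for (int wr = 0; wr <= 1; wr++)
			printf("%-9s %-5s fault -> %s\n", state_name[st],
			       wr ? "write" : "read",
			       result_name[toy_handle_fault(st, wr)]);
	return 0;
}

Compiled as an ordinary C program, it just prints the resulting action for
each (block state, fault type) pair, which is the behaviour the hunks above
leave us with once the clear_pmem()/wmb_pmem() calls are gone.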