On Thu, May 12, 2016 at 06:29:15PM +0200, Jan Kara wrote:
> Currently the handling of huge pages for DAX is racy. For example, the
> following can happen:
>
> CPU0 (THP write fault)                    CPU1 (normal read fault)
>
> __dax_pmd_fault()                         __dax_fault()
>   get_block(inode, block, &bh, 0) -> not mapped
>                                             get_block(inode, block, &bh, 0)
>                                               -> not mapped
>   if (!buffer_mapped(&bh) && write)
>     get_block(inode, block, &bh, 1) -> allocates blocks
>   truncate_pagecache_range(inode, lstart, lend);
>                                             dax_load_hole();
>
> This results in data corruption, since the process on CPU1 won't see the
> changes made to the file by CPU0.
>
> The race can happen even when two normal faults race; with THP, however,
> the situation is even worse because the two faults don't operate on the
> same entries in the radix tree, and we want to use these entries for
> serialization. So make THP support in DAX code depend on CONFIG_BROKEN
> for now.
>
> Signed-off-by: Jan Kara <jack@xxxxxxx>

Reviewed-by: Ross Zwisler <ross.zwisler@xxxxxxxxxxxxxxx>
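
[Editorial note: the patch itself only gates THP support behind CONFIG_BROKEN.
For context, here is a minimal sketch of how per-entry locking in the radix
tree could serialize the two racing fault paths from the diagram above. The
helpers grab_mapping_entry(), put_locked_mapping_entry(), and the function
dax_fault_locked() are assumed, illustrative names, not part of this patch;
the get_block() calls follow the pattern quoted above.]

	static int dax_fault_locked(struct inode *inode, struct vm_fault *vmf,
				    sector_t block, get_block_t get_block)
	{
		struct buffer_head bh = { .b_size = PAGE_SIZE };
		bool write = vmf->flags & FAULT_FLAG_WRITE;
		void *entry;
		int error;

		/*
		 * Lock (or create) the radix tree entry for this index.  A
		 * concurrent fault on the same range blocks here instead of
		 * proceeding on a stale "not mapped" result.
		 */
		entry = grab_mapping_entry(inode->i_mapping, vmf->pgoff);
		if (IS_ERR(entry))
			return VM_FAULT_OOM;

		error = get_block(inode, block, &bh, 0);
		if (!error && !buffer_mapped(&bh) && write)
			/* Allocate blocks; racing readers wait on the entry. */
			error = get_block(inode, block, &bh, 1);

		/* ... map the block, or load a hole page for a read ... */

		/* Unlock the entry; any waiting fault now sees the result. */
		put_locked_mapping_entry(inode->i_mapping, vmf->pgoff, entry);
		return error ? VM_FAULT_SIGBUS : VM_FAULT_NOPAGE;
	}

With both the PMD and the PTE fault path funneled through the same locked
entry, CPU1 in the diagram would block until CPU0's allocation and truncate
have finished, rather than installing a hole page over freshly allocated
blocks.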