3.16.36-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Dave Chinner <dchinner@xxxxxxxxxx>

commit ec56b1f1fdc69599963574ce94cc5693d535dd64 upstream.

Lock ordering for the new mmap lock needs to be:

	mmap_sem
	  sb_start_pagefault
	    i_mmap_lock
	      page lock
	        <fault processing>

Right now, xfs_vm_page_mkwrite gets this the wrong way around.  While
technically it cannot deadlock due to the current freeze ordering, it's
still a landmine that might explode if we change anything in future.
Hence we need to nest the locks correctly.

Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
Reviewed-by: Jan Kara <jack@xxxxxxx>
Reviewed-by: Brian Foster <bfoster@xxxxxxxxxx>
Signed-off-by: Dave Chinner <david@xxxxxxxxxxxxx>
Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: xfs@xxxxxxxxxxx
---
 fs/xfs/xfs_file.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1441,15 +1441,20 @@ xfs_filemap_page_mkwrite(
 	struct vm_fault		*vmf)
 {
 	struct xfs_inode	*ip = XFS_I(vma->vm_file->f_mapping->host);
-	int			error;
+	int			ret;
 
 	trace_xfs_filemap_page_mkwrite(ip);
 
+	sb_start_pagefault(VFS_I(ip)->i_sb);
+	file_update_time(vma->vm_file);
 	xfs_ilock(ip, XFS_MMAPLOCK_SHARED);
-	error = block_page_mkwrite(vma, vmf, xfs_get_blocks);
+
+	ret = __block_page_mkwrite(vma, vmf, xfs_get_blocks);
+
 	xfs_iunlock(ip, XFS_MMAPLOCK_SHARED);
+	sb_end_pagefault(VFS_I(ip)->i_sb);
 
-	return error;
+	return block_page_mkwrite_return(ret);
 }
 
 const struct file_operations xfs_file_operations = {

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
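
For reviewers following the lock ordering argument, below is a sketch of how
xfs_filemap_page_mkwrite() reads with this patch applied, annotated with where
each level of the ordering is taken.  It is reconstructed from the hunk above;
the function prologue (the STATIC qualifier and the vm_area_struct parameter)
is not shown in the diff and is assumed here from context.

/*
 * Sketch only: the resulting fault handler with the corrected nesting.
 * mmap_sem is already held by the generic page fault path on entry.
 */
STATIC int
xfs_filemap_page_mkwrite(
	struct vm_area_struct	*vma,		/* assumed, not in the hunk */
	struct vm_fault		*vmf)
{
	struct xfs_inode	*ip = XFS_I(vma->vm_file->f_mapping->host);
	int			ret;

	trace_xfs_filemap_page_mkwrite(ip);

	sb_start_pagefault(VFS_I(ip)->i_sb);	/* freeze protection first */
	file_update_time(vma->vm_file);
	xfs_ilock(ip, XFS_MMAPLOCK_SHARED);	/* then the mmap lock */

	/* __block_page_mkwrite() takes the page lock and does the fault work */
	ret = __block_page_mkwrite(vma, vmf, xfs_get_blocks);

	xfs_iunlock(ip, XFS_MMAPLOCK_SHARED);
	sb_end_pagefault(VFS_I(ip)->i_sb);

	return block_page_mkwrite_return(ret);
}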