Re: [f2fs-dev] [PATCH 1/2] f2fs: avoid deadlock caused by lock order of page and lock_op

Hi Jaegeuk,

On 2017/6/25 0:25, Jaegeuk Kim wrote:
> - punch_hole
>  - fill_zero
>   - f2fs_lock_op
>   - get_new_data_page
>    - lock_page
> 
> - f2fs_write_data_pages
>  - lock_page
>  - do_write_data_page
>   - f2fs_lock_op

Good catch!

With this implementation, page writeback can fail due to a concurrent checkpoint;
that will make fsync/atomic_commit, which trigger synchronous writes, fail randomly.
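To make the concern concrete, below is a minimal sketch of how the -EAGAIN
would surface in a writepage-style caller (only do_write_data_page(), LOCK_REQ
and the -EAGAIN return value come from your patch; the function name and the
redirty handling here are assumptions, not the real f2fs_write_data_page):

/*
 * Sketch only: when a checkpoint writer already holds cp_rwsem, the
 * trylock inside do_write_data_page() fails and the page has to be
 * redirtied, so a synchronous writer (fsync/atomic_commit) loses this
 * round of writeback.
 */
static int writepage_sketch(struct page *page, struct writeback_control *wbc,
			    struct f2fs_io_info *fio)
{
	int err;

	/* writeback path: page lock is taken first ... */
	lock_page(page);

	fio->need_lock = LOCK_REQ;
	err = do_write_data_page(fio);	/* ... then cp_rwsem via f2fs_trylock_op() */
	if (err == -EAGAIN) {
		/* checkpoint in flight: skip this page and retry on the next pass */
		redirty_page_for_writepage(wbc, page);
		err = 0;
	}

	unlock_page(page);
	return err;
}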

How about unifying the lock order in punch_hole with the one used in writepages
for regular inodes? We could add one more parameter to get_new_data_page to
indicate whether the callee needs to take cp_rwsem itself.
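Just to sketch what I mean (the extra 'lock_op' parameter below is an
assumption, it does not exist today, and fill_zero_sketch() is only a stand-in
for the current fill_zero()):

/*
 * get_new_data_page() would take cp_rwsem by itself, after the page
 * lock, so punch_hole follows the same order as the writeback path.
 */
struct page *get_new_data_page(struct inode *inode, struct page *ipage,
			       pgoff_t index, bool new_i_size, bool lock_op);

static int fill_zero_sketch(struct inode *inode, pgoff_t index,
			    loff_t start, loff_t len)
{
	struct page *page;

	if (!len)
		return 0;

	/* no outer f2fs_lock_op(): page lock first, cp_rwsem taken inside */
	page = get_new_data_page(inode, NULL, index, false, true);
	if (IS_ERR(page))
		return PTR_ERR(page);

	f2fs_wait_on_page_writeback(page, DATA, true);
	zero_user(page, start, len);
	set_page_dirty(page);
	f2fs_put_page(page, 1);
	return 0;
}

With that, both paths would take the page lock before cp_rwsem, and the
trylock/-EAGAIN fallback in do_write_data_page() would no longer be needed for
this case.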

Thanks,

> 
> Signed-off-by: Jaegeuk Kim <jaegeuk@xxxxxxxxxx>
> ---
>  fs/f2fs/data.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 7d3af48d34a9..9141bd19a902 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -1404,8 +1404,9 @@ int do_write_data_page(struct f2fs_io_info *fio)
>  		}
>  	}
>  
> -	if (fio->need_lock == LOCK_REQ)
> -		f2fs_lock_op(fio->sbi);
> +	/* Avoid a deadlock between the page lock and f2fs_lock_op */
> +	if (fio->need_lock == LOCK_REQ && !f2fs_trylock_op(fio->sbi))
> +		return -EAGAIN;
>  
>  	err = get_dnode_of_data(&dn, page->index, LOOKUP_NODE);
>  	if (err)
> 



