Re: Correctness of inode_dio_end in generic DIO code

Hello,

On Tue 20-02-18 10:59:46, Nikolay Borisov wrote:
> Currently the generic DIO code calls inode_dio_begin/inode_dio_end if
> DIO_SKIP_DIO_COUNT is not set. However, the generic code doesn't really
> know if there is a lock synchronizing all the various inode_dio_*
> operations. As per the inode_dio_wait comment:
> 
> 
> Must be called under a lock that serializes taking new references to
> i_dio_count, usually by inode->i_mutex.
> 
> So is it at all correct to increment i_dio_count in generic dio code
> without imposing a strict locking requirement? Currently, most major
> filesystems (ext4/xfs/btrfs) modify i_dio_count under their own
> locks. Perhaps it's best if the i_dio_count modifications are removed
> from the generic code; what do people think about that?

Currently the onus is on inode_dio_wait() callers to make sure they cannot
livelock (usually by calling that function in a context which blocks
submission of new direct IO). So in this sense I don't see anything wrong
with calling inode_dio_begin() from do_blockdev_direct_IO(). Whether
calling these functions directly from fs code instead of from
do_blockdev_direct_IO() to make things clearer is worth the additional
code in quite a few filesystems is IMHO a matter of taste. I'm fine with
the current state but then I admit I might have just got used to it :)

							Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR


