On Mon, May 03, 2010 at 09:53:44AM +0530, Nikanth Karthikesan wrote:
> On Saturday 01 May 2010 12:34:26 Amit K. Arora wrote:
> > On Fri, Apr 30, 2010 at 02:33:19PM -0700, Andrew Morton wrote:
> > > Also, there doesn't seem to be much point in doing
> > >
> > >	mutex_lock(i_mutex);
> > >	if (some_condition)
> > >		bale out
> > >	mutex_unlock(i_mutex);
> > >
> > >	<stuff>
> > >
> > > because `some_condition' can now become true before or during the
> > > execution of `stuff'.
> > >
> > > IOW, it's racy.
> >
> > oh, yes. :(
> >
> > Agreed. How about doing this check in the filesystem specific fallocate
> > inode routines instead ? For example, in ext4 we could do :
>
> I guess, calling the filesystem specific fallocate with the lock held
> would create lock ordering problems? If so, this might be the only way.
> But it would be better to document at the call site, that the callee
> should check for RLIMIT_FSIZE.

Hmm.. I never said to call the filesystem specific fallocate with
i_mutex held. What I suggested was that each filesystem at some point
anyhow takes the i_mutex to preallocate. That's where the check should
be, to avoid the race. This is what the example patch below does.

--
Regards,
Amit Arora

> Thanks
> Nikanth
>
> > diff -Nuarp linux-2.6.org/fs/ext4/extents.c linux-2.6.new/fs/ext4/extents.c
> > --- linux-2.6.org/fs/ext4/extents.c	2010-05-01 12:16:07.000000000 +0530
> > +++ linux-2.6.new/fs/ext4/extents.c	2010-05-01 12:17:37.000000000 +0530
> > @@ -3672,6 +3672,11 @@ long ext4_fallocate(struct inode *inode,
> >  	 */
> >  	credits = ext4_chunk_trans_blocks(inode, max_blocks);
> >  	mutex_lock(&inode->i_mutex);
> > +	ret = inode_newsize_ok(inode, (offset + len));
> > +	if (ret) {
> > +		mutex_unlock(&inode->i_mutex);
> > +		return ret;
> > +	}
> >  retry:
> >  	while (ret >= 0 && ret < max_blocks) {
> >  		block = block + ret;
> >
> > Similarly for ocfs2, btrfs and xfs..
> >
> > --
> > Regards,
> > Amit Arora

--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html