Re: [PATCH V8 15/19] xfs: Directory's data fork extent counter can never overflow

On Mon, Mar 21, 2022 at 10:47:46AM +0530, Chandan Babu R wrote:
> The maximum file size that can be represented by the data fork extent counter
> in the worst case occurs when all extents are 1 block in length and each block
> is 1KB in size.
> 
> With XFS_MAX_EXTCNT_DATA_FORK_SMALL representing the maximum extent count and
> with 1KB sized blocks, a file can reach up to
> (2^31) * 1KB = 2TB
> 
> This is much larger than the theoretical maximum size of a directory
> i.e. 32GB * 3 = 96GB.
> 
> Since a directory's inode can never overflow its data fork extent counter,
> this commit replaces checking the return value of
> xfs_iext_count_may_overflow() with calls to ASSERT(error == 0).

I'd really prefer that we don't add noise like this to a bunch of
call sites.  If directories can't overflow the extent count in
normal operation, then why are we even calling
xfs_iext_count_may_overflow() in these paths? i.e. an overflow would
be a sign of an inode corruption, and we should have flagged that
long before we do an operation that might overflow the extent count.

So, really, I think you should document the directory size
constraints at the site where we define all the large extent count
values in xfs_format.h, remove the xfs_iext_count_may_overflow()
checks from the directory code and replace them with a simple inode
verifier check that we haven't got more than 100GB worth of
individual extents in the data fork for directory inodes....

Then all these directory-specific "can't possibly overflow" checks
can go away completely.  The best code is no code :)

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
