Re: [PATCH 0/2] xfs: make cluster size tunable for sparse allocation

On Mon, Dec 16, 2024 at 09:05:47PM +0800, Tianxiang Peng wrote:
> This patch series makes the inode cluster size a tunable parameter
> in mkfs.xfs when sparse inode allocation is enabled, and also makes
> xfs use the inode cluster size read from the superblock directly
> rather than recalculating it itself and then verifying.
> 
> Under extreme fragmentation, even sparse inode allocation may fail
> with the current default inode cluster size, i.e. 8192 bytes. Such
> fragmentation can come from PUNCH_HOLE fallocate() calls, which are
> issued by some applications, for example MySQL InnoDB page
> compression. On xfs with a 4K block size, MySQL may write out a 16K
> buffer with direct I/O (which immediately triggers block allocation)
> and then try to compress the 16K buffer to under 4K. If the
> compression succeeds, MySQL punches out the latter 12K, leaving only
> the first 4K allocated:
> 	after writing the 16K buffer: OOOO
> 	after punching the latter 12K: OXXX
> where O means a page with a block allocated and X a page without.
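
For illustration, a minimal userspace sketch of that I/O pattern; the
file name, sizes and error handling are hypothetical, not MySQL's
actual code:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	void *buf;
	int fd;

	/* hypothetical InnoDB table file, opened with O_DIRECT */
	fd = open("tablespace.ibd", O_CREAT | O_WRONLY | O_DIRECT, 0644);
	if (fd < 0)
		return 1;

	/* 16K direct write: four contiguous 4K blocks allocated at once */
	if (posix_memalign(&buf, 4096, 16384))
		return 1;
	memset(buf, 0, 16384);
	if (pwrite(fd, buf, 16384, 0) != 16384)
		return 1;

	/* compression succeeded: punch out the latter 12K, OOOO -> OXXX */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      4096, 12288))
		return 1;

	free(buf);
	return close(fd);
}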
> 
> This feature saves disk space (the 12K freed by punching can be
> reused by others), but it also makes the filesystem much more
> fragmented. Since xfs has no automatic defragmentation mechanism, in
> the most extreme cases only runs of 1-3 physically contiguous blocks
> remain available.
> 
> For data block allocation such fragmentation is not a problem, as
> physical contiguity is not always required. But inode chunk
> allocation does require it; even for sparse allocation, a degree of
> physical contiguity still has to be guaranteed. Currently that
> requirement is calculated from a scaled inode cluster size. In xfs,
> inodes are manipulated (e.g. read in, logged, written back) in
> clusters, and the size of such a cluster is the inode cluster size.
> The sparse allocation unit is currently derived from it:
> 	(inode size / MIN_INODE_SIZE) * inode cluster size
> 		-> sparse allocation alignment
> 			-> sparse allocation unit
> For example, under the default mkfs configuration (i.e. crc and
> sparse allocation enabled, 4K block size), the inode size is 512
> bytes (2 times MIN_INODE_SIZE = 256 bytes), so the sparse allocation
> unit will be 2 * the current inode cluster size (8192 bytes) = 16384
> bytes, that is, 4 blocks. As mentioned above, under extreme
> fragmentation the filesystem may be full of runs of 1-3 physically
> contiguous blocks but never find one of 4, so even sparse allocation
> will fail. If we know an application will easily create such
> fragmentation, we had better have a way to loosen the sparse
> allocation requirement manually.
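
The scaling above in runnable form; the constants follow the cover
letter's defaults and the macro names are illustrative, not the
kernel's:

#include <stdint.h>
#include <stdio.h>

#define MIN_INODE_SIZE		256U	/* smallest on-disk inode, bytes */
#define INODE_CLUSTER_SIZE	8192U	/* current default cluster, bytes */

int main(void)
{
	uint32_t inode_size = 512;	/* default with crc enabled */
	uint32_t block_size = 4096;
	uint32_t sparse_unit;

	/* sparse allocation unit = scaled inode cluster size */
	sparse_unit = (inode_size / MIN_INODE_SIZE) * INODE_CLUSTER_SIZE;

	printf("sparse allocation unit: %u bytes = %u blocks\n",
	       sparse_unit, sparse_unit / block_size);	/* 16384 = 4 */
	return 0;
}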

Please go and study what mkfs.xfs -i align=1 does, how it affects
sb_inoalignmt, and how that then affects sparse inode cluster
size and alignment. i.e. sparse inode clusters must be correctly
aligned and they have a fixed minimum size, so we can't just
arbitrarily select a sparse cluster size like these patches enable a
user to do.
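
A hedged sketch of those two invariants; the function, parameter
names and constants are illustrative assumptions, not the kernel's
actual validation code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative only: a sparse inode cluster must meet a fixed minimum
 * size, and it must start at an AG block offset aligned to the sparse
 * inode alignment the superblock advertises. An arbitrary user-chosen
 * cluster size can violate either invariant.
 */
static bool
sparse_cluster_ok(uint32_t cluster_blocks, uint32_t min_cluster_blocks,
		  uint32_t spino_align, uint32_t agbno)
{
	if (cluster_blocks < min_cluster_blocks)	/* fixed minimum size */
		return false;
	if (agbno % spino_align)			/* misaligned start */
		return false;
	return true;
}

int main(void)
{
	/* a 4-block cluster at agbno 8 passes with 4-block alignment;
	 * the same cluster at agbno 6 is misaligned and fails */
	printf("%d %d\n", sparse_cluster_ok(4, 4, 4, 8),
	       sparse_cluster_ok(4, 4, 4, 6));
	return 0;
}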

> This patch series achieves that by making the inode cluster size,
> the source of the sparse allocation unit, a tunable parameter.

Fundamentally, I think this is the wrong way to solve the
problem because it requires the system admin to know ahead of time
that this specific database configuration is going to cause
fragmentation and inode allocation issues.

Once the problem manifests, it is too late to run mkfs to change the
geometry for the fs, so we really need to change the runtime
allocation policy code to minimise the impact of the data
fragmentation as much as possible.

As to that policy change, it has been discussed here:

https://lore.kernel.org/linux-xfs/20241104014439.3786609-1-zhangshida@xxxxxxxxxx/

and my preferred generic solution to the problem is to define the
high AG space as metadata preferred, thereby preventing data
allocation from occurring in it until all other AGs are full of
data.
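
A self-contained model of that policy; the structures and the
selection loop are hypothetical, not xfs's actual allocator:

#include <stdbool.h>
#include <stdio.h>

struct ag {
	bool metadata_preferred;	/* reserved for metadata/inodes */
	unsigned long free_blocks;
};

/* Data allocation skips metadata-preferred AGs on the first pass and
 * only spills into them once every other AG is full. */
static int pick_ag_for_data(struct ag *ags, int nr)
{
	for (int pass = 0; pass < 2; pass++)
		for (int i = 0; i < nr; i++) {
			if (pass == 0 && ags[i].metadata_preferred)
				continue;
			if (ags[i].free_blocks)
				return i;
		}
	return -1;
}

int main(void)
{
	struct ag ags[4] = {
		{ false, 100 }, { false, 0 }, { false, 0 },
		{ true, 100 },	/* high AG marked metadata preferred */
	};

	printf("data goes to AG %d\n", pick_ag_for_data(ags, 4));   /* 0 */
	ags[0].free_blocks = 0;
	printf("data spills to AG %d\n", pick_ag_for_data(ags, 4)); /* 3 */
	return 0;
}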

I'm still waiting to hear back as to whether the inode32 algorithm
behaves as expected (it uses metadata preferred AGs to direct data
to fill the high AGs first) before we move forward with an
allocation policy based fix for this workload issue. If you can
reproduce the issue on demand, then perhaps you could also run the
same experiment - build a 2TB filesystem with ~300 AGs, mount it
with inode32, and demonstrate that the upper AGs are filled to near
full before data spills to the lower AGs.

If inode32 behaves as it should under the mysql workload, then it
seems like a relatively trivial tweak to the AG setup at mount time
to always reserve some space in the high AG(s) for inode allocation
and hence largely mitigate this problem for everyone....

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx



