Dave Chinner <david@xxxxxxxxxxxxx> wrote on Mon, Nov 4, 2024 at 11:32:
>
> On Mon, Nov 04, 2024 at 09:44:34AM +0800, zhangshida wrote:
> > From: Shida Zhang <zhangshida@xxxxxxxxxx>
> >
> > Hi all,
> >
> > Recently, we've been encountering xfs problems from our two
> > major users continuously.
> > They all manifest as the same phenomenon: an xfs filesystem
> > can't create a new file even though nearly half of its space
> > is still available, even with sparse inodes enabled.
>
> What application is causing this, and does using extent size hints
> make the problem go away?
>

Both are database-like applications, like mysql. Their source code
isn't available to us. And I doubt they have the ability to modify
the database source code...

> Also, xfs_info and xfs_spaceman free space histograms would be
> useful information.
>

There are two such cases.

In one case:

$ xfs_info disk.img
meta-data=disk.img             isize=512    agcount=344, agsize=1638400 blks
         =                     sectsz=4096  attr=2, projid32bit=1
         =                     crc=1        finobt=1, sparse=1, rmapbt=0
         =                     reflink=1
data     =                     bsize=4096   blocks=563085312, imaxpct=25
         =                     sunit=64     swidth=64 blks
naming   =version 2            bsize=4096   ascii-ci=0, ftype=1
log      =internal log         bsize=4096   blocks=12800, version=2
         =                     sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                 extsz=4096   blocks=0, rtextents=0

$ xfs_db -c freesp disk.img
   from      to  extents     blocks    pct
      1       1 43375262   43375262  22.32
      2       3 64068226  150899026  77.66
      4       7        1          5   0.00
     32      63        3        133   0.00
    256     511        1        315   0.00
   1024    2047        1       1917   0.00
   8192   16383        2      20477   0.01

Another was mentioned already by one of my teammates. See:
https://lore.kernel.org/linux-xfs/173053338963.1934091.14116776076321174850.b4-ty@xxxxxxxxxx/T/#t

[root@localhost ~]# xfs_db -c freesp /dev/vdb
   from      to  extents     blocks    pct
      1       1      215        215   0.01
      2       3   994476    1988952  99.99
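To spell out what that second histogram implies, here is a quick
back-of-the-envelope check (a toy program; the bucket numbers are
copied from the xfs_db output above, and a 4k block size is assumed):

#include <stdio.h>

int main(void)
{
	/* freesp buckets from /dev/vdb: { largest extent in this
	 * bucket (blocks), total free blocks in this bucket } */
	long buckets[][2] = { { 1, 215 }, { 3, 1988952 } };
	long free_blocks = 0;

	for (unsigned i = 0; i < 2; i++)
		free_blocks += buckets[i][1];

	/* plenty of free space overall, yet no single free extent is
	 * larger than 3 blocks, so any allocation that needs 4 or
	 * more contiguous 4k blocks must fail */
	printf("free: %ld blocks (%.1f GiB), largest free extent: 12 KiB\n",
	       free_blocks, free_blocks * 4096.0 / (1L << 30));
	return 0;
}

That prints roughly "free: 1989167 blocks (7.6 GiB), largest free
extent: 12 KiB" -- the "plenty of space, nothing usable" situation
described above.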
> > It turns out that the filesystem is too fragmented to have
> > enough contiguous free space to create a new file.
> >
> > Life still has to go on.
> > But from our users' perspective, worse than xfs being hard to
> > use is xfs being unusable, since now not even a single file
> > can be created.
> >
> > So we try to introduce a new space allocation algorithm to
> > solve this.
> >
> > To achieve that, we propose a new concept: Allocation Fields,
> > whose name is borrowed from the mathematical concepts (Groups,
> > Rings, Fields), and which will be
>
> I have no idea what this means. We don't have rings or fields,
> and an allocation group is simply a linear address space range.
> Please explain this concept (pointers to definitions and algorithms
> appreciated!)
>
> > abbreviated as AF in the rest of the article.
> >
> > What is an AF?
> > A one-picture-says-it-all version of the explanation:
> >
> > |<--------+ af 0 +-------->|<--+ af 1 +-->| af 2|
> > |------------------------------------------------+
> > | ag 0 | ag 1 | ag 2 | ag 3| ag 4 | ag 5 | ag 6 |
> > +------------------------------------------------+
> >
> > A text-based definition of AF:
> > 1. An AF is an incore-only concept, in contrast to the on-disk
> >    AG concept.
> > 2. An AF consists of a contiguous series of AGs.
> > 3. Lower AFs will NEVER go to higher AFs for allocation if
> >    it can be completed in the current AF.
> >
> > Rule 3 can serve as a barrier between the AFs to slow down the
> > runaway spread of fragmented pieces.
>
> To a point, yes. But it's not really a reliable solution, because
> directories are rotored across all AGs. Hence if the workload is
> running across multiple AGs, then all of the AFs can be being
> fragmented at the same time.
>

You mean the directory inodes are expected to be distributed evenly
over the entire filesystem, and the file extents of those directories
will be distributed in the same way?

The ideal AF layout to construct is one that confines the higher AFs
to a small part of the entire [0, agcount) range. Like:

|<-----+ af 0 +----->|<af 1>|
|----------------------------
| ag 0 | ag 1 | ag 2 | ag 3 |
+----------------------------

So for most of the AGs (0, 1, 2), which sit in af 0, that will not be
a problem. As for the AG in the small part, like ag 3: if there is an
inode in ag 3 and a space allocation for that inode comes in, it will
not look for space in ag 3 first. It will still search from af 0 to
af 1, which is the logic reflected in the patch:

[PATCH 4/5] xfs: add infrastructure to support AF allocation algorithm

It says:

+	/* if start_agno is not in current AF range, make it be. */
+	if ((start_agno < start_af) || (start_agno > end_af))
+		start_agno = start_af;

which means the start_agno hinted by the locality principle will not
be used.

In general, the evenly distributed layout is slightly broken, but
only for the last small AG, if you choose the AF layout properly.

> Given that I don't know how an application controls what AF its
> files are located in, I can't really say much more than that.
>

> > With these patches applied, the code logic will be exactly
> > the same as the original code logic, unless you run with the
> > extra mount option. For example:
> >
> >   mount -o af1=1 $dev $mnt
> >
> > That will change the default AF layout:
> >
> > |<--------+ af 0 +--------->|
> > |----------------------------
> > | ag 0 | ag 1 | ag 2 | ag 3 |
> > +----------------------------
> >
> > to:
> >
> > |<-----+ af 0 +----->|<af 1>|
> > |----------------------------
> > | ag 0 | ag 1 | ag 2 | ag 3 |
> > +----------------------------
> >
> > So 'af1=1' here means the start agno of af 1 is one AG away
> > from m_sb.agcount.
>
> Yup, so kinda what we did back in 2006 in a proprietary SGI NAS
> product with "concat groups" to create aggregations of allocation
> groups that all sat on the same physical RAID5 luns in a linear
> concat volume. They were fixed size, because the (dozens of) luns
> were all the same size. This construct was heavily tailored to
> maximising the performance provided by the underlying storage
> hardware architecture, so wasn't really a general policy solution.
>
> To make it work, we also had to change how various other allocation
> distribution algorithms worked (e.g. directory rotoring) so that
> the load was distributed more evenly across the physical hardware
> backing the filesystem address space.
>
> I don't see anything like that in this patch set - there's no actual
> control mechanism to select what AF an inode lands in. How does an
> application or user actually use this reliably to prevent all the
> AFs being fragmented by the workload that is running?

> > 3. Lower AFs will NEVER go to higher AFs for allocation if
> > it can be completed in the current AF.
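To make rule 3 concrete, here is a compilable toy model of the
intended AG walk. To be clear, this is not the actual patch code:
the af_layout struct, the helper names, and the ag_has_space()
callback are all made up for illustration; only the start_agno
clamp mirrors the hunk quoted above.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int xfs_agnumber_t;
#define NULLAGNUMBER	((xfs_agnumber_t)-1)

struct af_layout {
	int		af_count;	/* number of AFs */
	xfs_agnumber_t	af_start[4];	/* first AG of each AF */
	xfs_agnumber_t	af_end[4];	/* last AG of each AF */
};

/* Walk the AFs from lowest to highest, exhausting every AG of the
 * current AF before spilling into the next one -- the rule 3
 * "barrier". (AG wrap-around within an AF is omitted for brevity.) */
static xfs_agnumber_t
af_alloc_ag(const struct af_layout *afl, xfs_agnumber_t start_agno,
	    bool (*ag_has_space)(xfs_agnumber_t))
{
	for (int af = 0; af < afl->af_count; af++) {
		xfs_agnumber_t start_af = afl->af_start[af];
		xfs_agnumber_t end_af = afl->af_end[af];

		/* if start_agno is not in current AF range, make it be */
		if (start_agno < start_af || start_agno > end_af)
			start_agno = start_af;

		for (xfs_agnumber_t agno = start_agno; agno <= end_af; agno++)
			if (ag_has_space(agno))
				return agno;
		/* this AF is exhausted: only now try the next AF */
	}
	return NULLAGNUMBER;
}

/* pretend only ag 3 still has a fitting free extent */
static bool demo_ag_has_space(xfs_agnumber_t agno)
{
	return agno == 3;
}

int main(void)
{
	/* |<-----+ af 0 +----->|<af 1>|
	 * | ag 0 | ag 1 | ag 2 | ag 3 | */
	struct af_layout afl = { 2, { 0, 3 }, { 2, 3 } };

	/* the locality hint says ag 1, but af 0 has no space, so the
	 * walk reaches ag 3 only after af 0 is exhausted */
	printf("chosen ag: %u\n", af_alloc_ag(&afl, 1, demo_ag_has_space));
	return 0;
}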