Re: [PATCH 0/5] *** Introduce new space allocation algorithm ***

On Mon, Nov 04, 2024 at 09:44:34AM +0800, zhangshida wrote:
> From: Shida Zhang <zhangshida@xxxxxxxxxx>
> 
> Hi all,
> 
> Recently, we've been encountering xfs problems from our two
> major users continuously.
> They all manifest as the same phenomenon: an xfs
> filesystem can't create a new file when nearly half of the
> space is still available, even with sparse inodes enabled.

What application is causing this, and does using extent size hints
make the problem go away?

Also, xfs_info and xfs_spaceman free space histograms would be
useful information.
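
For reference, those can be gathered with:

  xfs_info <mntpt>
  xfs_spaceman -c "freesp" <mntpt>

and an extent size hint can be set on the parent directory (so that
new files inherit it) with something like:

  xfs_io -c "extsize 1m" <dir>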

> It turns out that the filesystem is too fragmented to have
> enough contiguous free space to create a new file.

> Life still has to go on.
> But from our users' perspective, worse than xfs being
> hard to use is xfs being impossible to use, since now
> even a single file can't be created.
> 
> So we try to introduce a new space allocation algorithm to
> solve this.
> 
> To achieve that, we try to propose a new concept:
>    Allocation Fields, whose name is borrowed from the
> mathematical concepts (Groups, Rings, Fields), will be

I have no idea what this means. We don't have rings or fields,
and an allocation group is simply a linear address space range.
Please explain this concept (pointers to definitions and
algorithms appreciated!).


> abbreviated as AF in the rest of this article.
> 
> What is an AF?
> A one-picture-says-it-all explanation:
> 
> |<--------+ af 0 +-------->|<--+ af 1 +-->| af 2|
> +------------------------------------------------+
> | ag 0 | ag 1 | ag 2 | ag 3 | ag 4 | ag 5 | ag 6 |
> +------------------------------------------------+
> 
> A text-based definition of AF:
> 1. An AF is an in-core-only concept, in contrast to the
>    on-disk AG concept.
> 2. An AF consists of a contiguous series of AGs.
> 3. A lower AF will NEVER spill into a higher AF for an
>    allocation if the allocation can be completed in the
>    current AF.
> 
> Rule 3 can serve as a barrier between the AFs to slow down
> the runaway spread of fragmented free space.

To a point, yes. But it's not really a reliable solution, because
directories are rotored across all AGs. Hence if the workload is
running across multiple AGs, then all of the AFs can be fragmenting
at the same time.

Given that I don't know how an application controls what AF its
files are located in, I can't really say much more than that.
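
As far as I can tell, rule 3 amounts to an AG iteration order along
the lines of the sketch below. This is only my reading of the
description - all of the af_* helpers and the m_num_afs field are
hypothetical, not code from the patch set:

/*
 * Sketch of rule 3: exhaust every AG in the current AF before
 * spilling into a higher AF. Everything here apart from the
 * standard xfs types is hypothetical.
 */
static int
af_ordered_alloc(struct xfs_mount *mp, struct xfs_alloc_arg *args)
{
        int             af;
        xfs_agnumber_t  agno;

        for (af = 0; af < mp->m_num_afs; af++) {
                for (agno = af_first_agno(mp, af);
                     agno <= af_last_agno(mp, af); agno++) {
                        if (af_try_ag_alloc(mp, agno, args) == 0)
                                return 0;       /* satisfied in this AF */
                }
                /* Rule 3: only move to the next AF on failure. */
        }
        return -ENOSPC;
}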

> With these patches applied, the code logic will be exactly
> the same as the original code logic, unless you run with the
> extra mount option. For example:
>    mount -o af1=1 $dev $mnt
> 
> That will change the default AF layout:
> 
> |<--------+ af 0 +--------->|
> +---------------------------+
> | ag 0 | ag 1 | ag 2 | ag 3 |
> +---------------------------+
> 
> to :
> 
> |<-----+ af 0 +----->|<af 1>|
> +---------------------------+
> | ag 0 | ag 1 | ag 2 | ag 3 |
> +---------------------------+
> 
> So the 'af1=1' here means the start agno of AF 1 is one AG
> away from m_sb.agcount.
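
(If I'm reading that right: with the four AG example, af1=1 puts
AF 1's start agno at sb_agcount - 1 = 3, so AF 1 is just ag 3 and
AF 0 covers ags 0-2.)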

Yup, so kinda what we did back in 2006 in a proprietary SGI NAS
product with "concat groups" to create aggregations of allocation
groups that all sat on the same physical RAID5 LUNs in a linear
concat volume. They were fixed size, because the (dozens of) LUNs
were all the same size. This construct was heavily tailored to
maximising the performance provided by the underlying storage
hardware architecture, so it wasn't really a general policy solution.

To make it work, we also had to change how various other allocation
distribution algorithms worked (e.g. directory rotoring) so that
the load was distributed more evenly across the physical hardware
backing the filesystem address space.

I don't see anything like that in this patch set - there's no actual
control mechanism to select what AF an inode lands in. How does an
application or user actually use this reliably to prevent all the
AFs being fragmented by the workload that is running?

> We did some tests to verify it. You can verify it yourself
> by running the following commands:
> 
> 1. Create a 1G-sized img file and format it as xfs:
>   dd if=/dev/zero of=test.img bs=1M count=1024
>   mkfs.xfs -f test.img
>   sync
> 2. Make a mount directory:
>   mkdir mnt
> 3. Run the auto_frag.sh script, which will call another
>   script, frag.sh. These scripts are attached to this mail.
>   To enable the AF, run:
>     ./auto_frag.sh 1
>   To disable the AF, run:
>     ./auto_frag.sh 0
> 
> Please feel free to communicate with us if you have any thoughts
> about these problems.
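
For anyone reproducing this by hand: the scripts presumably loop
mount the image along these lines, af1 being the mount option this
patch set adds:

  mount -o loop test.img mnt             # stock allocator
  mount -o loop,af1=1 test.img mnt       # AF allocator enabled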

We already have inode/metadata preferred allocation groups that
are avoided for data allocation if at all possible. This is how we
keep space free below 1TB for inodes when the inode32 allocator has
been selected. See xfs_perag_prefers_metadata().
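
The data side of that policy boils down to something like this
simplified sketch (not the actual kernel code, and the space_is_low
input is a stand-in for the real low space heuristics):

/*
 * Simplified sketch: skip metadata-preferred AGs for data
 * allocation unless we are running out of space elsewhere.
 */
static bool
ag_ok_for_data(struct xfs_perag *pag, bool space_is_low)
{
        if (xfs_perag_prefers_metadata(pag) && !space_is_low)
                return false;
        return true;
}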

Perhaps being able to control this preference from userspace (e.g.
via xfs_spaceman commands through ioctls and/or sysfs knobs) would
achieve the same results with a minimum of code and/or policy
changes. i.e. if AG0 is preferred for metadata rather than data,
we won't allocate data in it until all higher AGs are largely full.
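
A purely hypothetical interface for that - no such knob exists
today - might look like:

  # mark ag 0 as metadata-preferred (hypothetical sysfs path)
  echo 1 > /sys/fs/xfs/sda1/ag/0/prefer_metadata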

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx



