Re: XFS unlink still slow on 3.1.9 kernel ?

Hi Dave, hi list,

first thanks for the very detailed reply. Please find below my comments
and questions.

On 02/14/2012 01:09 AM, Dave Chinner wrote:
> On Mon, Feb 13, 2012 at 05:57:58PM +0100, Richard Ems wrote:
>> I am running openSUSE 12.1, kernel 3.1.9-1.4-default. The 20 TB XFS
>> partition is 100% full
> 
> Running filesystems to 100% full is always a bad idea - it causes
> significant increases in fragmentation of both data and metadata
> compared to a filesystem that doesn't get past ~90% full.

Yes, true, I know. But I have no other free space for these backups. I
am waiting for a new, already ordered system that will have 4 times this
space. Later I will open a new thread asking whether my plans for
creating that new 80 TB XFS partition are sound.



>> I am asking because I am seeing very long times while removing big
>> directory trees. I thought on kernels above 3.0 removing dirs and files
>> had improved a lot, but I don't see that improvement.
> 
> You won't if the directory traversal is seek bound and that is the
> limiting factor for performance.

*Seek bound*? *When* is the directory traversal *seek bound*?
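(If I should verify that on my side: I assume watching the device with
iostat while the rm runs would show it - near 100% utilization combined
with small request sizes and low MB/s would mean the time goes into
seeks, not data transfer. The device name below is just a placeholder.)

  # extended per-device statistics, refreshed every second
  iostat -x 1 /dev/sdX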


>> This is a backup system running dirvish, so most files in the dirs I am
>> removing are hard links. Almost all of the files do have ACLs set.
> 
> The unlink will have an extra IO to read per inode - the out-of-line
> attribute block - so you've just added 11 million IOs to the 800,000
> the traversal already takes to the unlink overhead. So it's going to
> take roughly ten hours because the unlink is going to be read IO seek
> bound....

It took 110 minutes, not 10 hours. All files and dirs there had ACLs set.
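(Back-of-the-envelope, assuming I read your numbers right: 11,000,000 +
800,000 is roughly 11.8 million read IOs, and at a typical ~3 ms per
random read that is about 35,000 seconds, i.e. close to your 10 hour
estimate. 110 minutes is about 6,600 seconds, or roughly 0.5 ms per IO,
so a large share of those reads must have been served from cache or
merged into larger requests.)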


> Christoph's suggestion to use larger inodes to keep the attribute
> data inline is a very good one - whenever you have a workload that
> is attribute heavy you should use larger inodes to try to keep the
> attributes in-line if possible. The downside is that increasing the
> inode size increases the amount of IO required to read/write inodes,
> though this typically isn't a huge penalty compared to the penalty
> of out-of-line attributes.

I will always use larger inodes from now on, since we use ACLs
extensively on our XFS partitions.
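For the new filesystems I suppose that means something along these
lines (the 512-byte inode size and the device are only an illustration,
not a tested recommendation):

  # 512-byte inodes leave more room to keep ACLs in-line in the inode
  mkfs.xfs -i size=512 /dev/sdX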


> Also, for large directories like this (millions of entries) you
> should also consider using a larger directory block size (mkfs -n
> size=xxxx option) as that can be scaled independently to the
> filesystem block size. This will significantly decrease the amount
> of IO and fragmentation large directories cause. Peak modification
> performance of small directories will be reduced because larger
> block size directories consume more CPU to process, but for large
> directories performance will be significantly better as they will
> spend much less time waiting for IO.

This was not ONE directory with that many files, but a directory tree
containing 834591 subdirectories (deeply nested, not all in the same
dir!) and 10539154 files.
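I will keep the directory block size in mind for the new 80 TB
filesystem as well; if I understand the option correctly it would look
roughly like this (the 8 KB value and the device are placeholders, not
something I have tested):

  # directory block size larger than the filesystem block size
  mkfs.xfs -n size=8192 /dev/sdX

xfs_info on the mounted filesystem should then report bsize=8192 in the
naming section.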

Many thanks,
Richard


-- 
Richard Ems       mail: Richard.Ems@xxxxxxxxxxxxxxxxx

Cape Horn Engineering S.L.
C/ Dr. J.J. Dómine 1, 5º piso
46011 Valencia
Tel : +34 96 3242923 / Fax 924
http://www.cape-horn-eng.com

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


