Re: Any way to slow down fragmentation ?

Well .. it seems I missed the most important part of the FAQ, thanks for pointing it out. As you suggested, playing with xfs_bmap shows that the 13TB file is heavily fragmented; xfs_fsr is now working on it.
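
For the record, this is roughly what I am doing (the path is just an example, not the real file name):

xfs_bmap /vrepo1/backup.img | tail -n +2 | wc -l   # count mapping records (extents + holes)
xfs_fsr -v /vrepo1/backup.img                      # defragment just that one file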

Any hints about sector size? Given the workload, my feeling is that using 4k sectors could not hurt here.
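
For example, would a re-format along these lines be the way to do it? (purely hypothetical for now, since mkfs.xfs destroys the filesystem):

mkfs.xfs -s size=4096 -b size=4096 /dev/VG2/LV1   # 4k sectors, 4k blocks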

Thanks,

Cédric

-- 
Cédric Lemarchand
IT Infrastructure Manager
iXBlue
34 rue de la Croix de Fer
78100 St Germain en Laye
France
Tel. +33 1 30 08 88 88
Mob. +33 6 37 23 40 93
Fax +33 1 30 08 88 00

On 14 Oct 2015, at 00:04, Eric Sandeen <sandeen@xxxxxxxxxxx> wrote:



On 10/13/15 4:54 PM, Cédric Lemarchand wrote:
I think I actually have very bad fragmentation values, which
unfortunately cause a 3x-4x performance drop. A defrag is currently
running, but it is really, really slow, to the point that I would need
to defrag the partition constantly, which is not optimal. There are
approximately 500GB written sequentially every day, and almost 10-12TB
of random writes every week due to backup file rotations.

Does anything besides the xfs_db "frag" command make you think that
fragmentation is a problem?  See below...

The partition was formatted with default options, over LVM (one
VG / one LV).

Here are some questions:

- are there mkfs.xfs or mount options that could reduce fragmentation
over time?
- the backup software writes in blocks of ~4MB; as with the previous
question, are there options to optimize the different layers (LVM & XFS)?
The underlying FS can handle a 1MB block size; should I set this value for
XFS too? Do I need to play with "su" and "sw" as stated in the FAQ?
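
For example, something along these lines? (the values below are wild guesses, just to illustrate what I mean):

mkfs.xfs -d su=1m,sw=8 /dev/VG2/LV1          # align to a hypothetical 1MB x 8 stripe
xfs_io -c "extsize 1g" /vrepo1               # extent size hint on the directory, inherited by new files
mount -o allocsize=1g /dev/VG2/LV1 /vrepo1   # larger speculative preallocation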

I admit that there are so many options that I am a bit lost.

Thanks,

Cédric

--
Some information: the VM runs Debian Jessie; the underlying storage is
software RAID (ZFS).


df -k
Filesystem            1K-blocks        Used   Available Use% Mounted on
/dev/mapper/VG2-LV1 53685000192 40921853928 12763146264  77% /vrepo1

xfs_db -r /dev/VG2/LV1 -c frag
actual 4222, ideal 137, fragmentation factor 96.76%

http://xfs.org/index.php/XFS_FAQ#Q:_The_xfs_db_.22frag.22_command_says_I.27m_over_50.25._Is_that_bad.3F
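
(The FAQ's factor is just (actual - ideal) / actual, i.e. (4222 - 137) / 4222 = 96.76% here; it says nothing about how big the extents actually are.)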

So in 137 files, you have 4222 extents, or an average of
about 30 extents per file.

Or put another way, you have 39026 gigabytes used, in
4222 extents, for an average of 9 gigabytes per extent.

Those don't sound like problematic numbers.

xfs_bmap on an individual file will show you its mapping.
But for files of several hundred gigs, having several
very large extents really is not a problem.
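
For example (the path is made up), this prints one line per extent, with each extent's block range and size:

xfs_bmap -v /vrepo1/some_backup_file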

I think the xfs_db frag command may be misleading you about
where the problem lies.

Of course it's possible that all but one of your files is
well laid out, and that last file is horribly, horribly
fragmented.  But the top-level numbers don't tell us whether
that might be the case.
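
If you want to check, something like this (just a sketch, using the mount point from your df output) lists the files with the most mapping records, worst first:

find /vrepo1 -type f -exec sh -c \
  'echo "$(xfs_bmap "$1" | tail -n +2 | wc -l) $1"' sh {} \; |
  sort -rn | head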

-Eric

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

