Hello,
Thank you for your quick answer.
I have run my test again with default parameters for mkfs.
I still have the same issue: for 20 seconds, the writes are either
stalled or very slow.
I have run "vmstat" at the same time as "dd", and it appears that the
block device continues to receive write requests while "dd" is blocked
in the kernel.
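For reference, this is roughly what I ran (the mount point and sizes
are just examples of my setup):

    vmstat 1 > vmstat.log &
    dd if=/dev/zero of=/mnt/raid0/bigfile bs=1M count=500000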
With blktrace, I can see that during these stalls the block device
receives a lot of small write requests spread across the volume, from
the start of the device up to the point where the file write stopped.
Outside of these stalls, the volume is written normally, starting at
offset 0 and filling the disk sequentially.
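In case it is useful, the trace was captured with something like this
(the md device name is just an example):

    blktrace -d /dev/md0 -o trace &
    blkparse -i trace > trace.txt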
Could this be an effect of tree rebalancing for extent management
(both the big file's inode extent tree and the free space trees)?
Could it be a hardware problem? Have you ever seen this issue before?
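If it helps, I can also dump the extent layout of the big file once
the write is done, for instance with (the path is just an example):

    xfs_bmap -v /mnt/raid0/bigfile | wc -l

to see how many extents it ends up with.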
--
Mathieu Avila
On 20/09/2010 21:48, Stan Hoeppner wrote:
Mathieu AVILA put forth on 9/20/2010 12:04 PM:
Hello XFS team,
I have run into trouble with XFS; excuse me if this question has
been asked a dozen times already.
I am filling a very big file on an XFS filesystem on Linux that sits
on a software RAID 0. Performance is very good, except for 2 "holes"
during which my write stalls for a few seconds.
Mkfs parameters:
mkfs.xfs -b size=4096 -s size=4096 -d agcount=2 -i size=2048
The RAID0 is made of 2 SATA disks of 500 GB each.
What happens when you make the filesystem using defaults?
mkfs.xfs /dev/[device]
Not sure if it is related to your issue, but your manual agcount setting
seems really low. agcount greatly affects parallelism. With a manual
setting of 2, you're dictating serial read/write stream behavior to/from
each drive. This is not good.
I have a server with a single 500GB SATA drive with two XFS filesystem
partitions for data, each of 100GB, and a 35GB EXT partition for the /
filesystem. Over half the drive space is unallocated. Yet each XFS
filesystem has 4 default allocation groups. If I were to create two
more 100GB filesystems, I'd end up with 16 AGs for 400GB worth of XFS
filesystems on a single 500GB drive.
meta-data=/dev/sda6              isize=256    agcount=4, agsize=6103694 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=24414775, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=11921, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
My suggestion would be to create the filesystem using default values and
see what you get. 2.6.18 is rather old, and I don't know if XFS picks
up the mdraid config and uses that info accordingly. Newer versions of
XFS do this automatically and correctly, so you don't need to manually
specify anything with mkfs.xfs.
If default mkfs values still yield issues/problems, remake the
filesystem specifying '-d sw=2' and retest.
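Something along these lines, assuming your md device is /dev/md0 and
was created with the default 64k chunk size (adjust su to your actual
chunk size):

    mkfs.xfs -d su=64k,sw=2 /dev/md0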
You specified '-b size=4096'. This is the default for block size so
there's no need to specify it.
You specified '-s size=4096'. This needs to match the sector size of
the underlying physical disk, which is 512 bytes in your case. This may
be part of your problem as well.
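You can confirm the logical sector size of the underlying drives with
something like (substitute your actual disk devices):

    blockdev --getss /dev/sda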
You specified '-d agcount=2'. From man mkfs.xfs:
"The data section of the filesystem is divided into _value_ allocation
groups (default value is scaled automatically based on the underlying
device size)."
My guess is that mkfs.xfs with no manual agcount forced would yield
something like 32-40 allocation groups on your RAID0 1TB XFS
filesystem. Theoretically, this should boost your performance 16-20
times over your current agcount setting of 2 allocation groups. In
reality the boost won't be nearly that great, but your performance
should be greatly improved nonetheless.
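You can check what mkfs.xfs would pick without writing anything by
doing a dry run, for example (device name assumed):

    mkfs.xfs -N /dev/md0

and looking at the agcount value it reports.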
--
Mathieu Avila
IT & Integration Engineer
mathieu.avila@xxxxxxxxxxxxxxxx
OpenCube Technologies http://www.opencubetech.com
Parc Technologique du Canal, 9 avenue de l'Europe
31520 Ramonville St Agne - FRANCE
Tel. : +33 (0) 561 285 606 - Fax : +33 (0) 561 285 635
_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs