Question regarding performance on big files.

Hello XFS team,

I have run into trouble with XFS; excuse me if this question has been asked a dozen times already.

I am filling a very big file on an XFS filesystem on Linux that sits on top of a software RAID 0. Performance is very good, except that I hit two "holes" during which my write stalls for several seconds.
Mkfs parameters:
mkfs.xfs -b size=4096 -s size=4096 -d agcount=2 -i size=2048
The RAID 0 is made of two 500 GB SATA disks.
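For reference, the resulting geometry can be dumped after mkfs (a sketch; /dev/md0 stands in here for the actual RAID 0 device, which may be named differently):

xfs_info /DATA                          # mounted filesystem: AG count/size, block and sector sizes
xfs_db -r -c "sb 0" -c print /dev/md0   # same superblock fields, read-only, from the raw device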

My test is just running "dd" with 8 MB blocks:
dd if=/dev/zero of=/DATA/big bs=8M
(/DATA is the XFS file system)
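One way to timestamp the stalls is to run the write in the background and watch per-second device throughput (a sketch using sysstat's iostat; the stalls should show up as the write rate dropping for a few seconds):

dd if=/dev/zero of=/DATA/big bs=8M &
iostat -x 1        # watch the write-throughput column for the md and sd devices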

The system is basically RHEL 5 with a 2.6.18 kernel and XFS packages from CentOS.

The problem happens twice: once around 210 GB and again around 688 GB (the performance hole and the response time are bigger the second time, around 20 seconds).

Do you have any clue? Do my mkfs parameters make sense? The goal here is really to have something that can store big files at a constant throughput -- the test reproduces that workload on purpose.

--
Mathieu Avila
IT & Integration Engineer
mathieu.avila@xxxxxxxxxxxxxxxx

OpenCube Technologies http://www.opencubetech.com
Parc Technologique du Canal, 9 avenue de l'Europe
31520 Ramonville St Agne - FRANCE
Tel. : +33 (0) 561 285 606 - Fax : +33 (0) 561 285 635
_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
