Re: Question regarding performance on big files.

Mathieu AVILA put forth on 9/20/2010 12:04 PM:
>  Hello XFS team,
> 
> I have run into trouble with XFS, but excuse me if this question has
> been asked dozens of times.
> 
> I am filling a very big file on an XFS filesystem on Linux that sits
> on a software RAID 0.  Performance is very good until I hit 2 "holes"
> during which my write stalls for a few seconds.
> Mkfs parameters:
> mkfs.xfs -b size=4096 -s size=4096 -d agcount=2 -i size=2048
> The RAID 0 is made of 2 SATA disks of 500 GB each.

What happens when you make the filesystem using defaults?

mkfs.xfs /dev/[device]

Not sure if it is related to your issue, but your manual agcount setting
seems really low.  agcount greatly affects parallelism.  With a manual
setting of 2, you're dictating serial read/write stream behavior to/from
each drive.  This is not good.

I have a server with a single 500GB SATA drive carrying two 100GB XFS
data partitions and a 35GB EXT partition for the / filesystem.  Over
half the drive space is unallocated.  Yet each XFS filesystem has 4
default allocation groups.  If I were to create two more 100GB
filesystems, I'd end up with 16 AGs for 400GB worth of XFS
filesystems on a single 500GB drive.

meta-data=/dev/sda6    isize=256    agcount=4, agsize=6103694 blks
         =             sectsz=512   attr=2
data     =             bsize=4096   blocks=24414775, imaxpct=25
         =             sunit=0      swidth=0 blks
naming   =version 2    bsize=4096
log      =internal     bsize=4096   blocks=11921, version=2
         =             sectsz=512   sunit=0 blks, lazy-count=0
realtime =none         extsz=4096   blocks=0, rtextents=0

My suggestion would be to create the filesystem using default values
and see what you get.  Kernel 2.6.18 is rather old, and I don't know
whether the mkfs.xfs of that era picks up the mdraid configuration and
uses that info accordingly.  Newer versions of xfsprogs do this
automatically and correctly, so you don't need to specify anything
manually with mkfs.xfs.
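
A quick way to check (a sketch, assuming your array is /dev/md0 and a
scratch mount point of /mnt) is to make the filesystem with defaults
and look at the sunit/swidth values it reports:

mkfs.xfs /dev/md0                        # prints the geometry it chose
mount /dev/md0 /mnt
xfs_info /mnt | grep -E 'sunit|swidth'   # "0 blks" here means the md
                                         # stripe wasn't detected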

If default mkfs values still yield problems, remake the filesystem
with the stripe geometry given explicitly, e.g. '-d su=<md chunk
size>,sw=2', and retest.  (sw is a multiplier of the stripe unit, so
mkfs.xfs wants su given along with it.)
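
For example, assuming the array was built with mdadm's old default
64k chunk size (check /proc/mdstat or 'mdadm --detail /dev/md0' for
the real value):

mkfs.xfs -d su=64k,sw=2 /dev/md0

su should equal the md chunk size, and sw the number of data disks,
which is 2 for your two-disk RAID0.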

You specified '-b size=4096'.  This is the default block size, so
there's no need to specify it.

You specified '-s size=4096'.  This should match the logical sector
size of the underlying physical disks, which is 512 bytes in your
case and is what the default gives you.  This may be part of your
problem as well.
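
To confirm the sector size before re-running mkfs (assuming /dev/sda
is one of the md member disks):

blockdev --getss /dev/sda    # logical sector size; should print 512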

You specified '-d agcount=2'.  From man mkfs.xfs:

"The data section of the filesystem is divided into _value_ allocation
groups (default value is scaled automatically based on the underlying
device size)."

My guess is that mkfs.xfs with no manual agcount forced would yield
something like 32-40 allocation groups on your 1TB RAID0 XFS
filesystem.  Theoretically, this should boost your performance 16-20
times over your current setting of 2 allocation groups.  In reality
the boost won't be nearly that great, but your performance should be
greatly improved nonetheless.
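
Once you've remade the filesystem, you can verify what mkfs actually
chose (assuming it's mounted at /mnt; older xfsprogs wants a mount
point rather than the device):

xfs_info /mnt | grep agcount   # agcount= appears on the first line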

-- 
Stan


_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

