Using the latest stable versions of LVM2 and xfsprogs and the 2.6.35.4
kernel, I am setting up LVM on a 16-drive, 256k-chunk md RAID6, which
has until now had XFS directly on the RAID.
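For reference, the LVM layer on top of the md device was set up roughly
like this (exact flags approximate; the LV takes the whole array):

  pvcreate /dev/md8
  vgcreate vg_local /dev/md8
  lvcreate -l 100%FREE -n Storage vg_local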
mkfs.xfs directly on the RAID gives:
meta-data=/dev/md8               isize=256    agcount=32, agsize=106814656 blks
         =                       sectsz=4096  attr=2
data     =                       bsize=4096   blocks=3418068864, imaxpct=5
         =                       sunit=64     swidth=896 blks
naming   =version 2              bsize=4096   ascii-ci=0
which gives the correct sunit and swidth values for the array.
Creating an LV that uses the entire array and running mkfs.xfs on it gives:
meta-data=/dev/vg_local/Storage  isize=256    agcount=13, agsize=268435455 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=3418067968, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
Limited testing with dd and bonnie++ shows no difference in write
performance whether I use sunit=64/swidth=896 or sunit=0/swidth=0 on the LV.
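(The dd runs were just large sequential writes, along these lines; the
mount point and sizes below are only illustrative, not the exact commands:

  # illustrative only - path and sizes are placeholders
  dd if=/dev/zero of=/mnt/storage/ddtest bs=1M count=32768 oflag=direct
  bonnie++ -d /mnt/storage -s 32g -u root
)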
My gut reaction is that I should be using 64/896, but maybe mkfs.xfs
knows better?
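If I do force it, I assume it would be something like this at mkfs time
(256k chunk, 16 drives minus 2 parity = 14 data disks):

  # explicit stripe geometry; equivalent to sunit=64/swidth=896 in 4k blocks
  mkfs.xfs -d su=256k,sw=14 /dev/vg_local/Storage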
Regards,
Richard