On Fri, Sep 10, 2010 at 10:58:22AM +1200, Richard Scobie wrote:
> Using the latest, stable versions of LVM2 and xfsprogs and the
> 2.6.35.4 kernel, I am setting up lvm on a 16 drive, 256k chunk md
> RAID6, which has been used to date with XFS directly on the RAID.
>
> mkfs.xfs directly on the RAID gives:
>
> meta-data=/dev/md8               isize=256    agcount=32, agsize=106814656 blks
>          =                       sectsz=4096  attr=2
> data     =                       bsize=4096   blocks=3418068864, imaxpct=5
>          =                       sunit=64     swidth=896 blks
> naming   =version 2              bsize=4096   ascii-ci=0
>
> which gives the correct sunit and swidth values for the array.
>
> Creating an lv which uses the entire array and mkfs.xfs on that, gives:
>
> meta-data=/dev/vg_local/Storage  isize=256    agcount=13, agsize=268435455 blks
>          =                       sectsz=512   attr=2
> data     =                       bsize=4096   blocks=3418067968, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0

Hmmm - it's treating MD very differently to the LVM volume - different
numbers of AGs, different sunit/swidth.

Did you build xfsprogs yourself? Is it linked against libblkid or
libdisk?

Or it might be that LVM is not exporting the characteristics of the
underlying volume. Can you check whether different parameter values are
exported by the two devices in /sys/block/<dev>/queue?

> Limited testing using dd and bonnie++ shows no difference in write
> performance whether I use sunit=64/swidth=896 or sunit=0/swidth=0 on
> the lv.

These benchmarks won't really show any difference on an empty
filesystem. The alignment will have an impact on how the filesystem ages
and on how well aligned the IO is to the underlying device under more
complex workloads...

Cheers,

Dave.

-- 
Dave Chinner
david@xxxxxxxxxxxxx
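
For illustration, one way to make the comparison Dave suggests is to read
the io size hints the two block devices export under /sys/block/<dev>/queue,
since a libblkid-based mkfs.xfs derives sunit/swidth from those topology
values. A minimal sketch, assuming the md device is md8 as in the post and
the LV happens to appear as dm-0 (the actual dm-N name can be checked with
"dmsetup info -c"):

    # geometry exported by the raw MD array
    # with a 256k chunk this should show 262144,
    # and 3670016 (14 data disks x 256k) for optimal_io_size
    cat /sys/block/md8/queue/minimum_io_size
    cat /sys/block/md8/queue/optimal_io_size

    # geometry exported by the LV stacked on top of it
    cat /sys/block/dm-0/queue/minimum_io_size
    cat /sys/block/dm-0/queue/optimal_io_size

If the dm device reports zeros, mkfs.xfs has nothing to detect and the
alignment can be given by hand; for the geometry quoted above (256k chunk,
16-drive RAID6, so 14 data disks) that would look something like:

    mkfs.xfs -d su=256k,sw=14 /dev/vg_local/Storage

which yields the same sunit=64/swidth=896 (in 4k blocks) that mkfs.xfs
detected on the bare md device.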