On Friday 2011-03-11 02:10, Dave Chinner wrote:

>On Thu, Mar 10, 2011 at 03:14:34PM +0100, Jan Engelhardt wrote:
>>
>> Was there something I missed?
>>
>> # xfs_info /
>> meta-data=/dev/md3               isize=256    agcount=32, agsize=11429117 blks
>>          =                       sectsz=512   attr=2
>> data     =                       bsize=4096   blocks=365731739, imaxpct=5
>>          =                       sunit=0      swidth=0 blks
>> naming   =version 2              bsize=4096   ascii-ci=0
>> log      =internal               bsize=4096   blocks=32768, version=2
>>          =                       sectsz=512   sunit=0 blks, lazy-count=0
>                                                              ^^^^^^^^^^^^
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
>You're using an old mkfs?

As mentioned, this is a preexisting fs; it was created in August 2009
with xfsprogs 2.10.1.

>At minimum, this should have lazy-count=1.
>I'm also wondering about the fact this is a MD device but there is
>no sunit/swidth set,
>Further - what is your storage configuration (e.g. what type of MD
>raid are you using) and is the filesystem correctly aligned to the
>storage? If you get these wrong, then nothing else you do will
>improve performance.

mdraid1 over two dumb SATA disks.

>and the agcount of 32 is not a default value,

Right. xfsprogs had just switched its default from agcount=16 to
agcount=4, which at the time seemed a little unsettling, given that
disks keep growing in size. As far as I can recall, I set agcount=32
manually (1.5T/32 = 46G) because an xfs I had created earlier (around
2007) ended up with 16 (250G/16 = 15G).

>What are your mount options - perhaps you've missed the fact that
>the new functionality requires the "delaylog" mount option to be
>added.

Per /proc/mounts:

/dev/md3 / xfs rw,relatime,attr2,nobarrier,noquota 0 0

>Mind you, that is not a magic bullet - if the operation is
>single threaded and CPU bound, delaylog makes no difference to
>performance, and with lazy-count=0 then the superblock will still be
>a major contention point and probably nullify any improvement
>delaylog could provide..

The question is: is the writeout single-threaded? Judging from there
being only one xfs thread per block device, that would seem to hold.
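
For the record, the two concrete changes suggested so far would, as far
as I understand, look roughly like this (untested sketch; xfs_admin
needs an xfsprogs new enough to have the -c switch, the filesystem has
to be unmounted for that -- so for a root fs, from a rescue system --
and delaylog needs a kernel that knows the option):

# switch on lazy superblock counters on the unmounted fs
xfs_admin -c 1 /dev/md3

# add delaylog to the mount options, e.g. in /etc/fstab
/dev/md3  /  xfs  rw,relatime,attr2,nobarrier,delaylog,noquota  0  0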