On Tue, Nov 03, 2015 at 01:18:53PM +0100, Michael Weissenbacher wrote:
> Hi!
> I have an XFS file system which lies on a 10-disk RAID-6 device that
> was created with a chunk size of 1MiB.
> At mkfs.xfs time this was - as far as I know - specified with "-d
> su=1m,sw=8".
>
> xfs_info shows the following:
> meta-data=/dev/sdb1              isize=256    agcount=15, agsize=268435200 blks
>          =                       sectsz=512   attr=2
> data     =                       bsize=4096   blocks=3905945088, imaxpct=5
>          =                       sunit=256    swidth=2048 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=521728, version=2
>          =                       sectsz=512   sunit=8 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
> Interestingly, the sunit value of the log seems to be incorrect - it
> should be 256 too, like the sunit value of the data. I am pretty sure
> the reason is that the log sunit cannot be 256 blks (=1024KiB), so
> mkfs.xfs fell back to the default of 8 blks (=32KiB). I found
> evidence of this in the following thread:
> http://oss.sgi.com/archives/xfs/2012-06/msg00431.html
>
> What I want to achieve is to set the log sunit to the maximum
> possible of 64 blks (=256KiB).
>
> - Is that even possible without doing mkfs.xfs (and losing all data)?
> - Would it be an improvement performance-wise?
> - Would changing to an external log help?
>

I don't believe there's any supported way to do this. Out of curiosity,
I just tried an experiment: modify the superblock logsunit via xfs_db
and run repair to zero the log. That seemed to take effect on the
subsequent mount, but it's certainly not something I would suggest
doing in production.

Note that mkfs aligns the physical log based on the stripe unit as
well, so it wouldn't really have the same effect anyway.

Brian

> tia,
> Michael
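
For the archives, a rough sketch of the unsupported experiment
described above. The device name is just an example, the filesystem
must be unmounted, and this rewrites the log - emphatically not for
production use:

  # 64 blks * 4KiB = 256KiB = 262144; the superblock stores logsunit in bytes
  xfs_db -x -c 'sb 0' -c 'write logsunit 262144' /dev/sdb1

  # repair rewrites (zeroes) the log, so the new stripe unit takes
  # effect on the next mount
  xfs_repair /dev/sdb1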
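
If recreating the filesystem is an option, the log stripe unit (or an
external log) can instead be requested explicitly at mkfs time; again,
device names here are examples only:

  # internal log with the maximum 256KiB log stripe unit
  mkfs.xfs -d su=1m,sw=8 -l su=256k /dev/sdb1

  # or place the log on a separate device entirely
  mkfs.xfs -d su=1m,sw=8 -l logdev=/dev/sdc1,size=512m /dev/sdb1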