On Wed, Nov 17, 2010 at 03:49:04PM +0100, Łukasz Oleś wrote:
> Hi,
>
> I'm upgrading xfsprogs from 2.10.1 to the latest 3.1.4 version. I noticed
> that when I'm creating a large lvm volume (2T) the log size is almost 1G;
> in the old version it was 128M.
> I know I can manipulate this value with the -l size option, but I'm
> wondering why this difference is so huge?

Many workloads were demonstrated to have substantially better performance
with larger logs, even on small filesystems. At 2TB, most people are using
RAID of some kind, so larger logs are quite beneficial here.

> On this volume I have one sparse file which is exported by iSCSI Target.
> I have a script which calculates the "seek" value for the dd command for
> me, and now it returns wrong values.
> Can I stay with the old log size or maybe there are some good reasons to
> use the new values?

Staying with the old log size is just fine - it'll behave exactly the same
as it does now.

There are two main things that make a larger log size attractive:

1. log size determines maximum transaction parallelism, so smaller logs
   may limit operational concurrency. A 128MB log typically allows ~250
   concurrent transactions on a 1TB, 4k block size filesystem (i.e. each
   transaction reserves roughly 512KB of log space).

2. larger logs allow the filesystem to soak up larger bursts of metadata
   modifications without needing to write back dirty metadata.

The downside of a larger log is that recovery can take longer after a
crash.

Anyway, if you are having no problems at 128MB, then just use that...

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
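
For reference, a minimal sketch of pinning the log size at mkfs time and
checking it afterwards; the device and mount point below are hypothetical,
substitute your own:

    # force a 128MB internal log instead of the mkfs.xfs default
    mkfs.xfs -l size=128m /dev/vg0/export

    # after mounting, xfs_info reports the log geometry; with 4k blocks a
    # 128MB log shows up as blocks=32768 (32768 * 4096 bytes = 128MB)
    xfs_info /mnt/export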