Richard Scobie put forth on 9/10/2010 4:42 PM:
> In the future this lv will be grown in multiples of 256K chunk, 16
> drive RAID6 arrays, so am I correct in thinking that the sunit/swidth
> parameter can stay the same as it is expanded?

What is the reasoning behind adding so many terabytes under a single
filesystem?  Do you _need_ all of it under a single mount point?  If not,
or even if you do, for many reasons it may very well be better to put a
single filesystem directly on each RAID6 array, without LVM in the middle,
and simply mount each filesystem at a different point, say:

/data
/data/array1
/data/array2
/data/array3
/data/array4

This method can minimize damage and downtime when an entire array is
knocked offline.  We had a post just yesterday where a SATA cable was
kicked loose and took down 5 drives of a 15-drive md RAID6 set, killing
the entire filesystem.  If that OP had set up 3 x 5-drive arrays with 3
filesystems, the system could have continued to run in a degraded fashion,
depending on his application data layout across the filesystems.  Done
properly, you lose an app or two, not all of them.

This method also eliminates xfs_growfs performance issues such as the one
you're describing, because you never change the filesystem layout when
adding new arrays to the system.

In summary, every layer of complexity added to the storage stack increases
the probability of failure.  As my grandmother was fond of saying, "Don't
put all of your eggs in one basket."  It was salient advice on the farm 80
years ago, and it's salient advice today with high technology.

-- 
Stan
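
As a rough sketch of the per-array layout described above (the device
names and mount points are illustrative assumptions; the 16-drive,
256KiB-chunk RAID6 geometry is taken from the quoted question), each
array would get its own aligned XFS filesystem and its own mount point:

    # one md RAID6 array per set of 16 drives, 256KiB chunk
    mdadm --create /dev/md1 --level=6 --raid-devices=16 --chunk=256 /dev/sd[b-q]

    # XFS aligned to the array geometry: su = chunk size,
    # sw = number of data disks (16 - 2 parity = 14)
    mkfs.xfs -d su=256k,sw=14 /dev/md1

    # mount each array at its own point under /data
    mkdir -p /data/array1
    mount /dev/md1 /data/array1

Adding capacity then means building /dev/md2 the same way and mounting it
at /data/array2; the existing filesystems keep their sunit/swidth exactly
as created and there is no xfs_growfs step at all.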