Re: xfs_growfs / planned resize / performance impact

On 8/5/2012 8:49 AM, Martin Steigerwald wrote:
> Am Sonntag, 5. August 2012 schrieb Stan Hoeppner:
>> On 8/5/2012 6:03 AM, Martin Steigerwald wrote:
>>> Well, the default was 16 AGs for volumes < 2 TiB, AFAIR, and it was
>>> reduced to 4 for, as I remember, exactly performance reasons. Too
>>> many AGs on a single device can incur too much parallelism. That's
>>> at least what I understood back then.
>>
>> For striped md/RAID or LVM volumes mkfs.xfs will create 16 AGs by
>> default because it reads the configuration and finds a striped volume.
>> The theory here is that more AGs offers better performance in the
>> average case on a striped volume.
>>
>> With hardware RAID or a single drive, or any storage configuration for
>> which mkfs.xfs is unable to query the parameters, mkfs.xfs creates 4
>> AGs by default.  The 4 AG default has been with us for a very long
>> time.  It was never reduced.
> 
> That does not match my memory, but I'd have to look it up. Maybe next
> week.
> 
> I am pretty sure mkfs.xfs on a single partition on a single hard disk up
> to 2 TiB used 16 AGs for quite some time, and has now used 4 AGs for
> quite some time. I think I have noted the exact xfsprogs version where it
> was changed in my training slides.

From 'man mkfs.xfs' of xfsprogs 3.1.4 (probably not the latest):

"The data section of the filesystem is divided into _value_ allocation
groups (default value is scaled automatically based on the underlying
device size)."

It's not stated in the man page, but the minimum is 4 AGs, unless that
has changed in the last couple of years.  This is what I was referring
to previously when I stated that 4 AGs is the default.

What you likely did was format a 2TB device and see 16 AGs due to the
automatic scaling, then shortly thereafter format a much smaller device
and see the default minimum of 4 AGs.  Since you assumed agcount was
statically defined, you concluded the default value had been decreased.
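You can observe the automatic scaling yourself without touching a real
disk by running mkfs.xfs in dry-run mode against sparse files of
different sizes.  A rough sketch (assumes xfsprogs is installed; -N
prints the computed geometry and writes nothing, and the reported
agcount will vary with your xfsprogs version):

```shell
# Create sparse files as stand-in devices of two different sizes.
truncate -s 100G /tmp/small.img
truncate -s 2T   /tmp/big.img

# -N: dry run, print geometry only; -f: allow a regular file as target.
# Compare the agcount= field in the two outputs.
mkfs.xfs -N -f /tmp/small.img | grep agcount
mkfs.xfs -N -f /tmp/big.img   | grep agcount

# Clean up the scratch files.
rm -f /tmp/small.img /tmp/big.img
```

Since no stripe geometry can be detected on a plain file, this shows
the single-device scaling path rather than the striped-volume default.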

-- 
Stan

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
