Re: "This is a bug."

On Thu, Sep 10, 2015 at 09:03:39PM +0300, Tapani Tarvainen wrote:
> On Thu, Sep 10, 2015 at 01:55:58PM -0400, Brian Foster (bfoster@xxxxxxxxxx) wrote:
> 
> > > > So that's a 6TB fs with over 24000 allocation groups of size 256MB, as
> > > > opposed to the mkfs default of 6 allocation groups of 1TB each. Is that
> > > > intentional?
> > > 
> > > Not to my knowledge. Unless I'm mistaken, the filesystem was created
> > > while the machine was running Debian Squeeze, using whatever defaults
> > > were back then.
> 
> > Strange... was the filesystem created small and then grown to a much
> > larger size via xfs_growfs?
> 
> Almost certainly yes, although how small it initially was I'm not
> sure.
> 

That probably explains it, then. While growfs is fully supported, growing
from a very small filesystem to a very large one like this is usually a
bad idea, precisely because it produces this kind of odd geometry. mkfs
formats the fs with an ideal default geometry based on the size of the
device at creation time, but the allocation group size cannot be changed
once the filesystem exists. Therefore, growfs can only add more AGs of
the original size.
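The resulting geometry can be sketched with some rough arithmetic (a
simplified illustration only; the exact numbers come from xfs_info, and
the on-disk layout has details this ignores):

```python
# Hedged sketch: growfs keeps the agsize chosen at mkfs time, so the
# AG count scales with the new device size. Sizes are illustrative.

TB = 1024 ** 4
MB = 1024 ** 2

def ag_count(fs_size, agsize):
    """Number of AGs needed to cover fs_size at a fixed agsize (sketch)."""
    return -(-fs_size // agsize)  # ceiling division

agsize = 256 * MB                 # AG size fixed when the fs was created small

# Filesystem later grown to 6 TB: AG count balloons.
print(ag_count(6 * TB, agsize))   # 24576 AGs of 256 MB each

# What a fresh mkfs of a 6 TB device would pick (default ~1 TB AGs):
print(ag_count(6 * TB, 1 * TB))   # 6 AGs
```

This is why the reported filesystem shows over 24000 AGs where a fresh
mkfs would have produced 6.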

As a result, you end up with a 6TB filesystem with >24k allocation
groups, whereas mkfs would format a 6TB device with 6 allocation groups
by default (though I think specifying a stripe unit can tweak this). My
understanding is that the AG count can sanely be increased on systems
with large CPU counts and such, but that means something on the order of
32 or 64 allocation groups, not thousands.

I'd expect a filesystem this large with such small allocation groups to
introduce overhead on several fronts: more metadata (24k AGIs and AGFs,
plus 2x free space btrees and 1x inode btree per AG), more time spent in
the AG selection algorithms for allocations and whatnot, increased
fragmentation because the maximum contiguous extent size is capped at
the AG size, more work for userspace tools such as repair, and probably
other weird or non-obvious side effects that I'm not familiar with.
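The metadata tally alone can be sketched as follows (counting only the
per-AG root structures named above; this ignores btree depth and other
on-disk detail):

```python
# Hedged sketch: each AG carries an AGI, an AGF, two free-space btrees,
# and one inode btree (root structures only; a deliberate simplification).

PER_AG = 1 + 1 + 2 + 1  # agi + agf + 2x free space btrees + 1x inode btree

print(24576 * PER_AG)   # 122880 structures with 256 MB AGs
print(6 * PER_AG)       # 30 structures with the default 6 x 1 TB geometry
```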

Brian

> -- 
> Tapani Tarvainen

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


