Re: [PATCH v6 6/7] xfs: support shrinking unused space in the last AG

Hi Brian,

On Thu, Feb 04, 2021 at 07:33:03AM -0500, Brian Foster wrote:
> On Thu, Feb 04, 2021 at 03:02:17AM +0800, Gao Xiang wrote:

....

> > > 
> > > Long question:
> > > 
> > > The reason why we use (nb - dblocks) is because growfs is an all or
> > > nothing operation -- either we succeed in writing new empty AGs and
> > > inflating the (former) last AG of the fs, or we don't do anything at
> > > all.  We don't allow partial growing; if we did, then delta would be
> > > relevant here.  I think we get away with not needing to run transactions
> > > for each AG because those new AGs are inaccessible until we commit the
> > > new agcount/dblocks, right?
> > > 
> > > In your design for the fs shrinker, do you anticipate being able to
> > > eliminate all the eligible AGs in a single transaction?  Or do you
> > > envision only tackling one AG at a time?  And can we be partially
> > > successful with a shrink?  e.g. we succeed at eliminating the last AG,
> > > but then the one before that isn't empty and so we bail out, but by that
> > > point we did actually make the fs a little bit smaller.
> > 
> > Thanks for your question. I'm about to sleep, but I'll try to answer
> > it here.
> > 
> > As for my current experiment / understanding, I think eliminating all
> > the empty AGs + shrinking the tail AG in a single transaction is possible,
> > and that is what I've done so far:
> >  1) check that the remaining AGs are empty (from the nagcount AG to the
> >     oagcount - 1 AG) and mark them all inactive (AGs frozen);
> >  2) consume an extent from the (nagcount - 1) AG;
> >  3) decrease agcount from oagcount to nagcount.
> > 
> > Both 2) and 3) can be done in the same transaction, and after 1) the state
> > of such empty AGs is fixed as well. So the on-disk fs and runtime states
> > are updated atomically.
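To make those three quoted steps a bit more concrete, the control flow is
roughly the pseudocode below. Helper names like xfs_ag_is_empty() and
xfs_ag_mark_inactive() are placeholders for illustration only, not actual
kernel interfaces:

```
/*
 * Rough sketch of the single-transaction shrink; helper names
 * are hypothetical.
 */
shrink(mp, nagcount, tail_delta):
        /* 1) verify the trailing AGs are empty and freeze them */
        for agno in [nagcount, oagcount):
                if (!xfs_ag_is_empty(mp, agno))
                        return -ENOSPC
                xfs_ag_mark_inactive(mp, agno)  /* no new allocations */

        /* 2) + 3) in one transaction */
        tp = begin transaction
        consume tail_delta blocks from AG (nagcount - 1)    /* step 2 */
        sb_agcount = nagcount, sb_dblocks -= tail_delta     /* step 3 */
        commit(tp)      /* both changes go live atomically */
```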
> > 
> > > 
> > > There's this comment at the bottom of xfs_growfs_data() that says that
> > > we can return error codes if the secondary sb update fails, even if the
> > > new size is already live.  This convinces me that it's always been the
> > > case that callers of the growfs ioctl are supposed to re-query the fs
> > > geometry afterwards to find out if the fs size changed, even if the
> > > ioctl itself returns an error... which implies that partial grow/shrink
> > > are a possibility.
> > > 
> > 
> > I didn't realize that possibility, but if my understanding is correct,
> > the process described above doesn't need incremental shrinking by
> > design. That said, it also supports incremental shrinking if users
> > invoke the ioctl multiple times.
> > 
> 
> This was one of the things I wondered about on an earlier version of
> this work: whether we wanted the shrink to be deliberately incremental
> or not. I suspect that somewhat applies even to this version without AG
> truncation, because technically we could allocate as much as possible
> out of the end of the last AG and shrink by that amount. My initial
> thought was that if the implementation is going to be opportunistic
> (i.e., we provide no help to actually free up targeted space), perhaps
> an incremental implementation is a useful means to allow the operation
> to make progress. E.g., run a shrink, observe it didn't fully complete,
> shuffle around some files, repeat, etc.
> 
> IIRC, one of the downsides of that sort of approach is any use case
> where the goal is an underlying storage device resize. I suppose an
> underlying device resize could also be opportunistic, but it seems more
> likely to me that use case would prefer an all or nothing approach,
> particularly if associated userspace tools don't really know how to
> handle a partially successful fs shrink. Do we have any idea how other
> tools/fs' behave in this regard (I thought ext4 supported shrink)? FWIW,
> it also seems potentially annoying to ask for a largish shrink only for
> the tool to hand back something relatively tiny.
> 
> Based on your design description, it occurs to me that perhaps the ideal
> outcome is an implementation that supports a fully atomic all-or-nothing
> shrink (assuming this is reasonably possible), but supports an optional
> incremental mode specified by the interface. IOW, if we have the ability
> to perform all-or-nothing, then it _seems_ like a minor interface
> enhancement to support incremental on top of that as opposed to the
> other way around. Therefore, perhaps that should be the initial goal
> until shown to be too complex or otherwise problematic..?
> 

I can't say too much about this yet, but my current observation is that
shrinking the tail empty AG [+ empty AGs (optional)] in one transaction
is practical (I don't see any barrier so far [1]). I'm implementing an
atomic all-or-nothing truncation, and userspace can use it either in an
all-or-nothing way (I saw Dave's spaceman work before) or in an
incremental way (with a binary-search approach and multiple ioctls)...
In principle, supporting the ioctl with an extra partial-shrinking
feature is practical as well (though additional work might be needed).
Also, I'm not sure that's user-friendly, since most end users probably
want an all-or-nothing result (at least for the fs truncation step).

btw, afaik (my limited understanding), ext4 shrinking is an offline
approach, so it's somewhat easier to implement (no runtime impact to
consider), and it behaves as an all-or-nothing truncation as well.
(Although resize2fs also supports -M to shrink the filesystem to the
minimum size, I think that can be implemented with multiple
all-or-nothing shrink ioctls...)

Thanks,
Gao Xiang

[1] it's somewhat outdated, but I'd like to finish this tail AG patchset
first:
https://git.kernel.org/pub/scm/linux/kernel/git/xiang/linux.git/log/?h=xfs/shrink2

> Brian
>

