Re: [RFC PATCH v4 0/3] xfs: more unlinked inode list optimization v4

Hi Darrick,

On Tue, Aug 18, 2020 at 05:53:34PM -0700, Darrick J. Wong wrote:
> On Tue, Aug 18, 2020 at 09:30:12PM +0800, Gao Xiang wrote:
> > Hi folks,
> > 
> > This is RFC v4 version which is based on Dave's latest patchset:
> >  https://lore.kernel.org/r/20200812092556.2567285-1-david@xxxxxxxxxxxxx
> 
> As we already discussed on IRC, please send new revisions of patchsets
> as a separate thread from the old submission.

Okay, I will definitely do that next time.

> 
> > I didn't send out v3 because it was based on Dave's previous RFC
> > patchset, but I'm still not quite sure to drop RFC tag since this
> > version is different from the previous versions...
> 
> Hm, this cover letter could use some tidying up, since it took me a bit
> of digging to figure out that yes, this is the successor of the old
> series that tried to get the AGI buffer lock out of the way if we're
> adding a newly unlinked inode to the end of the unlinked list.

I'll try to clarify anything that reads unclear in the code in the
next revision.

I discussed these constraints with Dave on IRC weeks ago, but I'm not
sure I can write them out in fluent, formal words...

I think there are 2 independent things:
 1) avoiding taking AGI buffer lock if AGI buffer is untouched;
 2) adding a newly unlinked inode to the end of the unlinked list.

So, 2) can be achieved without 1), since the AGI buffer lock is a
strong lock and can be taken recursively. But if we'd like to add a
new per-AG iunlink lock, there are new constraints (locking order and
deadlock concerns) compared with the current approach.

In summary, due to many existing paths (e.g. the tmpfile path), we need
to take the locks in the following order:
  AGI buffer lock -> per-AG iunlink lock.

Otherwise it could deadlock. Also, we cannot release the per-AG iunlink
lock before all iunlink operations in the transaction are committed, or
it could corrupt the on-disk unlinked lists...

> 
> > Changes since v2:
> >  - rebase on new patchset, and omit the original first patch
> >    "xfs: arrange all unlinked inodes into one list" since it now
> >    has better form in the base patchset;
> > 
> >  - a tail xfs_inode pointer is no longer needed since the original
> >    patchset introduced list_head iunlink infrastructure and it can
> >    be used to get the tail inode;
> > 
> >  - hold the pag_iunlink_mutex until all iunlink log items are
> >    committed. Otherwise, the xfs_iunlink_log() order would not match
> >    the trans commit order, so updates could be reordered and cause
> >    the metadata corruption I mentioned in v2.
> > 
> >    In order to achieve that, a recursion count is introduced since
> >    there could be several iunlink operations in one transaction,
> >    and some per-AG fields are introduced as well since these
> >    operations in the transaction may not operate on inodes in the
> >    same AG. We may also need to take the AGI buffer lock in advance
> >    (e.g. in the whiteout rename path) due to iunlink operations and
> >    the locking order constraint.
> >    For more details, see the related inline comments as well...
> > 
> >  - "xfs: get rid of unused pagi_unlinked_hash" would be better folded
> >    into original patchset since pagi_unlinked_hash is no longer needed.
> > 
> > ============
> > 
> > [Original text]
> > 
> > This RFC patchset mainly addresses the thoughts [*] and [**] from Dave's
> > original patchset,
> > https://lore.kernel.org/r/20200623095015.1934171-1-david@xxxxxxxxxxxxx
> > 
> > In short, it focuses on the following ideas mentioned by Dave:
> >  - use bucket 0 instead of multiple buckets since the in-memory
> >    doubly linked list finally works;
> > 
> >  - avoid taking the AGI buffer and unnecessary AGI updates if
> >    possible, so
> >    1) add a new lock and keep a proper locking order to avoid deadlock;
> >    2) insert a new unlinked inode at the tail instead of the head;
> > 
> > In addition, it's worth noticing 3 things:
> >  - xfs_iunlink_remove() should still support the old multiple
> >    buckets in order to keep recovery of an old on-disk image (with
> >    old inode unlinked lists) working.
> > 
> >  - (but) OTOH, old kernels _shouldn't_ recover a new image, since
> >    the bucket_index in the old xfs_iunlink_remove() is generated by
> >    the old formula (rather than kept in xfs_inode), which is now
> >    fixed as 0. So this feature is not forward compatible without
> >    some extra backport patches;
> 
> Oh?  These seem like serious limitations, are they still true?

Yeah, I think that's still true (I tested it on my VM before).

Thanks,
Gao Xiang

> 
> --D
> 
> >  - a tail xfs_inode pointer is also added to the perag, which keeps
> >    track of the tail of bucket 0, since it's mainly used by
> >    xfs_iunlink().
> > 
> > 
> > The git tree is also available at
> > git://git.kernel.org/pub/scm/linux/kernel/git/xiang/linux.git tags/xfs/iunlink_opt_v4
> > 
> > Gitweb:
> > https://git.kernel.org/pub/scm/linux/kernel/git/xiang/linux.git/log/?h=xfs/iunlink_opt_v4
> > 
> > 
> > Some preliminary tests have been done (including fstests, though
> > there seem to be some pre-existing failures that I haven't looked
> > into yet). I also confirmed that the metadata corruption mentioned
> > in RFC v2 no longer occurs.
> > 
> > To confirm that I'm heading in the right direction, I'm posting the
> > latest version now since it hasn't been updated for a while.
> > 
> > Comments and directions are welcomed. :)
> > 
> > Thanks,
> > Gao Xiang
> > 
> > Gao Xiang (3):
> >   xfs: get rid of unused pagi_unlinked_hash
> >   xfs: introduce perag iunlink lock
> >   xfs: insert unlinked inodes from tail
> > 
> >  fs/xfs/xfs_inode.c        | 194 ++++++++++++++++++++++++++++++++------
> >  fs/xfs/xfs_inode.h        |   1 +
> >  fs/xfs/xfs_iunlink_item.c |  16 ++++
> >  fs/xfs/xfs_mount.c        |   4 +
> >  fs/xfs/xfs_mount.h        |  14 +--
> >  5 files changed, 193 insertions(+), 36 deletions(-)
> > 
> > -- 
> > 2.18.1
> > 
> 



