On Wed, Sep 18, 2019 at 11:38:18AM -0700, Darrick J. Wong wrote:
> On Mon, Sep 16, 2019 at 08:16:25AM -0400, Brian Foster wrote:
> > The upcoming allocation algorithm update searches multiple
> > allocation btree cursors concurrently. As such, it requires an
> > active state to track when a particular cursor should continue
> > searching. While active state will be modified based on higher level
> > logic, we can define base functionality based on the result of
> > allocation btree lookups.
> >
> > Define an active flag in the private area of the btree cursor.
> > Update it based on the result of lookups in the existing allocation
> > btree helpers. Finally, provide a new helper to query the current
> > state.
> >
> > Signed-off-by: Brian Foster <bfoster@xxxxxxxxxx>
> > ---
> >  fs/xfs/libxfs/xfs_alloc.c       | 24 +++++++++++++++++++++---
> >  fs/xfs/libxfs/xfs_alloc_btree.c |  1 +
> >  fs/xfs/libxfs/xfs_btree.h       |  3 +++
> >  3 files changed, 25 insertions(+), 3 deletions(-)
> >
> > diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> > index 533b04aaf6f6..512a45888e06 100644
> > --- a/fs/xfs/libxfs/xfs_alloc.c
> > +++ b/fs/xfs/libxfs/xfs_alloc.c
> > @@ -146,9 +146,13 @@ xfs_alloc_lookup_eq(
> >  	xfs_extlen_t		len,	/* length of extent */
> >  	int			*stat)	/* success/failure */
> >  {
> > +	int			error;
> > +
> >  	cur->bc_rec.a.ar_startblock = bno;
> >  	cur->bc_rec.a.ar_blockcount = len;
> > -	return xfs_btree_lookup(cur, XFS_LOOKUP_EQ, stat);
> > +	error = xfs_btree_lookup(cur, XFS_LOOKUP_EQ, stat);
> > +	cur->bc_private.a.priv.abt.active = *stat == 1 ? true : false;
>
> I think "cur->bc_private.a.priv.abt.active = (*stat == 1);" would have
> sufficed for these, right?  (Yeah, sorry, picking at nits here...)
>

Sure, I'll fix those up.

Brian

> --D
>
> > +	return error;
> >  }
> >
> >  /*
> > @@ -162,9 +166,13 @@ xfs_alloc_lookup_ge(
> >  	xfs_extlen_t		len,	/* length of extent */
> >  	int			*stat)	/* success/failure */
> >  {
> > +	int			error;
> > +
> >  	cur->bc_rec.a.ar_startblock = bno;
> >  	cur->bc_rec.a.ar_blockcount = len;
> > -	return xfs_btree_lookup(cur, XFS_LOOKUP_GE, stat);
> > +	error = xfs_btree_lookup(cur, XFS_LOOKUP_GE, stat);
> > +	cur->bc_private.a.priv.abt.active = *stat == 1 ? true : false;
> > +	return error;
> >  }
> >
> >  /*
> > @@ -178,9 +186,19 @@ xfs_alloc_lookup_le(
> >  	xfs_extlen_t		len,	/* length of extent */
> >  	int			*stat)	/* success/failure */
> >  {
> > +	int			error;
> >  	cur->bc_rec.a.ar_startblock = bno;
> >  	cur->bc_rec.a.ar_blockcount = len;
> > -	return xfs_btree_lookup(cur, XFS_LOOKUP_LE, stat);
> > +	error = xfs_btree_lookup(cur, XFS_LOOKUP_LE, stat);
> > +	cur->bc_private.a.priv.abt.active = *stat == 1 ? true : false;
> > +	return error;
> > +}
> > +
> > +static inline bool
> > +xfs_alloc_cur_active(
> > +	struct xfs_btree_cur	*cur)
> > +{
> > +	return cur && cur->bc_private.a.priv.abt.active;
> >  }
> >
> >  /*
> > diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c
> > index 2a94543857a1..279694d73e4e 100644
> > --- a/fs/xfs/libxfs/xfs_alloc_btree.c
> > +++ b/fs/xfs/libxfs/xfs_alloc_btree.c
> > @@ -507,6 +507,7 @@ xfs_allocbt_init_cursor(
> >
> >  	cur->bc_private.a.agbp = agbp;
> >  	cur->bc_private.a.agno = agno;
> > +	cur->bc_private.a.priv.abt.active = false;
> >
> >  	if (xfs_sb_version_hascrc(&mp->m_sb))
> >  		cur->bc_flags |= XFS_BTREE_CRC_BLOCKS;
> > diff --git a/fs/xfs/libxfs/xfs_btree.h b/fs/xfs/libxfs/xfs_btree.h
> > index ced1e65d1483..b4e3ec1d7ff9 100644
> > --- a/fs/xfs/libxfs/xfs_btree.h
> > +++ b/fs/xfs/libxfs/xfs_btree.h
> > @@ -183,6 +183,9 @@ union xfs_btree_cur_private {
> >  		unsigned long	nr_ops;		/* # record updates */
> >  		int		shape_changes;	/* # of extent splits */
> >  	} refc;
> > +	struct {
> > +		bool		active;		/* allocation cursor state */
> > +	} abt;
> >  };
> >
> >  /*
> > --
> > 2.20.1
> >