On Thu, Aug 15, 2019 at 08:55:35AM -0400, Brian Foster wrote:
> The upcoming allocation algorithm update searches multiple
> allocation btree cursors concurrently. As such, it requires an
> active state to track when a particular cursor should continue
> searching. While active state will be modified based on higher level
> logic, we can define base functionality based on the result of
> allocation btree lookups.
>
> Define an active flag in the private area of the btree cursor.
> Update it based on the result of lookups in the existing allocation
> btree helpers. Finally, provide a new helper to query the current
> state.
>
> Signed-off-by: Brian Foster <bfoster@xxxxxxxxxx>
> ---
>  fs/xfs/libxfs/xfs_alloc.c       | 24 +++++++++++++++++++++---
>  fs/xfs/libxfs/xfs_alloc_btree.c |  1 +
>  fs/xfs/libxfs/xfs_btree.h       |  3 +++
>  3 files changed, 25 insertions(+), 3 deletions(-)
>
> diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> index 372ad55631fc..6340f59ac3f4 100644
> --- a/fs/xfs/libxfs/xfs_alloc.c
> +++ b/fs/xfs/libxfs/xfs_alloc.c
> @@ -146,9 +146,13 @@ xfs_alloc_lookup_eq(
>  	xfs_extlen_t		len,	/* length of extent */
>  	int			*stat)	/* success/failure */
>  {
> +	int			error;
> +
>  	cur->bc_rec.a.ar_startblock = bno;
>  	cur->bc_rec.a.ar_blockcount = len;
> -	return xfs_btree_lookup(cur, XFS_LOOKUP_EQ, stat);
> +	error = xfs_btree_lookup(cur, XFS_LOOKUP_EQ, stat);
> +	cur->bc_private.a.priv.abt.active = *stat;

<urk> Not really a fan of mixing types (even if they are bool and int);
how hard would it be to convert some of these *stat to bool?

Does abt.active have a use outside of the struct xfs_alloc_cur in the
next patch?
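For illustration only, a standalone toy sketch of keeping the lookup result
a bool end to end (the `toy_*` names below are invented for this sketch and
are not kernel code; the real helpers operate on `struct xfs_btree_cur`):

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for the allocation cursor's private area. */
struct toy_cursor {
	bool	active;		/* cursor has a usable position */
};

/*
 * Toy stand-in for xfs_btree_lookup(): return 0 or a negative errno,
 * and report found/not-found through a bool rather than an int.
 */
static int
toy_btree_lookup(bool found, bool *stat)
{
	*stat = found;
	return 0;
}

/*
 * Mirror of the patch's xfs_alloc_lookup_eq(): propagate the lookup
 * result into the cursor's active flag with matching types, so no
 * int-to-bool conversion happens on assignment.
 */
static int
toy_lookup_eq(struct toy_cursor *cur, bool found, bool *stat)
{
	int	error = toy_btree_lookup(found, stat);

	cur->active = *stat;
	return error;
}

/* Query helper, analogous to the patch's xfs_alloc_cur_active(). */
static bool
toy_cur_active(struct toy_cursor *cur)
{
	return cur && cur->active;
}
```

A successful lookup marks the cursor active, a failed one deactivates it,
and a NULL cursor simply reads as inactive.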
--D

> +	return error;
>  }
>
>  /*
> @@ -162,9 +166,13 @@ xfs_alloc_lookup_ge(
>  	xfs_extlen_t		len,	/* length of extent */
>  	int			*stat)	/* success/failure */
>  {
> +	int			error;
> +
>  	cur->bc_rec.a.ar_startblock = bno;
>  	cur->bc_rec.a.ar_blockcount = len;
> -	return xfs_btree_lookup(cur, XFS_LOOKUP_GE, stat);
> +	error = xfs_btree_lookup(cur, XFS_LOOKUP_GE, stat);
> +	cur->bc_private.a.priv.abt.active = *stat;
> +	return error;
>  }
>
>  /*
> @@ -178,9 +186,19 @@ xfs_alloc_lookup_le(
>  	xfs_extlen_t		len,	/* length of extent */
>  	int			*stat)	/* success/failure */
>  {
> +	int			error;
>  	cur->bc_rec.a.ar_startblock = bno;
>  	cur->bc_rec.a.ar_blockcount = len;
> -	return xfs_btree_lookup(cur, XFS_LOOKUP_LE, stat);
> +	error = xfs_btree_lookup(cur, XFS_LOOKUP_LE, stat);
> +	cur->bc_private.a.priv.abt.active = *stat;
> +	return error;
> +}
> +
> +static inline bool
> +xfs_alloc_cur_active(
> +	struct xfs_btree_cur	*cur)
> +{
> +	return cur && cur->bc_private.a.priv.abt.active;
>  }
>
>  /*
> diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c
> index 2a94543857a1..279694d73e4e 100644
> --- a/fs/xfs/libxfs/xfs_alloc_btree.c
> +++ b/fs/xfs/libxfs/xfs_alloc_btree.c
> @@ -507,6 +507,7 @@ xfs_allocbt_init_cursor(
>
>  	cur->bc_private.a.agbp = agbp;
>  	cur->bc_private.a.agno = agno;
> +	cur->bc_private.a.priv.abt.active = false;
>
>  	if (xfs_sb_version_hascrc(&mp->m_sb))
>  		cur->bc_flags |= XFS_BTREE_CRC_BLOCKS;
> diff --git a/fs/xfs/libxfs/xfs_btree.h b/fs/xfs/libxfs/xfs_btree.h
> index fa3cd8ab9aba..a66063c356cc 100644
> --- a/fs/xfs/libxfs/xfs_btree.h
> +++ b/fs/xfs/libxfs/xfs_btree.h
> @@ -183,6 +183,9 @@ union xfs_btree_cur_private {
>  		unsigned long	nr_ops;		/* # record updates */
>  		int		shape_changes;	/* # of extent splits */
>  	} refc;
> +	struct {
> +		bool		active;		/* allocation cursor state */
> +	} abt;
>  };
>
>  /*
> --
> 2.20.1
>