On Fri, Dec 17, 2021 at 09:45:00AM -0800, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@xxxxxxxxxx>
> 
> While I was running with KASAN and lockdep enabled, I stumbled upon a
> KASAN report about a UAF to a freed CIL checkpoint. Looking at the
> comment for xfs_log_item_in_current_chkpt, it seems pretty obvious to me
> that the original patch to xfs_defer_finish_noroll should have done
> something to lock the CIL to prevent it from switching the CIL contexts
> while the predicate runs.
> 
> For upper level code that needs to know if a given log item is new
> enough not to need relogging, add a new wrapper that takes the CIL
> context lock long enough to sample the current CIL context. This is
> kind of racy in that the CIL can switch the contexts immediately after
> sampling, but that's ok because the consequence is that the defer ops
> code is a little slow to relog items.
....
> 
> Fixes: 4e919af7827a ("xfs: periodically relog deferred intent items")
> Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> ---
> v2: use READ_ONCE on CIL instead of taking locks to get cil ctx
> ---
>  fs/xfs/xfs_log_cil.c |    6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c
> index 6c93c8ada6f3..b59cc9c0961c 100644
> --- a/fs/xfs/xfs_log_cil.c
> +++ b/fs/xfs/xfs_log_cil.c
> @@ -1442,9 +1442,9 @@ xlog_cil_force_seq(
>   */
>  bool
>  xfs_log_item_in_current_chkpt(
> -	struct xfs_log_item *lip)
> +	struct xfs_log_item	*lip)
>  {
> -	struct xfs_cil_ctx *ctx = lip->li_mountp->m_log->l_cilp->xc_ctx;
> +	struct xfs_cil		*cil = lip->li_mountp->m_log->l_cilp;
>  
>  	if (list_empty(&lip->li_cil))
>  		return false;
> @@ -1454,7 +1454,7 @@ xfs_log_item_in_current_chkpt(
>  	 * first checkpoint it is written to. Hence if it is different to the
>  	 * current sequence, we're in a new checkpoint.
>  	 */
> -	return lip->li_seq == ctx->sequence;
> +	return lip->li_seq == READ_ONCE(cil->xc_current_sequence);
>  }

Looks good.

Reviewed-by: Dave Chinner <dchinner@xxxxxxxxxx>

-- 
Dave Chinner
david@xxxxxxxxxxxxx
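
[Editor's note: for readers without the tree handy, the predicate with the v2 hunks applied reads roughly as below. This is a sketch reassembled from the quoted diff context, so indentation and surrounding code may differ slightly from the actual fs/xfs/xfs_log_cil.c.]

bool
xfs_log_item_in_current_chkpt(
	struct xfs_log_item	*lip)
{
	struct xfs_cil		*cil = lip->li_mountp->m_log->l_cilp;

	/* Never committed to the CIL, so it cannot be in any checkpoint. */
	if (list_empty(&lip->li_cil))
		return false;

	/*
	 * li_seq is written on the first commit of a log item to record the
	 * first checkpoint it is written to. Hence if it is different to the
	 * current sequence, we're in a new checkpoint. READ_ONCE() samples
	 * the CIL's current sequence without taking the CIL context lock;
	 * a CIL switch racing with this read just means the defer ops code
	 * is a little slow to relog the item.
	 */
	return lip->li_seq == READ_ONCE(cil->xc_current_sequence);
}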