On Wed, Oct 30, 2019 at 09:37:11PM +0800, Pingfan Liu wrote:
> xc_cil_lock is not enough to protect the integrity of a trans logging.
> Taking the scenario:
>   cpuA                    cpuB                    cpuC
>
>   xlog_cil_insert_format_items()
>
>   spin_lock(&cil->xc_cil_lock)
>   link transA's items to xc_cil,
>       including item1
>   spin_unlock(&cil->xc_cil_lock)
>                                                   xlog_cil_push() fetches
>                                                       transA's item under
>                                                       xc_cil_lock
>                           issue transB, modify item1
>                                                   xlog_write(), but now,
>                                                       item1 contains content
>                                                       from transB and we have
>                                                       a broken transA

TL;DR: 1. log vectors. 2. CIL context lock exclusion.

When CPU A formats the item during commit, it copies all the changes
into a list of log vectors, and that is attached to the log item and
the item is added to the CIL. The item is then unlocked. This is done
with the CIL context lock held, excluding CIL pushes.

When CPU C pushes on the CIL, it detaches the -log vectors- from the
log item and removes the item from the CIL. This is done holding the
CIL context lock, excluding transaction commits from modifying the
CIL log vector list. It then formats the -log vectors- into the
journal by passing them to xlog_write(). It does not use log items
for this, and because the log vector list has been isolated and is
now private to the push context, we don't need to hold any locks
anymore to call xlog_write()....

When CPU B modifies item1, it modifies the item and logs the new
changes to the log item. It does not modify the log vector that might
be attached to the log item from a previous change. The log vector is
only updated during transaction commit, so the changes being made in
a transaction on CPU B are private to that transaction until they are
committed, formatted into log vectors and inserted into the CIL under
the CIL context lock.

> Survive this race issue by putting under the protection of xc_ctx_lock.
> Meanwhile the xc_cil_lock can be dropped as xc_ctx_lock does it against
> xlog_cil_insert_items()
>
> Signed-off-by: Pingfan Liu <kernelfans@xxxxxxxxx>
> Cc: "Darrick J. Wong" <darrick.wong@xxxxxxxxxx>
> Cc: Brian Foster <bfoster@xxxxxxxxxx>
> To: linux-xfs@xxxxxxxxxxxxxxx
> Cc: linux-fsdevel@xxxxxxxxxxxxxxx
> ---
>  fs/xfs/xfs_log_cil.c | 35 +++++++++++++++++++----------------
>  1 file changed, 19 insertions(+), 16 deletions(-)
>
> diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c
> index 004af09..f8df3b5 100644
> --- a/fs/xfs/xfs_log_cil.c
> +++ b/fs/xfs/xfs_log_cil.c
> @@ -723,22 +723,6 @@ xlog_cil_push(
>  	 */
>  	lv = NULL;
>  	num_iovecs = 0;
> -	spin_lock(&cil->xc_cil_lock);
> -	while (!list_empty(&cil->xc_cil)) {
> -		struct xfs_log_item *item;
> -
> -		item = list_first_entry(&cil->xc_cil,
> -					struct xfs_log_item, li_cil);
> -		list_del_init(&item->li_cil);
> -		if (!ctx->lv_chain)
> -			ctx->lv_chain = item->li_lv;
> -		else
> -			lv->lv_next = item->li_lv;
> -		lv = item->li_lv;
> -		item->li_lv = NULL;
> -		num_iovecs += lv->lv_niovecs;
> -	}
> -	spin_unlock(&cil->xc_cil_lock);
>
>  	/*
>  	 * initialise the new context and attach it to the CIL. Then attach
> @@ -783,6 +767,25 @@ xlog_cil_push(
>  	up_write(&cil->xc_ctx_lock);
   	^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We don't hold the CIL context lock anymore....

>
>  	/*
> +	 * cil->xc_cil_lock around this loop can be dropped, since xc_ctx_lock
> +	 * protects us against xlog_cil_insert_items().
> +	 */
> +	while (!list_empty(&cil->xc_cil)) {
> +		struct xfs_log_item *item;
> +
> +		item = list_first_entry(&cil->xc_cil,
> +					struct xfs_log_item, li_cil);
> +		list_del_init(&item->li_cil);
> +		if (!ctx->lv_chain)
> +			ctx->lv_chain = item->li_lv;
> +		else
> +			lv->lv_next = item->li_lv;
> +		lv = item->li_lv;
> +		item->li_lv = NULL;
> +		num_iovecs += lv->lv_niovecs;
> +	}

So this is completely unserialised now. i.e. even if there was a
problem like you suggest, this modification doesn't do what you say
it does.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx