On Thu, Jun 03, 2021 at 10:38:19AM +1000, Dave Chinner wrote:
> On Thu, May 27, 2021 at 12:13:19PM -0700, Darrick J. Wong wrote:
> > On Wed, May 19, 2021 at 10:13:13PM +1000, Dave Chinner wrote:
> .....
> > > @@ -913,25 +912,23 @@ xlog_cil_push_work(
> > >  		xlog_cil_pcp_aggregate(cil, ctx);
> > > 
> > >  	list_sort(NULL, &ctx->log_items, xlog_cil_order_cmp);
> > > -
> > >  	while (!list_empty(&ctx->log_items)) {
> > >  		struct xfs_log_item	*item;
> > > 
> > >  		item = list_first_entry(&ctx->log_items,
> > >  					struct xfs_log_item, li_cil);
> > > +		lv = item->li_lv;
> > >  		list_del_init(&item->li_cil);
> > >  		item->li_order_id = 0;
> > > -		if (!ctx->lv_chain)
> > > -			ctx->lv_chain = item->li_lv;
> > > -		else
> > > -			lv->lv_next = item->li_lv;
> > > -		lv = item->li_lv;
> > >  		item->li_lv = NULL;
> > > -		num_iovecs += lv->lv_niovecs;
> > > 
> > > +		num_iovecs += lv->lv_niovecs;
> > Not sure why "lv = item->li_lv" needed to move up?
> > 
> > I think the only change needed here is replacing the lv_chain/lv_next
> > business with the list_add_tail?
> 
> Yes, but someone complained about the awful diff in the next patch,
> so moving the "lv = item->li_lv" made the diff in the next patch
> much, much cleaner...
> 
> <shrug>
> 
> I can move it back to the next patch if you really want, but it's
> really just shuffling deck chairs at this point...

Nope, don't care that much.

> > > @@ -985,8 +985,14 @@ xlog_cil_push_work(
> > >  	 * use the commit record lsn then we can move the tail beyond the grant
> > >  	 * write head.
> > >  	 */
> > > -	error = xlog_write(log, &lvhdr, ctx->ticket, &ctx->start_lsn, NULL,
> > > -				num_bytes);
> > > +	error = xlog_write(log, &ctx->lv_chain, ctx->ticket, &ctx->start_lsn,
> > > +				NULL, num_bytes);
> > > +
> > > +	/*
> > > +	 * Take the lvhdr back off the lv_chain as it should not be passed
> > > +	 * to log IO completion.
> > > +	 */
> > > +	list_del(&lvhdr.lv_list);
> > Seems a little clunky, but I guess I see why it's needed.
> 
> I could replace the stack structure with a memory allocation and
> then we wouldn't need to care, but I'm trying to keep memory
> allocation out of this fast path as much as possible....

Oh, that's much worse.

> > I /think/ I don't see any place where the onstack lvhdr can escape out
> > of the chain after _push_work returns, so this is safe enough.
> 
> It can't, because we own the chain here and are completely
> responsible for cleaning it up on failure.

Ok.  I think I'm satisfied now:
Reviewed-by: Darrick J. Wong <djwong@xxxxxxxxxx>

--D

> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@xxxxxxxxxxxxx
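
A minimal, self-contained userspace sketch of the on-stack header pattern
discussed above: the header element is linked onto the front of the chain
only for the duration of the write, then unlinked again before anything
else can see the chain, so the stack object never reaches IO completion
and no heap allocation is needed on the fast path. The names here
(struct vec, write_chain, and so on) are illustrative stand-ins, not the
real XFS types or functions.

#include <stdio.h>
#include <stddef.h>

/* tiny doubly-linked list, same shape as the kernel's struct list_head */
struct list_head {
	struct list_head *next, *prev;
};

#define LIST_HEAD_INIT(name)	{ &(name), &(name) }

static void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

static void list_del(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
	entry->next = entry->prev = entry;
}

struct vec {
	struct list_head	list;
	const char		*name;
};

/* stand-in for the writer: walks and consumes every vector on the chain */
static void write_chain(struct list_head *chain)
{
	struct list_head *p;

	for (p = chain->next; p != chain; p = p->next) {
		struct vec *v = (struct vec *)((char *)p -
					       offsetof(struct vec, list));
		printf("writing %s\n", v->name);
	}
}

int main(void)
{
	struct list_head chain = LIST_HEAD_INIT(chain);
	struct vec item = { .name = "log item vector" };
	struct vec hdr  = { .name = "on-stack header vector" };

	/* chain of item vectors (heap-owned in the real code) */
	list_add(&item.list, &chain);

	/* header goes on the front only for the duration of the write */
	list_add(&hdr.list, &chain);
	write_chain(&chain);

	/*
	 * Take the header back off before the chain is handed on, so the
	 * stack object cannot be reached after this function returns.
	 */
	list_del(&hdr.list);

	write_chain(&chain);	/* header no longer present */
	return 0;
}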