On Fri, Jul 08, 2022 at 11:55:53AM +1000, Dave Chinner wrote:
> 	lockdep_assert_held(&log->l_icloglock);
> @@ -544,7 +543,7 @@ xlog_state_release_iclog(
>  	if ((iclog->ic_state == XLOG_STATE_WANT_SYNC ||
>  	     (iclog->ic_flags & XLOG_ICL_NEED_FUA)) &&
>  	    !iclog->ic_header.h_tail_lsn) {
> -		tail_lsn = xlog_assign_tail_lsn(log->l_mp);
> +		xfs_lsn_t tail_lsn = atomic64_read(&log->l_tail_lsn);
>  		iclog->ic_header.h_tail_lsn = cpu_to_be64(tail_lsn);

Nit: I'd just do this as:

	iclog->ic_header.h_tail_lsn =
			cpu_to_be64(atomic64_read(&log->l_tail_lsn));

> +/*
> + * Callers should pass the the original tail lsn so that we can detect if the
> + * tail has moved as a result of the operation that was performed. If the caller
> + * needs to force a tail LSN update, it should pass NULLCOMMITLSN to bypass the
> + * "did the tail LSN change?" checks.
> + */

Should we also document the old_lsn == 0 case here?
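
Going back to the nit above: folded into the quoted hunk it would read roughly
like this (untested sketch; whether the braces can then be dropped depends on
the rest of the block, which isn't visible in the quoted context):

	if ((iclog->ic_state == XLOG_STATE_WANT_SYNC ||
	     (iclog->ic_flags & XLOG_ICL_NEED_FUA)) &&
	    !iclog->ic_header.h_tail_lsn) {
		/* sample the log tail and stamp it into the iclog header */
		iclog->ic_header.h_tail_lsn =
				cpu_to_be64(atomic64_read(&log->l_tail_lsn));
	}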