On Fri, Apr 16, 2021 at 05:10:23PM +0800, Gao Xiang wrote:
> There are many paths which could trigger xfs_log_sb(), e.g.
>   xfs_bmap_add_attrfork()
>     -> xfs_log_sb()
> , which overrides the on-disk fdblocks with the in-core per-CPU
> fdblocks.
>
> However, for !lazysbcount cases, the on-disk fdblocks is actually
> updated by xfs_trans_apply_sb_deltas(), and generally it isn't equal
> to the in-core fdblocks due to xfs_reserve_blocks() or the like; see
> the comment in xfs_unmountfs().
>
> It can be observed with the following steps reported by Zorro [1]:
>
> 1. mkfs.xfs -f -l lazy-count=0 -m crc=0 $dev
> 2. mount $dev $mnt
> 3. fsstress -d $mnt -p 100 -n 1000 (maybe need more or less io load)
> 4. umount $mnt
> 5. xfs_repair -n $dev
>
> Yet due to commit f46e5a174655 ("xfs: fold sbcount quiesce logging
> into log covering"), xfs_sync_sb() will be triggered even in the
> !lazysbcount but xfs_log_need_covered() case when xfs_unmountfs(),
> so this is hard to reproduce on kernel 5.12+.

I think this last paragraph could be rephrased, but I am not a native
English speaker either, so I can't say much. Maybe:

"xfs_sync_sb() will be triggered if no log covering is needed and
!lazysbcount."
> Reported-by: Zorro Lang <zlang@xxxxxxxxxx>
> Signed-off-by: Gao Xiang <hsiangkao@xxxxxxxxxx>
> ---
>  fs/xfs/libxfs/xfs_sb.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
> index 60e6d255e5e2..423dada3f64c 100644
> --- a/fs/xfs/libxfs/xfs_sb.c
> +++ b/fs/xfs/libxfs/xfs_sb.c
> @@ -928,7 +928,13 @@ xfs_log_sb(
>
>  	mp->m_sb.sb_icount = percpu_counter_sum(&mp->m_icount);
>  	mp->m_sb.sb_ifree = percpu_counter_sum(&mp->m_ifree);
> -	mp->m_sb.sb_fdblocks = percpu_counter_sum(&mp->m_fdblocks);
> +	if (!xfs_sb_version_haslazysbcount(&mp->m_sb)) {
> +		struct xfs_dsb	*dsb = bp->b_addr;
> +
> +		mp->m_sb.sb_fdblocks = be64_to_cpu(dsb->sb_fdblocks);
> +	} else {
> +		mp->m_sb.sb_fdblocks = percpu_counter_sum(&mp->m_fdblocks);
> +	}

The patch looks good to me, feel free to add:

Reviewed-by: Carlos Maiolino <cmaiolino@xxxxxxxxxx>

-- 
Carlos
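
[Editor's note, not part of the thread: the fix above hinges on reading
sb_fdblocks straight from the on-disk superblock buffer, which is stored
big-endian, hence the be64_to_cpu() call. As a minimal sketch of what
that conversion does, here is a hypothetical portable stand-in,
be64_to_cpu_sketch(); the real kernel helper comes from the byteorder
headers and may compile to a plain bswap or to nothing on big-endian
hosts.]

```c
#include <stdint.h>

/*
 * Hypothetical stand-in for the kernel's be64_to_cpu(): interpret the
 * bytes of a 64-bit on-disk (big-endian) value as a host-order integer.
 * Reading byte-by-byte from the most significant end makes the result
 * independent of the host's own endianness.
 */
static uint64_t be64_to_cpu_sketch(uint64_t be)
{
	const unsigned char *p = (const unsigned char *)&be;
	uint64_t v = 0;
	int i;

	for (i = 0; i < 8; i++)
		v = (v << 8) | p[i];
	return v;
}
```

Usage mirrors the patched line: a value whose raw on-disk bytes are
00 00 00 00 00 00 01 00 decodes to 256 on any host.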