On Wed, May 13, 2020 at 07:39:19AM +1000, Dave Chinner wrote:
> On Tue, May 12, 2020 at 09:03:52AM -0700, Darrick J. Wong wrote:
> > On Tue, May 12, 2020 at 12:59:49PM +1000, Dave Chinner wrote:
> > > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > > 
> > > It's a global atomic counter, and we are hitting it at a rate of
> > > half a million transactions a second, so it's bouncing the counter
> > > cacheline all over the place on large machines. Convert it to a
> > > per-cpu counter.
> > > 
> > > And .... oh wow, that was unexpected!
> > > 
> > > Concurrent create, 50 million inodes, identical 16p/16GB virtual
> > > machines on different physical hosts. Machine A has twice the CPU
> > > cores per socket of machine B:
> > > 
> > > 		unpatched	patched
> > > machine A:	3m45s		2m27s
> > > machine B:	4m13s		4m14s
> > > 
> > > Create rates:
> > > 		unpatched	patched
> > > machine A:	246k+/-15k	384k+/-10k
> > > machine B:	225k+/-13k	223k+/-11k
> > > 
> > > Concurrent rm of same 50 million inodes:
> > > 
> > > 		unpatched	patched
> > > machine A:	8m30s		3m09s
> > > machine B:	4m02s		4m51s
> > > 
> > > The transaction rate on the fast machine went from about 250k/sec to
> > > over 600k/sec, which indicates just how much of a bottleneck this
> > > atomic counter was.
> > > 
> > > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> > > ---
> > >  fs/xfs/xfs_mount.h | 2 +-
> > >  fs/xfs/xfs_super.c | 12 +++++++++---
> > >  fs/xfs/xfs_trans.c | 6 +++---
> > >  3 files changed, 13 insertions(+), 7 deletions(-)
> > > 
> > > diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
> > > index 712b3e2583316..af3d8b71e9591 100644
> > > --- a/fs/xfs/xfs_mount.h
> > > +++ b/fs/xfs/xfs_mount.h
> > > @@ -84,6 +84,7 @@ typedef struct xfs_mount {
> > >  	 * extents or anything related to the rt device.
> > >  	 */
> > >  	struct percpu_counter	m_delalloc_blks;
> > > +	struct percpu_counter	m_active_trans;	/* in progress xact counter */
> > >  
> > >  	struct xfs_buf		*m_sb_bp;	/* buffer for superblock */
> > >  	char			*m_rtname;	/* realtime device name */
> > > @@ -164,7 +165,6 @@ typedef struct xfs_mount {
> > >  	uint64_t		m_resblks;	/* total reserved blocks */
> > >  	uint64_t		m_resblks_avail;/* available reserved blocks */
> > >  	uint64_t		m_resblks_save;	/* reserved blks @ remount,ro */
> > > -	atomic_t		m_active_trans;	/* number trans frozen */
> > >  	struct xfs_mru_cache	*m_filestream;	/* per-mount filestream data */
> > >  	struct delayed_work	m_reclaim_work;	/* background inode reclaim */
> > >  	struct delayed_work	m_eofblocks_work; /* background eof blocks
> > > diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
> > > index e80bd2c4c279e..bc4853525ce18 100644
> > > --- a/fs/xfs/xfs_super.c
> > > +++ b/fs/xfs/xfs_super.c
> > > @@ -883,7 +883,7 @@ xfs_quiesce_attr(
> > >  	int	error = 0;
> > >  
> > >  	/* wait for all modifications to complete */
> > > -	while (atomic_read(&mp->m_active_trans) > 0)
> > > +	while (percpu_counter_sum(&mp->m_active_trans) > 0)
> > >  		delay(100);
> > 
> > Hmm. AFAICT, this counter stops us from quiescing the log while
> > transactions are still running. We only quiesce the log for unmount,
> > remount-ro, and fs freeze. Given that we now start_sb_write for
> > xfs_getfsmap and the background freeing threads, I wonder, do we still
> > need this at all?
> 
> Perhaps not - I didn't look that far. It's basically only needed to
> protect against XFS_TRANS_NO_WRITECOUNT transactions, which is
> really just xfs_sync_sb() these days. This can come from several
> places, but the only one outside of mount/freeze/unmount is the log
> worker. Perhaps the log worker can be cancelled before calling
> xfs_quiesce_attr() rather than after?

What if we skip bumping m_active_trans for NO_WRITECOUNT transactions?
There aren't that many of them, and it'd probably be better for memory
consumption on 1000-core systems. ;)

--D

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@xxxxxxxxxxxxx