Re: [PATCH 7/8 v2] xfs: journal IO cache flush reductions

On Thu, Feb 25, 2021 at 09:58:42AM +0100, Christoph Hellwig wrote:
> > As a result:
> > 
> > 	logbsize	fsmark create rate	rm -rf
> > before	32kb		152851+/-5.3e+04	5m28s
> > patched	32kb		221533+/-1.1e+04	5m24s
> > 
> > before	256kb		220239+/-6.2e+03	4m58s
> > patched	256kb		228286+/-9.2e+03	5m06s
> > 
> > The rm -rf times are included because I ran them, but the
> > differences are largely noise. This workload is largely metadata
> > read IO latency bound, and the changes to the journal cache flushing
> > don't really make any noticeable difference to behaviour apart from
> > a reduction in noiclog events from background CIL pushing.
> 
> The 256kb rm -rf case actually looks like a regression rather than
> noise here.  Does this reproduce over multiple runs?

It's noise. The unlink repeat times on this machine at 16 threads
are at least +/-15s because the removals are not synchronised in
groups like the creates are.

These are CPU bound workloads when the log is not limiting the
transaction rate (only the {before, 32kB} numbers in this test are
log IO bound), so there's always some variation in performance due to
non-deterministic factors like memory reclaim, AG lock-stepping
between threads, etc.

Hence there's a bit of unfairness between the threads and often the
first thread finishes 30s before the last thread. The times are for
the last thread completing and there can be significant variation on
that.

> > @@ -2009,13 +2010,14 @@ xlog_sync(
> >  	 * synchronously here; for an internal log we can simply use the block
> >  	 * layer state machine for preflushes.
> >  	 */
> > -	if (log->l_targ != log->l_mp->m_ddev_targp || split) {
> > +	if (log->l_targ != log->l_mp->m_ddev_targp ||
> > +	    (split && (iclog->ic_flags & XLOG_ICL_NEED_FLUSH))) {
> >  		xfs_flush_bdev(log->l_mp->m_ddev_targp->bt_bdev);
> > -		need_flush = false;
> > +		iclog->ic_flags &= ~XLOG_ICL_NEED_FLUSH;
> 
> Once you touch all the buffer flags anyway, we should optimize the
> log wraparound case here - instead of the synchronous flush we just
> need to set REQ_PREFLUSH on the first log bio, which should be nicely
> doable with your infrastructure.

That sounds like another patch because it's a change of behaviour.
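
As a rough illustration of what Christoph is suggesting - a sketch
only, not code from this series; the helper and variable names below
are hypothetical, and the real submission path sits under xlog_sync()
(shown in the hunk above) in fs/xfs/xfs_log.c:

#include <linux/bio.h>

/*
 * Sketch: instead of issuing a synchronous flush of the data device
 * when an iclog write wraps the end of the physical log, mark the
 * first bio of the wrapped write with REQ_PREFLUSH and let the block
 * layer order the cache flush for us.
 */
static void
xlog_prepare_split_write(
	struct bio	*first_bio,
	struct bio	*wrap_bio,
	bool		need_flush)
{
	/* Base operation for both halves of the wrapped log write. */
	first_bio->bi_opf = REQ_OP_WRITE | REQ_SYNC;
	wrap_bio->bi_opf = REQ_OP_WRITE | REQ_SYNC;

	/*
	 * If this iclog needs a cache flush (XLOG_ICL_NEED_FLUSH in
	 * the hunk above), attach it to the first bio as a preflush
	 * rather than draining the device before submission.
	 */
	if (need_flush)
		first_bio->bi_opf |= REQ_PREFLUSH;
}

The win would be the one the comment in the hunk points at: the block
layer state machine handles the preflush ordering, so we avoid a
blocking xfs_flush_bdev() call in the log write path.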

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


