Re: [PATCH 3/8] xfs: prevent CIL push holdoff in log recovery

On Thu, Sep 05, 2019 at 08:26:44AM -0700, Darrick J. Wong wrote:
> On Thu, Sep 05, 2019 at 06:47:12PM +1000, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > 
> > generic/530 on a machine with enough ram and a non-preemptible
> > kernel can run the AGI processing phase of log recovery entirely out
> > of cache. This means it never blocks on locks, never waits for IO
> > and runs entirely through the unlinked lists until it either
> > completes or blocks and hangs because it has run out of log space.
> > 
> > It runs out of log space because the background CIL push is
> > scheduled but never runs. queue_work() queues the CIL work on the
> > current CPU that is busy, and the workqueue code will not run it on
> > any other CPU. Hence if the unlinked list processing never yields
> > the CPU voluntarily, the push work is delayed indefinitely. This
> > results in the CIL aggregating changes until all the log space is
> > consumed.
> > 
> > When the log recovery processing eventually blocks, the CIL flushes
> > but because the last iclog isn't submitted for IO because it isn't
> > full, the CIL flush never completes and nothing ever moves the log
> > head forwards, or indeed inserts anything into the tail of the log,
> > and hence nothing is able to get the log moving again and recovery
> > hangs.
> > 
> > There are several problems here, but the two obvious ones from
> > the trace are that:
> > 	a) log recovery does not yield the CPU for over 4 seconds,
> > 	b) binding CIL pushes to a single CPU is a really bad idea.
> > 
> > This patch addresses just these two aspects of the problem, and is
> > suitable for backporting to work around any issues in older kernels.
> > The more fundamental problem of preventing the CIL from consuming
> > more than 50% of the log without committing will take more invasive
> > and complex work, so will be done as followup work.
> > 
> > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> > ---
> >  fs/xfs/xfs_log_recover.c | 1 +
> >  fs/xfs/xfs_super.c       | 3 ++-
> >  2 files changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
> > index f05c6c99c4f3..c9665455431e 100644
> > --- a/fs/xfs/xfs_log_recover.c
> > +++ b/fs/xfs/xfs_log_recover.c
> > @@ -5080,6 +5080,7 @@ xlog_recover_process_iunlinks(
> >  			while (agino != NULLAGINO) {
> >  				agino = xlog_recover_process_one_iunlink(mp,
> >  							agno, agino, bucket);
> > +				cond_resched();
> 
> Funny, I encountered a similar problem in the deferred inactivation
> series where iunlinked inodes marked for inactivation pile up until we
> OOM or stall in the log.  I solved it by kicking the inactivation
> workqueue and going to sleep every ~1000 inodes.

If the workqueue had already been kicked, then yielding with
cond_resched() would probably be enough to avoid that.

I think I'm going to have a bit of a look at our use of workqueues -
I didn't realise that the default behaviour of the workqueues was
"cannot run work unless the CPU is yielded" - it kinda makes the "do
async work by workqueue" model somewhat problematic if the work
queued by a single CPU can only be run on that same CPU instead of
concurrently across all idle CPUs...
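For anyone following along, the difference between the two workqueue
flavours is just the allocation flag. A sketch against the in-kernel
workqueue API (this is a non-runnable kernel fragment for illustration;
"example" and the surrounding variables are hypothetical, not from the
patch):

```c
/* Default (per-CPU) workqueue: a work item queued from CPU N is
 * executed by a worker bound to CPU N, so it cannot start until the
 * submitting CPU schedules - a CPU-bound submitter delays it
 * indefinitely on a non-preemptible kernel. */
wq = alloc_workqueue("example", WQ_MEM_RECLAIM | WQ_FREEZABLE,
		     0, name);

/* WQ_UNBOUND workqueue: work items go to an unbound worker pool and
 * can run immediately on any idle CPU, so a busy submitting CPU no
 * longer holds the queued work hostage. */
wq = alloc_workqueue("example", WQ_MEM_RECLAIM | WQ_FREEZABLE |
		     WQ_UNBOUND, 0, name);
```

That is the whole of the xfs_super.c hunk below: adding WQ_UNBOUND to
the CIL workqueue so the background push can run concurrently with log
recovery.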

> >  			}
> >  		}
> >  		xfs_buf_rele(agibp);
> > diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
> > index f9450235533c..55a268997bde 100644
> > --- a/fs/xfs/xfs_super.c
> > +++ b/fs/xfs/xfs_super.c
> > @@ -818,7 +818,8 @@ xfs_init_mount_workqueues(
> >  		goto out_destroy_buf;
> >  
> >  	mp->m_cil_workqueue = alloc_workqueue("xfs-cil/%s",
> > -			WQ_MEM_RECLAIM|WQ_FREEZABLE, 0, mp->m_fsname);
> > +			WQ_MEM_RECLAIM|WQ_FREEZABLE|WQ_UNBOUND,
> 
> More stupid nits: spaces between the "|".

It's the same as the rest of the code in that function.

Fixed anyway.

-Dave.

-- 
Dave Chinner
david@xxxxxxxxxxxxx
