On Thu, Sep 05, 2019 at 05:15:50PM -0700, Darrick J. Wong wrote:
> On Fri, Sep 06, 2019 at 10:05:48AM +1000, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > 
> > generic/530 on a machine with enough RAM and a non-preemptible
> > kernel can run the AGI processing phase of log recovery entirely out
> > of cache. This means it never blocks on locks, never waits for IO
> > and runs entirely through the unlinked lists until it either
> > completes or blocks and hangs because it has run out of log space.
> > 
> > It runs out of log space because the background CIL push is
> > scheduled but never runs. queue_work() queues the CIL work on the
> > current CPU, which is busy, and the workqueue code will not run it
> > on any other CPU. Hence if the unlinked list processing never yields
> > the CPU voluntarily, the push work is delayed indefinitely. This
> > results in the CIL aggregating changes until all the log space is
> > consumed.
> > 
> > When the log recovery processing eventually blocks, the CIL
> > flushes, but because the last iclog isn't submitted for IO (it isn't
> > full), the CIL flush never completes and nothing ever moves the log
> > head forwards, or indeed inserts anything into the tail of the log,
> > and hence nothing is able to get the log moving again and recovery
> > hangs.
> > 
> > There are several problems here, but the two obvious ones from
> > the trace are that:
> > a) log recovery does not yield the CPU for over 4 seconds,
> > b) binding CIL pushes to a single CPU is a really bad idea.
> > 
> > This patch addresses just these two aspects of the problem, and is
> > suitable for backporting to work around any issues in older kernels.
> > The more fundamental problem of preventing the CIL from consuming
> > more than 50% of the log without committing will take more invasive
> > and complex work, so will be done as followup work.
> > 
> > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> > ---
> >  fs/xfs/xfs_log_recover.c | 1 +
> >  fs/xfs/xfs_super.c       | 3 ++-
> >  2 files changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
> > index f05c6c99c4f3..c9665455431e 100644
> > --- a/fs/xfs/xfs_log_recover.c
> > +++ b/fs/xfs/xfs_log_recover.c
> > @@ -5080,6 +5080,7 @@ xlog_recover_process_iunlinks(
> >  		while (agino != NULLAGINO) {
> >  			agino = xlog_recover_process_one_iunlink(mp,
> >  							agno, agino, bucket);
> > +			cond_resched();
> 
> <urk> Now I wish I'd asked for a comment explaining why we
> cond_resched()....
> 
> 	/* Don't let other workqueues (including the CIL ones) starve. */

That doesn't really tell us why we are doing it, just what the effect
is. And it will starve more than workqueues - anything that is not
scheduled as a soft or hard interrupt will be held off.

I'd put something like this in the comment describing
xlog_recover_process_iunlinks():

....
 *
 * If everything we touch in the agi processing loop is already in
 * memory, this loop can hold the cpu for a long time. It runs
 * without lock contention, memory allocation contention, the need
 * to wait for IO, etc, and so will run until we either run out of
 * inodes to process or we run out of log space. This is bad for
 * latency on single CPU and non-preemptible kernels, and can
 * prevent other filesystem work (such as CIL pushes) from running.
 * This can lead to deadlocks when it runs out of log reservation
 * space. Hence we need to yield the CPU periodically when there is
 * other kernel work scheduled on this CPU to ensure other scheduled
 * work can run without undue latency.
 */

> (Dunno if you want to respin or if I'll just end up fixing it on the way
> in...)

The whole comment needs reformatting, so I'll respin it....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
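[For reference, the respun call site would presumably tie the yield back to
that function comment along these lines - a sketch only, the final wording
and placement are up to the respin:]

```c
		while (agino != NULLAGINO) {
			agino = xlog_recover_process_one_iunlink(mp,
							agno, agino, bucket);
			/*
			 * Yield the CPU periodically so other kernel work
			 * scheduled on this CPU (such as the CIL push
			 * worker) can run without undue latency. See the
			 * comment above xlog_recover_process_iunlinks().
			 */
			cond_resched();
		}
```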