Re: [PATCH v2] xfs: replace global xfslogd wq with per-mount xfs-iodone wq

On Wed, Nov 12, 2014 at 02:37:51PM -0500, Brian Foster wrote:
> On Wed, Nov 12, 2014 at 10:12:46AM +1100, Dave Chinner wrote:
> > On Tue, Nov 11, 2014 at 11:13:13AM -0500, Brian Foster wrote:
> > > The xfslogd workqueue is a global, single-job workqueue for buffer ioend
> > > processing. This means we allow for a single work item at a time for all
> > > possible XFS mounts on a system. fsstress testing in loopback XFS over
> > > XFS configurations has reproduced xfslogd deadlocks due to the single
> > > threaded nature of the queue and dependencies introduced between the
> > > separate XFS instances by online discard (-o discard).
....
> > > I've left the wq in xfs_mount rather than moved to the buftarg in this
> > > version due to the questions expressed here:
> > > 
> > > http://oss.sgi.com/archives/xfs/2014-11/msg00117.html
> > 
> > <sigh>
> > 
> > Another email from you that hasn't reached my inbox. That's two in a
> > week now, I think.
> > 
> > > ... particularly around the potential creation of multiple (of what is
> > > now) max_active=1 queues per-fs.
> > 
> > So concern #1 is that it splits log buffer versus metadata buffer
> > processing to different work queues causing concurrent processing.
...
> > The xfslogd workqueue is tagged with WQ_HIGHPRI only to expedite the
> > log buffer io completions over XFS data io completions that may get
> > stuck waiting on log forces. i.e. the xfslogd_workqueue needs
> > higher priority than m_data_workqueue and m_unwritten_workqueue as
> > they can require log forces to complete their work. Hence if we
> > separate out the log buffer io completion processing from the
> > metadata IO completion processing we don't need to process all the
> > metadata buffer IO completion as high priority work anymore.
> > 
> 
> Ok, thanks. I didn't notice an explicit relationship between either of
> those queues and xfslogd. Is the dependency implicit in that those
> queues do transaction reservations, and thus can push on the log via the
> AIL (and if so, why wouldn't the cil queue be higher priority as well)?

IIRC, the main dependency problem we found had to do with data IO
completion on a loop device getting stuck waiting on log IO completion
on the backing device, which was itself stuck in a dispatch queue behind
more blocked completions on the loop device. Using WQ_HIGHPRI meant
they didn't get stuck in dispatch queues behind other queued work -
they got dispatched immediately....
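
To make that concrete, the split gives log buffer IO completion its own
high priority per-mount queue, roughly along these lines (just a sketch -
the queue name and the m_log_workqueue field are illustrative here, not
necessarily what the final patch uses):

	/*
	 * Illustrative sketch only: a dedicated WQ_HIGHPRI queue for
	 * log buffer IO completion, so it gets dispatched ahead of
	 * normal-priority completion work and can't sit in a dispatch
	 * queue behind data IO completions that are themselves blocked
	 * waiting on a log force.
	 */
	mp->m_log_workqueue = alloc_workqueue("xfs-log/%s",
			WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_HIGHPRI, 0,
			mp->m_fsname);
	if (!mp->m_log_workqueue)
		return -ENOMEM;	/* error unwinding elided */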

> > Concern #2 is about the reason for max_active=1 and being
> > unclear as to why we only want a single completion active at a
> > time on a CPU.  The reason for this is that most metadata and
> > log buffer IO completion work does not sleep - they only ever
> > take spinlocks and so there are no built in schedule points
> > during work processing. Hence it is rare to need a second worker
> > thread to process the queue because the first is blocked on a
> > sleeping lock and so max-active=1 makes sense. In comparison,
> > the data/unwritten io completion processing is very different
> > due to needing to take sleeping inode locks, buffer locks, etc.,
> > and hence they use the wq default for max active (512).
> > 
> 
> Ok, max_active sounds more like a hint to the workqueue
> infrastructure in this usage. E.g., there's no hard rule against
> activation of more than one item, it's just of questionable value.

Right. ISTR that there were worse lock contention problems on the
AIL and iclog locks when more concurrency was introduced, so it was
just kept down to the minimum required.
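
FWIW, the per-mount queue that replaces xfslogd then ends up looking
something like the below (sketch only - the m_buf_workqueue name is
illustrative, and the flags are whatever the final patch settles on):

	/*
	 * Illustrative sketch of the per-mount buffer ioend workqueue.
	 * max_active = 1 because the completion handlers mostly just
	 * take spinlocks and don't sleep, so extra concurrency buys
	 * little and makes AIL/iclog lock contention worse.
	 */
	mp->m_buf_workqueue = alloc_workqueue("xfs-iodone/%s",
			WQ_MEM_RECLAIM | WQ_FREEZABLE, 1,
			mp->m_fsname);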

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
