Re: [PATCH v2.1 1/3] xfs: increase the default parallelism levels of pwork clients

On Wed, Jan 27, 2021 at 07:46:12AM +1100, Dave Chinner wrote:
> On Mon, Jan 25, 2021 at 09:04:52PM -0800, Darrick J. Wong wrote:
> > From: Darrick J. Wong <djwong@xxxxxxxxxx>
> > 
> > Increase the parallelism level for pwork clients to the workqueue
> > defaults so that we can take advantage of computers with a lot of CPUs
> > and a lot of hardware.  On fast systems this will speed up quotacheck by
> > a large factor, and the following posteof/cowblocks cleanup series will
> > use the functionality presented in this patch to run garbage collection
> > as quickly as possible.
> > 
> > We do this by switching the pwork workqueue to unbounded, since the
> > current user (quotacheck) runs lengthy scans for each work item and we
> > don't care about dispatching the work on a warm cpu cache or anything
> > like that.  Also set WQ_SYSFS so that we can monitor where the wq is
> > running.
> > 
> > Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> > ---
> > v2.1: document the workqueue knobs, kill the nr_threads argument to
> > pwork, and convert it to unbounded all in one patch
> > ---
> >  Documentation/admin-guide/xfs.rst |   33 +++++++++++++++++++++++++++++++++
> >  fs/xfs/xfs_iwalk.c                |    5 +----
> >  fs/xfs/xfs_pwork.c                |   25 +++++--------------------
> >  fs/xfs/xfs_pwork.h                |    4 +---
> >  4 files changed, 40 insertions(+), 27 deletions(-)
> > 
> > diff --git a/Documentation/admin-guide/xfs.rst b/Documentation/admin-guide/xfs.rst
> > index 86de8a1ad91c..5fd14556c6fe 100644
> > --- a/Documentation/admin-guide/xfs.rst
> > +++ b/Documentation/admin-guide/xfs.rst
> > @@ -495,3 +495,36 @@ the class and error context. For example, the default values for
> >  "metadata/ENODEV" are "0" rather than "-1" so that this error handler defaults
> >  to "fail immediately" behaviour. This is done because ENODEV is a fatal,
> >  unrecoverable error no matter how many times the metadata IO is retried.
> > +
> > +Workqueue Concurrency
> > +=====================
> > +
> > +XFS uses kernel workqueues to parallelize metadata update processes.  This
> > +enables it to take advantage of storage hardware that can service many IO
> > +operations simultaneously.
> > +
> > +The control knobs for a filesystem's workqueues are organized by the task at
> > +hand and the short name of the data device.  They can all be found in:
> > +
> > +  /sys/bus/workqueue/devices/${task}!${device}
> > +
> > +================  ===========
> > +  Task            Description
> > +================  ===========
> > +  xfs_iwalk-$pid  Inode scans of the entire filesystem. Currently limited to
> > +                  mount time quotacheck.
> > +================  ===========
> > +
> > +For example, the knobs for the quotacheck workqueue for /dev/nvme0n1 would be
> > +found in /sys/bus/workqueue/devices/xfs_iwalk-1111!nvme0n1/.
> > +
> > +The interesting knobs for XFS workqueues are as follows:
> > +
> > +============     ===========
> > +  Knob           Description
> > +============     ===========
> > +  max_active     Maximum number of background threads that can be started to
> > +                 run the work.
> > +  cpumask        CPUs upon which the threads are allowed to run.
> > +  nice           Relative priority of scheduling the threads.  These are the
> > +                 same nice levels that can be applied to userspace processes.
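[Editor's note: as a concrete usage sketch (the pid 1111 and the nvme0n1
device name are the placeholder values from the example above, not real
ones), a tiny C program could cap the quotacheck workqueue at 8 concurrent
workers by writing to max_active:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            /* Placeholder path; the directory name varies by pid and device. */
            const char *knob =
                    "/sys/bus/workqueue/devices/xfs_iwalk-1111!nvme0n1/max_active";
            int fd = open(knob, O_WRONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* Limit the workqueue to at most 8 concurrent workers. */
            if (write(fd, "8", 1) != 1) {
                    perror("write");
                    close(fd);
                    return 1;
            }
            return close(fd) ? 1 : 0;
    }

Writing to the cpumask and nice attributes works the same way.]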
> 
> I'd suggest that a comment be added here along the lines of:
> 
> This interface exposes an internal implementation detail of XFS, and
> as such is explicitly not part of any userspace API/ABI guarantee
> the kernel may give userspace. These are undocumented features of
> the generic workqueue implementation XFS uses for concurrency, and
> they are provided here purely for diagnostic and tuning purposes and
> may change at any time in the future.
> 
> Otherwise looks ok.

Done, thanks for taking a look at this.

--D

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@xxxxxxxxxxxxx


