Re: [PATCH] xfs_repair: multithread phase 2

On Tue, Jan 04, 2011 at 05:02:40AM -0500, Christoph Hellwig wrote:
> > This patch uses 32-way threading which results in no noticeable
> > slowdown on single SATA drives with NCQ, but results in ~10x
> > reduction in runtime on a 12 disk RAID-0 array.
> 
> Shouldn't we have at least an option to allow tuning this value,
> similar to the ag_stride?  In fact I wonder why phase 3/4 should
> use different values for it than phase 2.

Phase 3/4/5 use aggressive prefetch to try to maximise throughput,
while phase 2 has no prefetch and uses synchronous reads.
Effectively, the heavy parallelism simply keeps multiple IOs in
flight rather than issuing them one at a time, hence reducing the
effective IO latency.
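
To put illustrative numbers on it: at ~10ms per synchronous read
and, say, 1000 AGs, a serial scan spends ~10s just waiting on the
disk. With 32 reads in flight on storage that can service them
concurrently, that wait drops towards 10s/32, i.e. ~0.3s. A single
NCQ SATA drive can't overlap much of that, which is why it's a wash
there but ~10x on the 12 disk array.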

> 
> > @@ -75,8 +80,10 @@ scan_sbtree(
> >  				xfs_agblock_t		bno,
> >  				xfs_agnumber_t		agno,
> >  				int			suspect,
> > -				int			isroot),
> > -	int		isroot)
> > +				int			isroot,
> > +				struct aghdr_cnts	*agcnts),
> > +	int		isroot,
> > +	struct aghdr_cnts *agcnts)
> 
> Please make this a
> 
> 	void *priv
> 
> to keep scan_sbtree generic.

OK.
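
That makes the prototype something like this (sketch only, argument
list trimmed to the relevant parts):

	void
	scan_sbtree(
		xfs_agblock_t	root,
		...
		void		(*func)(struct xfs_btree_block	*block,
					int			level,
					xfs_agblock_t		bno,
					xfs_agnumber_t		agno,
					int			suspect,
					int			isroot,
					void			*priv),
		int		isroot,
		void		*priv)

with the phase 2 callbacks recovering their context via:

	struct aghdr_cnts	*agcnts = priv;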

> >   * Scan an AG for obvious corruption.
> >   *
> >   * Note: This code is not reentrant due to the use of global variables.
> 
> That's not true any more I think.

Good point.

> > +#define SCAN_THREADS 32
> > +
> > +void
> > +scan_ags(
> > +	struct xfs_mount	*mp)
> > +{
> > +	struct aghdr_cnts agcnts[mp->m_sb.sb_agcount];
> > +	pthread_t	thr[SCAN_THREADS];
> > +	__uint64_t	fdblocks = 0;
> > +	__uint64_t	icount = 0;
> > +	__uint64_t	ifreecount = 0;
> > +	int		i, j, err;
> > +
> > +	/*
> > +	 * scan a few AGs in parallel. The scan is IO latency bound,
> > +	 * so running a few at a time will speed it up significantly.
> > +	 */
> > +	for (i = 0; i < mp->m_sb.sb_agcount; i += SCAN_THREADS) {
> 
> I think this should use the workqueues from repair/threads.c.  Just
> create a workqueue with 32 threads, and then enqueue all the AGs.

OK. I just used an API I'm familiar with and didn't have to think
about.
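
For the archives, that makes scan_ags() look something like this (a
sketch against the repair/threads.c interface as I read it; scan_ag
here stands in for the per-AG worker once it's adjusted to the
work_func_t signature, recovering its aghdr_cnts via the void *
argument):

	void
	scan_ags(
		struct xfs_mount	*mp)
	{
		struct aghdr_cnts	agcnts[mp->m_sb.sb_agcount];
		struct work_queue	wq;
		xfs_agnumber_t		agno;

		create_work_queue(&wq, mp, SCAN_THREADS);
		for (agno = 0; agno < mp->m_sb.sb_agcount; agno++)
			queue_work(&wq, scan_ag, agno, &agcnts[agno]);
		/* destroy_work_queue() waits for all queued work */
		destroy_work_queue(&wq);

		/* then sum the per-ag counts as before */
	}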

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
