Re: [PATCH 05/10] repair: factor out threading setup code

On Tue, Feb 25, 2014 at 10:16:20AM +1100, Dave Chinner wrote:
> On Mon, Feb 24, 2014 at 03:43:05PM -0500, Brian Foster wrote:
> > On Mon, Feb 24, 2014 at 05:29:24PM +1100, Dave Chinner wrote:
> > > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > > 
> > > The same code is repeated in different places to set up
> > > multithreaded prefetching. This can all be factored into a single
> > > implementation.
> > > 
> > > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> ....
> > >  static void
> > >  traverse_ags(
> > > -	xfs_mount_t 		*mp)
> > > +	struct xfs_mount	*mp)
> > >  {
> > > -	int			i;
> > > -	work_queue_t		queue;
> > > -	prefetch_args_t		*pf_args[2];
> > > -
> > > -	/*
> > > -	 * we always do prefetch for phase 6 as it will fill in the gaps
> > > -	 * not read during phase 3 prefetch.
> > > -	 */
> > > -	queue.mp = mp;
> > > -	pf_args[0] = start_inode_prefetch(0, 1, NULL);
> > > -	for (i = 0; i < glob_agcount; i++) {
> > > -		pf_args[(~i) & 1] = start_inode_prefetch(i + 1, 1,
> > > -				pf_args[i & 1]);
> > > -		traverse_function(&queue, i, pf_args[i & 1]);
> > > -	}
> > > +	do_inode_prefetch(mp, 0, traverse_function, true, true);
> > 
> > The cover letter indicates the parallelization of phase 6 was dropped,
> > but this appears to (conditionally) enable it.
> 
> No, it enables prefetch, it does not enable threading. The second
> parameter is "0", which means that do_inode_prefetch() executes the
> single-threaded prefetch walk like the above code, i.e.:
> 
> > > + */
> > > +void
> > > +do_inode_prefetch(
> > > +	struct xfs_mount	*mp,
> > > +	int			stride,
> 
> stride = 0
> 
> > > +	void			(*func)(struct work_queue *,
> > > +					xfs_agnumber_t, void *),
> > > +	bool			check_cache,
> > > +	bool			dirs_only)
> > > +{
> > > +	int			i, j;
> > > +	xfs_agnumber_t		agno;
> > > +	struct work_queue	queue;
> > > +	struct work_queue	*queues;
> > > +	struct prefetch_args	*pf_args[2];
> > > +
> > > +	/*
> > > +	 * If the previous phases of repair have not overflowed the buffer
> > > +	 * cache, then we don't need to re-read any of the metadata in the
> > > +	 * filesystem - it's all in the cache. In that case, run a thread per
> > > +	 * CPU to maximise parallelism of the queue to be processed.
> > > +	 */
> > > +	if (check_cache && !libxfs_bcache_overflowed()) {
> > > +		queue.mp = mp;
> > > +		create_work_queue(&queue, mp, libxfs_nproc());
> > > +		for (i = 0; i < mp->m_sb.sb_agcount; i++)
> > > +			queue_work(&queue, func, i, NULL);
> > > +		destroy_work_queue(&queue);
> > > +		return;
> > > +	}
> > > +
> > > +	/*
> > > +	 * single threaded behaviour - single prefetch thread, processed
> > > +	 * directly after each AG is queued.
> > > +	 */
> > > +	if (!stride) {
> > > +		queue.mp = mp;
> > > +		pf_args[0] = start_inode_prefetch(0, dirs_only, NULL);
> > > +		for (i = 0; i < mp->m_sb.sb_agcount; i++) {
> > > +			pf_args[(~i) & 1] = start_inode_prefetch(i + 1,
> > > +					dirs_only, pf_args[i & 1]);
> > > +			func(&queue, i, pf_args[i & 1]);
> > > +		}
> > > +		return;
> > > +	}
> 
> So we run this "!stride" code. Hmmmm - maybe you are commenting on
> the "check_cache" code? I probably should prevent that from
> triggering, too.
> 

Sorry, I could have been clearer there. Yes, I'm referring
specifically to passing check_cache as true.
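
To illustrate the distinction, here's a minimal standalone sketch of
the shape the check_cache branch takes, i.e. one worker per CPU
draining a shared queue of AG numbers. This is plain pthreads with
made-up stand-ins (AGCOUNT for mp->m_sb.sb_agcount, process_ag() for
func()), not the real create_work_queue()/queue_work() helpers:

	/*
	 * Sketch only: one worker thread per CPU pulls AG numbers
	 * off a shared counter until all AGs are processed.
	 */
	#include <pthread.h>
	#include <stdio.h>
	#include <unistd.h>

	#define AGCOUNT		8	/* stand-in for sb_agcount */

	static pthread_mutex_t	ag_lock = PTHREAD_MUTEX_INITIALIZER;
	static int		next_ag;

	static void
	process_ag(
		int		agno)
	{
		/* stand-in for func(&queue, agno, NULL) */
		printf("worker %lu: AG %d\n",
			(unsigned long)pthread_self(), agno);
	}

	static void *
	worker(
		void		*arg)
	{
		for (;;) {
			int	agno;

			pthread_mutex_lock(&ag_lock);
			agno = next_ag++;
			pthread_mutex_unlock(&ag_lock);

			if (agno >= AGCOUNT)
				return NULL;
			process_ag(agno);
		}
	}

	int
	main(void)
	{
		int		nproc = sysconf(_SC_NPROCESSORS_ONLN);
		pthread_t	tid[16];
		int		i;

		if (nproc > 16)
			nproc = 16;
		for (i = 0; i < nproc; i++)
			pthread_create(&tid[i], NULL, worker, NULL);
		for (i = 0; i < nproc; i++)
			pthread_join(tid[i], NULL);
		return 0;
	}

(Builds with cc -pthread.) The !stride branch, by contrast, never
leaves the calling thread: the pf_args[(~i) & 1] / pf_args[i & 1]
pair just ping-pongs between two prefetch slots so that AG i + 1 is
being prefetched while AG i is processed.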

Brian

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs