Re: [PATCH 4/4] xfs: convert xfsbufd to use a workqueue

> -STATIC void xfs_buf_delwri_queue(xfs_buf_t *, int);
> +STATIC void xfs_buf_delwri_queue(xfs_buf_t *bp, int unlock);

Spuriously changing this prototype just makes merging with other
pending changes in this area harder :P

>  /*
> + * If we are doing a forced flush, then we need to wait for the IO that we
> + * issue to complete.
>   */
> +static void
> +xfs_buf_delwri_work(
> +	struct work_struct *work)
>  {
> +	struct xfs_buftarg *btp = container_of(to_delayed_work(work),
> +					struct xfs_buftarg, bt_delwrite_work);
> +	struct xfs_buf	*bp;
> +	struct blk_plug	plug;
>  	LIST_HEAD(tmp_list);
>  	LIST_HEAD(wait_list);
> +	long		age = xfs_buf_age_centisecs * msecs_to_jiffies(10);
> +	int		force = 0;
>  
> +	force = test_and_clear_bit(XBT_FORCE_FLUSH, &btp->bt_flags);
>  
> +	xfs_buf_delwri_split(btp, &tmp_list, age, force);
>  	list_sort(NULL, &tmp_list, xfs_buf_cmp);
>  
>  	blk_start_plug(&plug);
>  	while (!list_empty(&tmp_list)) {
>  		bp = list_first_entry(&tmp_list, struct xfs_buf, b_list);
> -		ASSERT(target == bp->b_target);
>  		list_del_init(&bp->b_list);
> -		if (wait) {
> +		if (force) {
>  			bp->b_flags &= ~XBF_ASYNC;
>  			list_add(&bp->b_list, &wait_list);
>  		}
> @@ -1634,7 +1577,7 @@ xfs_flush_buftarg(
>  	}
>  	blk_finish_plug(&plug);
>  
> +	if (force) {
>  		/* Wait for IO to complete. */
>  		while (!list_empty(&wait_list)) {
>  			bp = list_first_entry(&wait_list, struct xfs_buf, b_list);
> @@ -1645,7 +1588,48 @@ xfs_flush_buftarg(
>  		}
>  	}
>  

> +/*
> + *	Handling of buffer targets (buftargs).
> + */

I think we can just kill this comment.

> +/*
> + * Flush all the queued buffer work, then flush any remaining dirty buffers
> + * and wait for them to complete. If there are buffers remaining on the delwri
> + * queue, then they were pinned so couldn't be flushed. Return a value of 1 to
> + * indicate that there were pinned buffers and the caller needs to retry the
> + * flush.
> + */

Not directly related to your patch, but only one caller ever checks the
return value and retries.  This means that e.g. during sync or unmount
we don't bother trying to push pinned buffers.
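
For reference, a retrying caller basically has to force the log so the
pinned buffers get unpinned before flushing again, roughly along these
lines (sketch only, not what any current caller literally does):

	int	pincount;

	do {
		/* unpin the buffers by forcing the log, then reflush */
		xfs_log_force(mp, XFS_LOG_SYNC);
		pincount = xfs_flush_buftarg(mp->m_ddev_targp, 1);
	} while (pincount);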

> +int
> +xfs_flush_buftarg(
> +	xfs_buftarg_t	*target,

Please use the non-typedef version in new or largely changed code.
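
That is, spell it out as the plain struct:

	int
	xfs_flush_buftarg(
		struct xfs_buftarg	*target,
		...)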

> index 13188df..a3d1784 100644
> --- a/fs/xfs/xfs_trans_ail.c
> +++ b/fs/xfs/xfs_trans_ail.c
> @@ -494,7 +494,7 @@ xfs_ail_worker(
>  
>  	if (push_xfsbufd) {
>  		/* we've got delayed write buffers to flush */
> -		wake_up_process(mp->m_ddev_targp->bt_task);
> +		flush_delayed_work(&mp->m_ddev_targp->bt_delwrite_work);

This is a huge change in behaviour.  wake_up_process just kicks the
thread so it wakes from sleep as soon as the scheduler selects it,
while flush_delayed_work not only queues any pending delayed work, but
also waits for it to finish.
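
If the intent here is just to expedite the delwri work without blocking
the AIL push, something non-blocking along these lines would be much
closer to the old behaviour (rough sketch only; xfs_buf_wq stands in
for whatever workqueue the patch queues bt_delwrite_work on):

	/*
	 * Kick the delwri work to run now instead of waiting for it:
	 * drop any pending delayed instance and requeue with no delay.
	 */
	cancel_delayed_work(&mp->m_ddev_targp->bt_delwrite_work);
	queue_delayed_work(xfs_buf_wq,
			   &mp->m_ddev_targp->bt_delwrite_work, 0);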

