Re: [md PATCH 04/22] md: support barrier requests on all personalities.

On 17:48, NeilBrown wrote:
> When a barrier arrives, we send a zero-length barrier to every active
> device.  When that completes - and if the original request was not
> empty -  we submit the barrier request itself (with the barrier flag
> cleared) and the submit a fresh load of zero length barriers.

s/the/then/

> +/*
> + * Generic barrier handling for md
> + */
> +
> +static void md_end_barrier(struct bio *bio, int err)
> +{
> +	mdk_rdev_t *rdev = bio->bi_private;
> +	mddev_t *mddev = rdev->mddev;
> +	if (err == -EOPNOTSUPP && mddev->barrier != (void*)1)

How about

	#define POST_REQUEST_BARRIER ((void*)1)

?
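
With that, the test would read (just a sketch, using the name
proposed above):

	if (err == -EOPNOTSUPP && mddev->barrier != POST_REQUEST_BARRIER)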

> +		rcu_read_lock();
> +		list_for_each_entry_rcu(rdev, &mddev->disks, same_set)
> +			if (rdev->raid_disk >= 0 &&
> +			    !test_bit(Faulty, &rdev->flags)) {
> +				/* Take two references, one is dropped
> +				 * when request finishes, one after
> +				 * we reclaim rcu_read_lock
> +				 */
> +				struct bio *bi;
> +				atomic_inc(&rdev->nr_pending);
> +				atomic_inc(&rdev->nr_pending);
> +				rcu_read_unlock();
> +				bi = bio_alloc(GFP_KERNEL, 0);
> +				bi->bi_end_io = md_end_barrier;
> +				bi->bi_private = rdev;
> +				bi->bi_bdev = rdev->bdev;
> +				atomic_inc(&mddev->flush_pending);
> +				submit_bio(WRITE_BARRIER, bi);
> +				rcu_read_lock();
> +				rdev_dec_pending(rdev, mddev);
> +			}
> +		rcu_read_unlock();

Calling atomic_inc() twice is no longer a single atomic operation.
If this doesn't matter (because all modifications of rdev->nr_pending
are supposed to happen within RCU read-side critical sections), then
why is rdev->nr_pending an atomic_t at all?
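
If taking both references at once is intended, atomic_add() would at
least make it a single atomic update (just a suggestion):

	atomic_add(2, &rdev->nr_pending);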

> +void md_barrier_request(mddev_t *mddev, struct bio *bio)
> +{
> +	mdk_rdev_t *rdev;
> +
> +	spin_lock_irq(&mddev->write_lock);
> +	wait_event_lock_irq(mddev->sb_wait,
> +			    !mddev->barrier,
> +			    mddev->write_lock, /*nothing*/);
> +	mddev->barrier = bio;
> +	spin_unlock_irq(&mddev->write_lock);
> +
> +	atomic_set(&mddev->flush_pending, 1);
> +	INIT_WORK(&mddev->barrier_work, md_submit_barrier);
> +
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(rdev, &mddev->disks, same_set)
> +		if (rdev->raid_disk >= 0 &&
> +		    !test_bit(Faulty, &rdev->flags)) {
> +			struct bio *bi;
> +
> +			atomic_inc(&rdev->nr_pending);
> +			atomic_inc(&rdev->nr_pending);
> +			rcu_read_unlock();
> +			bi = bio_alloc(GFP_KERNEL, 0);
> +			bi->bi_end_io = md_end_barrier;
> +			bi->bi_private = rdev;
> +			bi->bi_bdev = rdev->bdev;
> +			atomic_inc(&mddev->flush_pending);
> +			submit_bio(WRITE_BARRIER, bi);
> +			rcu_read_lock();
> +			rdev_dec_pending(rdev, mddev);
> +		}
> +	rcu_read_unlock();

This loop is identical to the one above, so it might make sense
to put it into a separate function.
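Something like the following, perhaps (the function name is only a
suggestion):

	static void submit_barriers(mddev_t *mddev)
	{
		mdk_rdev_t *rdev;

		rcu_read_lock();
		list_for_each_entry_rcu(rdev, &mddev->disks, same_set)
			if (rdev->raid_disk >= 0 &&
			    !test_bit(Faulty, &rdev->flags)) {
				/* Take two references, one is dropped
				 * when the request finishes, one after
				 * we reclaim the rcu_read_lock.
				 */
				struct bio *bi;

				atomic_inc(&rdev->nr_pending);
				atomic_inc(&rdev->nr_pending);
				rcu_read_unlock();
				bi = bio_alloc(GFP_KERNEL, 0);
				bi->bi_end_io = md_end_barrier;
				bi->bi_private = rdev;
				bi->bi_bdev = rdev->bdev;
				atomic_inc(&mddev->flush_pending);
				submit_bio(WRITE_BARRIER, bi);
				rcu_read_lock();
				rdev_dec_pending(rdev, mddev);
			}
		rcu_read_unlock();
	}

Both md_end_barrier() and md_barrier_request() could then simply call
submit_barriers(mddev).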

Regards
Andre
-- 
The only person who always got his work done by Friday was Robinson Crusoe
