Re: [PATCH V3 2/8] md: add io accounting for raid0 and raid5

On Tue, May 25, 2021 at 05:46:17PM +0800, Guoqing Jiang wrote:
> We introduce a new bioset (io_acct_set) for raid0 and raid5 since they
> don't have their own clone infrastructure for accounting io. The bioset is
> added to mddev instead of to the raid0 and raid5 layers so that the common
> helpers can live in md.h and be reused by both personalities.
> 
> Also, struct md_io_acct is added accordingly; it holds the io start_time,
> the original bio and the cloned bio. We can then call bio_{start,end}_io_acct
> to record the related io statistics.
> 
> Signed-off-by: Guoqing Jiang <jiangguoqing@xxxxxxxxxx>
> ---
>  drivers/md/md.c    | 44 +++++++++++++++++++++++++++++++++++++++++++-
>  drivers/md/md.h    |  8 ++++++++
>  drivers/md/raid0.c |  3 +++
>  drivers/md/raid5.c |  9 +++++++++
>  4 files changed, 63 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 7ba00e4c862d..87786f180525 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -2340,7 +2340,8 @@ int md_integrity_register(struct mddev *mddev)
>  			       bdev_get_integrity(reference->bdev));
>  
>  	pr_debug("md: data integrity enabled on %s\n", mdname(mddev));
> -	if (bioset_integrity_create(&mddev->bio_set, BIO_POOL_SIZE)) {
> +	if (bioset_integrity_create(&mddev->bio_set, BIO_POOL_SIZE) ||
> +	    bioset_integrity_create(&mddev->io_acct_set, BIO_POOL_SIZE)) {

Don't we need to create this new bioset only for raid0 and raid5?
Shouldn't those personalities call a helper to create it?
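Something like the following, maybe (completely untested sketch; the helper
name is my suggestion, not anything in the patch) -- raid0 and raid5 would
call it from their ->run methods instead of md_run doing it unconditionally:

```
/* hypothetical helper in md.c, only called by personalities that need it */
int md_init_io_acct_set(struct mddev *mddev)
{
	if (bioset_initialized(&mddev->io_acct_set))
		return 0;
	return bioset_init(&mddev->io_acct_set, BIO_POOL_SIZE,
			   offsetof(struct md_io_acct, bio_clone), 0);
}
```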

> @@ -5864,6 +5866,12 @@ int md_run(struct mddev *mddev)
>  		if (err)
>  			return err;
>  	}
> +	if (!bioset_initialized(&mddev->io_acct_set)) {
> +		err = bioset_init(&mddev->io_acct_set, BIO_POOL_SIZE,
> +				  offsetof(struct md_io_acct, bio_clone), 0);
> +		if (err)
> +			return err;
> +	}

Can someone explain why we have these bioset_initialized checks
here (also for the existing one)?  This just smells like very sloppy
lifetime rules.

> +/* used by personalities (raid0 and raid5) to account io stats */

Instead of mentioning the personalities, this might better explain
something like ".. used by personalities that don't already clone the
bio and thus can't easily add the timestamp to their extended bio
structure".

> +void md_account_bio(struct mddev *mddev, struct bio **bio)
> +{
> +	struct md_io_acct *md_io_acct;
> +	struct bio *clone;
> +
> +	if (!blk_queue_io_stat((*bio)->bi_bdev->bd_disk->queue))
> +		return;
> +
> +	clone = bio_clone_fast(*bio, GFP_NOIO, &mddev->io_acct_set);
> +	md_io_acct = container_of(clone, struct md_io_acct, bio_clone);
> +	md_io_acct->orig_bio = *bio;
> +	md_io_acct->start_time = bio_start_io_acct(*bio);
> +
> +	clone->bi_end_io = md_end_io_acct;
> +	clone->bi_private = md_io_acct;
> +	*bio = clone;

I would find a calling convention that returns the allocated clone
(or the original bio if there is no accounting) more logical.

> +	struct bio_set			io_acct_set; /* for raid0 and raid5 io accounting */

crazy long line.


