Re: [PATCH 1/2] raid5-cache: use a bio_set

On Thu, Dec 03 2015, Christoph Hellwig wrote:

> This allows us to make guaranteed forward progress.
>
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> ---
>  drivers/md/raid5-cache.c | 16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
> index 668e973..ef59564 100644
> --- a/drivers/md/raid5-cache.c
> +++ b/drivers/md/raid5-cache.c
> @@ -34,6 +34,12 @@
>  #define RECLAIM_MAX_FREE_SPACE (10 * 1024 * 1024 * 2) /* sector */
>  #define RECLAIM_MAX_FREE_SPACE_SHIFT (2)
>  
> +/*
> + * We only need 2 bios per I/O unit to make progress, but ensure we
> + * have a few more available to not get too tight.
> + */
> +#define R5L_POOL_SIZE	1024
> +

I'm really suspicious of big pool sizes.
The memory allocated to the pool is almost never used - only when no
other memory is available - so large pools are largely wasted.

As you say, we need 2 bios per unit, and unit submission is serialized
(by ->io_mutex) so '2' really should be enough.  For the very brief
periods when there is no other memory, there will only be one or two
units in flight at once, but as each one gets us closer to freeing real
memory, that shouldn't last long.

I can easily justify '4' as "double buffering" is a well understood
technique, but 1024 just seems like gratuitous waste.

If you have performance numbers that tell me I'm wrong I'll stand
corrected, but without evidence I much prefer a smaller number.

Otherwise I really like the change.

Thanks,
NeilBrown


>  struct r5l_log {
>  	struct md_rdev *rdev;
>  
> @@ -70,6 +76,7 @@ struct r5l_log {
>  	struct bio flush_bio;
>  
>  	struct kmem_cache *io_kc;
> +	struct bio_set *bs;
>  
>  	struct md_thread *reclaim_thread;
>  	unsigned long reclaim_target;	/* number of space that need to be
> @@ -248,7 +255,7 @@ static void r5l_submit_current_io(struct r5l_log *log)
>  
>  static struct bio *r5l_bio_alloc(struct r5l_log *log)
>  {
> -	struct bio *bio = bio_kmalloc(GFP_NOIO | __GFP_NOFAIL, BIO_MAX_PAGES);
> +	struct bio *bio = bio_alloc_bioset(GFP_NOIO, BIO_MAX_PAGES, log->bs);
>  
>  	bio->bi_rw = WRITE;
>  	bio->bi_bdev = log->rdev->bdev;
> @@ -1153,6 +1160,10 @@ int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev)
>  	if (!log->io_kc)
>  		goto io_kc;
>  
> +	log->bs = bioset_create(R5L_POOL_SIZE, 0);
> +	if (!log->bs)
> +		goto io_bs;
> +
>  	log->reclaim_thread = md_register_thread(r5l_reclaim_thread,
>  						 log->rdev->mddev, "reclaim");
>  	if (!log->reclaim_thread)
> @@ -1170,6 +1181,8 @@ int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev)
>  error:
>  	md_unregister_thread(&log->reclaim_thread);
>  reclaim_thread:
> +	bioset_free(log->bs);
> +io_bs:
>  	kmem_cache_destroy(log->io_kc);
>  io_kc:
>  	kfree(log);
> @@ -1179,6 +1192,7 @@ io_kc:
>  void r5l_exit_log(struct r5l_log *log)
>  {
>  	md_unregister_thread(&log->reclaim_thread);
> +	bioset_free(log->bs);
>  	kmem_cache_destroy(log->io_kc);
>  	kfree(log);
>  }
> -- 
> 1.9.1
