Re: [PATCH md] Allow raid10 resync to happening in larger chunks.

On Wed, Aug 06, 2008 at 06:56:40PM -0400, Guy Watkins wrote:
> } -----Original Message-----
> } From: linux-raid-owner@xxxxxxxxxxxxxxx [mailto:linux-raid-
> } owner@xxxxxxxxxxxxxxx] On Behalf Of Keld Jørn Simonsen
> } Sent: Wednesday, August 06, 2008 5:02 AM
> } To: NeilBrown
> } Cc: linux-raid@xxxxxxxxxxxxxxx
> } Subject: Re: [PATCH md] Allow raid10 resync to happening in larger chunks.
> } 
> } Neil made this patch based on my patch to speed up raid10 resync.
> } 
> } It is a bit different, although it messes with exactly the two constants
> } that I also changed. One difference is that Neil initially only allocates
> } 1 MiB for buffers while my patch allocates 32 MiB. For the patch to work
> } as intended it is essential that something like 32 MiB be available for
> } buffers. I do not see how that is done in Neil's case, but then I do not
> } know the code so well.  So how does it work, Neil?
> } 
> } Has your patch been tested, Neil?
> } 
> } Anyway, if this is a difference between 32 MiB being available or not, I
> } think it is important that it be available at the start of the process
> } and available for the whole duration of the process. Is it a concern
> } whether 32 MiB of buffers will be available? My take is that if you are running
> } raid, then you probably always have quite some memory.
> 
> Bad assumption!  My Linux firewall/router has 64MB total and works really
> well.  I have 2 disks in a RAID1.

Well, well, then you are not using the raid10 driver anyway.

> Maybe the amount of memory could be based on a percentage of total RAM?
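Just to illustrate that idea (a hypothetical sketch only, not anything in
the patch; the function name is made up, and I assume the usual
si_meminfo() interface from <linux/mm.h>): one could scale the window to
say 1% of RAM and clamp it between 1 MiB and 32 MiB, roughly like this:

	#include <linux/mm.h>
	#include <linux/types.h>

	/* Hypothetical: pick a resync window as ~1% of total RAM,
	 * clamped to the 1 MiB .. 32 MiB range discussed in this thread. */
	static unsigned long resync_window_bytes(void)
	{
		struct sysinfo si;
		u64 total, window;

		si_meminfo(&si);
		total  = (u64)si.totalram * si.mem_unit;  /* bytes of RAM */
		window = total / 100;                     /* 1% of RAM   */

		if (window < 1024 * 1024)                 /* floor: 1 MiB */
			window = 1024 * 1024;
		if (window > 32 * 1024 * 1024)            /* cap: 32 MiB  */
			window = 32 * 1024 * 1024;
		return (unsigned long)window;
	}

On your 64 MB box that would come out at the 1 MiB floor, and on anything
with 3.2 GB or more it would hit the 32 MiB cap.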

Anyway, if it works, then Neil's patch is probably better than mine, as I
think it will also run if 32 MiB is not available.
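
If I understand it right, that is because the resync buffers come from a
mempool: only RESYNC_WINDOW worth (16 x 64 KiB = 1 MiB) is pre-reserved,
and everything beyond that is allocated on demand, so under memory
pressure the resync simply shrinks back towards the reserved minimum
instead of failing. A much simplified sketch of that pattern (not the
actual raid10.c allocator, which builds r10_bio structures with attached
pages; the names here are made up):

	#include <linux/mempool.h>
	#include <linux/slab.h>
	#include <linux/errno.h>

	#define RESYNC_BLOCK_SIZE	(64*1024)
	#define RESYNC_WINDOW		(1024*1024)

	static mempool_t *resync_pool;

	/* Pre-reserve only RESYNC_WINDOW / RESYNC_BLOCK_SIZE = 16 buffers.
	 * mempool_alloc() tries a normal allocation first, so many more
	 * buffers can be in flight when memory is plentiful; when it is
	 * not, it falls back to (and waits for) the reserved elements, so
	 * resync still makes progress with only ~1 MiB. */
	static int resync_pool_init(void)
	{
		resync_pool = mempool_create_kmalloc_pool(
				RESYNC_WINDOW / RESYNC_BLOCK_SIZE,
				RESYNC_BLOCK_SIZE);
		return resync_pool ? 0 : -ENOMEM;
	}

	static void *get_resync_buffer(void)
	{
		/* GFP_NOIO: sleeps for a freed buffer rather than failing */
		return mempool_alloc(resync_pool, GFP_NOIO);
	}

	static void put_resync_buffer(void *buf)
	{
		mempool_free(buf, resync_pool);
	}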

Best regards
keld

> Guy
> 
> } 
> } best regards
> } keld
> } 
> } 
> } On Tue, Aug 05, 2008 at 04:17:34PM +1000, NeilBrown wrote:
> } > The raid10 resync/recovery code currently limits the amount of
> } > in-flight resync IO to 2Meg.  This was copied from raid1 where
> } > it seems quite adequate.  However for raid10, some layouts require
> } > a bit of seeking to perform a resync, and allowing a larger buffer
> } > size means that the seeking can be significantly reduced.
> } >
> } > There is probably no real need to limit the amount of in-flight
> } > IO at all.  Any shortage of memory will naturally reduce the
> } > amount of buffer space available down to a set minimum, and any
> } > concurrent normal IO will quickly cause resync IO to back off.
> } >
> } > The only problem would be that normal IO has to wait for all resync IO
> } > to finish, so a very large amount of resync IO could cause unpleasant
> } > latency when normal IO starts up.
> } >
> } > So: increase RESYNC_DEPTH to allow 32Meg of buffer (if memory is
> } > available) which seems to be a good amount.  Also reduce the amount
> } > of memory reserved as there is no need to keep 2Meg just for resync if
> } > memory is tight.
> } >
> } > Thanks to Keld for the suggestion.
> } >
> } > Cc: Keld Jørn Simonsen <keld@xxxxxxxx>
> } > Signed-off-by: NeilBrown <neilb@xxxxxxx>
> } > ---
> } >  drivers/md/raid10.c |    9 +++++----
> } >  1 files changed, 5 insertions(+), 4 deletions(-)
> } >
> } > diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> } > index d41bebb..e34cd0e 100644
> } > --- a/drivers/md/raid10.c
> } > +++ b/drivers/md/raid10.c
> } > @@ -76,11 +76,13 @@ static void r10bio_pool_free(void *r10_bio, void *data)
> } >  	kfree(r10_bio);
> } >  }
> } >
> } > +/* Maximum size of each resync request */
> } >  #define RESYNC_BLOCK_SIZE (64*1024)
> } > -//#define RESYNC_BLOCK_SIZE PAGE_SIZE
> } > -#define RESYNC_SECTORS (RESYNC_BLOCK_SIZE >> 9)
> } >  #define RESYNC_PAGES ((RESYNC_BLOCK_SIZE + PAGE_SIZE-1) / PAGE_SIZE)
> } > -#define RESYNC_WINDOW (2048*1024)
> } > +/* amount of memory to reserve for resync requests */
> } > +#define RESYNC_WINDOW (1024*1024)
> } > +/* maximum number of concurrent requests, memory permitting */
> } > +#define RESYNC_DEPTH (32*1024*1024/RESYNC_BLOCK_SIZE)
> } >
> } >  /*
> } >   * When performing a resync, we need to read and compare, so
> } > @@ -690,7 +692,6 @@ static int flush_pending_writes(conf_t *conf)
> } >   *    there is no normal IO happeing.  It must arrange to call
> } >   *    lower_barrier when the particular background IO completes.
> } >   */
> } > -#define RESYNC_DEPTH 32
> } >
> } >  static void raise_barrier(conf_t *conf, int force)
> } >  {
> } > --
> } > 1.5.6.3
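
For the record, the new constants work out as below; the old limit was
RESYNC_DEPTH = 32, i.e. 32 x 64 KiB = 2 MiB of in-flight resync IO.
This is just a stand-alone arithmetic check, not kernel code:

	#include <stdio.h>

	#define RESYNC_BLOCK_SIZE (64*1024)
	#define RESYNC_WINDOW (1024*1024)
	#define RESYNC_DEPTH (32*1024*1024/RESYNC_BLOCK_SIZE)

	int main(void)
	{
		/* reserved for resync: 16 x 64 KiB = 1 MiB */
		printf("reserved buffers : %d x 64 KiB = %d KiB\n",
		       RESYNC_WINDOW / RESYNC_BLOCK_SIZE,
		       RESYNC_WINDOW / 1024);
		/* memory permitting: 512 x 64 KiB = 32 MiB in flight */
		printf("max in flight    : %d x 64 KiB = %d MiB\n",
		       RESYNC_DEPTH,
		       RESYNC_DEPTH * RESYNC_BLOCK_SIZE / (1024*1024));
		return 0;
	}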