Re: [patch 2/3 v3] raid1: read balance chooses idlest disk for SSD

Nice, very nice =)

Maybe to get better than this: select the disk with min(estimated pending time).

The time could be estimated with something like:
(distance * time per distance unit) + (blocks to read/write * time to
read/write one block) + (non-sequential penalty time)

For SSD:
    time per distance unit = 0
    time to read/write one block = must be measured for each device
    non-sequential penalty time = must be measured for each device, but some
tests show that non-sequential reads are close to sequential reads
For HDD:
    time per distance unit and time to read/write one block are proportional
to the disk speed (rpm)
    non-sequential penalty time is proportional to the seek distance and the
head position; many disk specs say that in the worst case it takes close to
10ms to start reading/writing, which is the time for the disk to spin one
revolution and move the head into position
        note that on a rotational disk the time to read/write also changes
with block position (blocks at the center of the platter are slower, blocks
far from the center are faster)
        for SSD it changes with allocation 'problems': on a write, if a block
is trimmed it is very fast, but if the block is in use (dirty) the device must
read, modify and write it back, which is slower... in other words, the read
time is related to the position and the mean disk/SSD read/write times (a
'good' approximation, not an ideal one). This algorithm (without the pending
information) gave me about a 1% mean improvement on kernel 2.6.33 (I must
check, but I think that's right).
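
Just to make the idea concrete, here is a rough sketch of such a per-device
cost model in C. All names and constants here are hypothetical, nothing from
the patch; the per-device numbers would have to be measured:

struct dev_cost_model {
        unsigned long us_per_sector_seek;  /* ~0 for an SSD */
        unsigned long us_per_sector_io;    /* transfer time per sector */
        unsigned long us_nonseq_penalty;   /* ~10000 (10ms) worst case for an HDD */
};

/* Estimated service time, in microseconds, of one request on one device. */
static unsigned long estimate_io_time(const struct dev_cost_model *m,
                                      unsigned long seek_distance,
                                      unsigned long sectors,
                                      int sequential)
{
        unsigned long t = 0;

        t += seek_distance * m->us_per_sector_seek;  /* head movement */
        t += sectors * m->us_per_sector_io;          /* data transfer */
        if (!sequential)
                t += m->us_nonseq_penalty;           /* rotational/seek penalty */
        return t;
}

read_balance() would then pick the disk whose already-queued requests plus
the new request have the smallest total estimated time, instead of the
smallest distance or the smallest nr_pending.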



2012/7/1 Shaohua Li <shli@xxxxxxxxxx>
>
> An SSD has no spindle, so the distance between requests means nothing. The
> original distance-based algorithm can sometimes cause severe performance
> issues for an SSD raid.
>
> Consider two thread groups, one accessing file A and the other accessing
> file B. The first group will access one disk and the second will access the
> other disk, because requests within a group are close together and requests
> between groups are far apart. In this case, read balancing might keep one
> disk very busy while the other stays relatively idle. For SSD, we should try
> our best to distribute requests across as many disks as possible. There is
> no spindle move penalty anyway.
>
> With the patch below, I sometimes see more than 50% throughput improvement,
> depending on the workload.
>
> The only exception is small requests that can be merged into a big request,
> which typically drives higher throughput for SSD too. Such small requests
> are sequential reads. Unlike a hard disk, a sequential read that can't be
> merged (for example direct IO, or a read without readahead) can be ignored
> for SSD; again, there is no spindle move penalty. Readahead dispatches small
> requests, and such requests can be merged.
>
> The last patch helps detect sequential reads well, at least when the number
> of concurrent reads isn't greater than the number of raid disks. In that
> case the distance-based algorithm doesn't work well either.
>
> V2: For a mixed hard disk and SSD raid, don't use the distance-based
> algorithm for random IO either. This makes the algorithm generic for a raid
> with SSD.
>
> Signed-off-by: Shaohua Li <shli@xxxxxxxxxxxx>
> ---
>  drivers/md/raid1.c |   23 +++++++++++++++++++++--
>  1 file changed, 21 insertions(+), 2 deletions(-)
>
> Index: linux/drivers/md/raid1.c
> ===================================================================
> --- linux.orig/drivers/md/raid1.c       2012-06-28 16:56:20.846401902 +0800
> +++ linux/drivers/md/raid1.c    2012-06-29 14:13:23.856781798 +0800
> @@ -486,6 +486,7 @@ static int read_balance(struct r1conf *c
>         int best_disk;
>         int i;
>         sector_t best_dist;
> +       unsigned int min_pending;
>         struct md_rdev *rdev;
>         int choose_first;
>
> @@ -499,6 +500,7 @@ static int read_balance(struct r1conf *c
>         sectors = r1_bio->sectors;
>         best_disk = -1;
>         best_dist = MaxSector;
> +       min_pending = -1;
>         best_good_sectors = 0;
>
>         if (conf->mddev->recovery_cp < MaxSector &&
> @@ -511,6 +513,8 @@ static int read_balance(struct r1conf *c
>                 sector_t dist;
>                 sector_t first_bad;
>                 int bad_sectors;
> +               bool nonrot;
> +               unsigned int pending;
>
>                 int disk = i;
>                 if (disk >= conf->raid_disks)
> @@ -573,17 +577,32 @@ static int read_balance(struct r1conf *c
>                 } else
>                         best_good_sectors = sectors;
>
> +               nonrot = blk_queue_nonrot(bdev_get_queue(rdev->bdev));
> +               pending = atomic_read(&rdev->nr_pending);
>                 dist = abs(this_sector - conf->mirrors[disk].head_position);
>                 if (choose_first
>                     /* Don't change to another disk for sequential reads */
>                     || conf->mirrors[disk].next_seq_sect == this_sector
>                     || dist == 0
>                     /* If device is idle, use it */
> -                   || atomic_read(&rdev->nr_pending) == 0) {
> +                   || pending == 0) {
>                         best_disk = disk;
>                         break;
>                 }
> -               if (dist < best_dist) {
> +
> +               /*
> +                * If all disks are rotational, choose the closest disk. If
> +                * any disk is non-rotational, choose the disk with fewer
> +                * pending requests even if it is rotational, which may or
> +                * may not be optimal for raids with mixed rotational and
> +                * non-rotational disks, depending on the workload.
> +                */
> +               if (nonrot || min_pending != -1) {
> +                       if (min_pending > pending) {
> +                               min_pending = pending;
> +                               best_disk = disk;
> +                       }
> +               } else if (dist < best_dist) {
>                         best_dist = dist;
>                         best_disk = disk;
>                 }
>




--
Roberto Spadim
Spadim Technology / SPAEmpresarial

