Re: Unacceptably Poor RAID1 Performance with Many CPU Cores

Yu Kuai <yukuai1@xxxxxxxxxxxxxxx> wrote:
> > diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> > index 4fcfcb350d2b..52f0c24128ff 100644
> > --- a/drivers/md/raid10.c
> > +++ b/drivers/md/raid10.c
> > @@ -905,7 +905,7 @@ static void flush_pending_writes(struct r10conf *conf)
> >   		/* flush any pending bitmap writes to disk
> >   		 * before proceeding w/ I/O */
> >   		md_bitmap_unplug(conf->mddev->bitmap);
> > -		wake_up(&conf->wait_barrier);
> > +		wake_up_barrier(conf);
> >   
> >   		while (bio) { /* submit pending writes */
> >   			struct bio *next = bio->bi_next;
> 
> Thanks for the testing, sorry that I missed one place... Can you try to
> change wake_up() to wake_up_barrier() from raid10_unplug() and test
> again?

OK.  I replaced only the second occurrence of wake_up() in raid10_unplug().
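For reference, the hunk I applied looks roughly like this (context lines approximated from my tree, so offsets may differ):

--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
 	bio = bio_list_get(&plug->pending);
 	md_bitmap_unplug(mddev->bitmap);
-	wake_up(&conf->wait_barrier);
+	wake_up_barrier(conf);
 
 	while (bio) { /* submit pending writes */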

> > Without the patch:
> > READ:  IOPS=2033k BW=8329MB/s
> > WRITE: IOPS= 871k BW=3569MB/s
> > 
> > With the patch:
> > READ:  IOPS=2027K BW=7920MiB/s
> > WRITE: IOPS= 869K BW=3394MiB/s

With the second patch:
READ:  IOPS=3642K BW=13900MiB/s
WRITE: IOPS=1561K BW= 6097MiB/s

That is impressive.  Great job.
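If I'm reading the series right, the win comes from staying off the waitqueue spinlock on the hot write path when nobody is actually waiting on the barrier. My (possibly simplified) understanding of the helper is something like:

static inline void wake_up_barrier(struct r10conf *conf)
{
	/* Only take the waitqueue lock and wake waiters when someone is
	 * actually sleeping on the barrier; skip the spinlock otherwise.
	 * (My sketch of the helper, not the definition from the series.)
	 */
	if (wq_has_sleeper(&conf->wait_barrier))
		wake_up(&conf->wait_barrier);
}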

I'll test it some more.

Thanks,
Ali



