Re: Unacceptably Poor RAID1 Performance with Many CPU Cores

Hi,

Yu Kuai <yukuai3@xxxxxxxxxx> wrote:
> > 
> > On md0 (40GB):
> > READ:  IOPS=1563K BW=6109MiB/s
> > WRITE: IOPS= 670K BW=2745MiB/s
> > 
> > On md3 (14TB):
> > READ:  IOPS=1177K BW=4599MiB/s
> > WRITE: IOPS= 505K BW=1972MiB/s
> > 
> > On md3 but disabling mdadm bitmap (mdadm --grow --bitmap=none /dev/md3):
> > READ:  IOPS=1351K BW=5278MiB/s
> > WRITE: IOPS= 579K BW=2261MiB/s
> 
> Currently, if the bitmap is enabled, a bitmap-level spinlock is grabbed
> for each write, and unfortunately improving performance here would
> require a huge refactor.

OK.
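
For completeness, the bitmap comparison above was just the mdadm --grow
toggle, and I intend to re-add the internal bitmap once benchmarking is
done, since running without it loses the fast resync after an unclean
shutdown.  Roughly (the member device name below is only a placeholder):

    mdadm --examine-bitmap /dev/nvme0n1      # inspect the bitmap via a member
    mdadm --grow --bitmap=none /dev/md3      # drop the bitmap (tested above)
    mdadm --grow --bitmap=internal /dev/md3  # re-add it after benchmarking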

> > +   95.25%     0.00%  fio      [unknown]               [k] 0xffffffffffffffff
> > +   95.00%     0.00%  fio      fio                     [.] 0x000055e073fcd117
> > +   93.68%     0.13%  fio      [kernel.kallsyms]       [k] entry_SYSCALL_64_after_hwframe
> > +   93.54%     0.03%  fio      [kernel.kallsyms]       [k] do_syscall_64
> > +   92.38%     0.03%  fio      libc.so.6               [.] syscall
> > +   92.18%     0.00%  fio      fio                     [.] 0x000055e073fcaceb
> > +   92.18%     0.08%  fio      fio                     [.] td_io_queue
> > +   92.04%     0.02%  fio      fio                     [.] td_io_commit
> > +   91.76%     0.00%  fio      fio                     [.] 0x000055e073fefe5e
> > -   91.76%     0.05%  fio      libaio.so.1.0.2         [.] io_submit
> >     - 91.71% io_submit
> >        - 91.69% syscall
> >           - 91.58% entry_SYSCALL_64_after_hwframe
> >              - 91.55% do_syscall_64
> >                 - 91.06% __x64_sys_io_submit
> >                    - 90.93% io_submit_one
> >                       - 48.85% aio_write
> >                          - 48.77% ext4_file_write_iter
> >                             - 39.86% iomap_dio_rw
> >                                - 39.85% __iomap_dio_rw
> >                                   - 22.55% blk_finish_plug
> >                                      - 22.55% __blk_flush_plug
> >                                         - 21.67% raid10_unplug
> >                                            - 16.54% submit_bio_noacct_nocheck
> >                                               - 16.44% blk_mq_submit_bio
> >                                                  - 16.17% __rq_qos_throttle
> >                                                     - 16.01% wbt_wait
> 
> You can disable wbt (writeback throttling) to avoid this overhead.

Very good.  I will give that a try.  Thanks for your time.
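
If I understand the knob correctly, that means writing 0 to wbt_lat_usec
on the relevant request queues.  Since the trace shows wbt_wait being hit
when raid10 submits to the members, I assume it has to be set on each
member device rather than on md3 itself (the device name below is only a
placeholder for one of my members), please correct me if I have that wrong:

    # current latency target in usec; 0 means wbt is disabled
    cat /sys/block/nvme0n1/queue/wbt_lat_usec
    # disable wbt for this device
    echo 0 > /sys/block/nvme0n1/queue/wbt_lat_usec
    # writing -1 later restores the default target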

Thanks,
Ali



