Re: Unacceptably Poor RAID1 Performance with Many CPU Cores

On Fri, Jun 16, 2023 at 1:38 AM Ali Gholami Rudi <aligrudi@xxxxxxxxx> wrote:
>
>
> Ali Gholami Rudi <aligrudi@xxxxxxxxx> wrote:
> > Xiao Ni <xni@xxxxxxxxxx> wrote:
> > > Because it can be reproduced easily in your environment, can you try
> > > with the latest upstream kernel? If the problem doesn't exist with the
> > > latest upstream kernel, you can use git bisect to find which patch
> > > fixes this problem.
> >
> > I just tried the upstream.  I get almost the same result with 1G ramdisks.
> >
> > Without RAID (writing to /dev/ram0)
> > READ:  IOPS=15.8M BW=60.3GiB/s
> > WRITE: IOPS= 6.8M BW=27.7GiB/s
> >
> > RAID1 (writing to /dev/md/test)
> > READ:  IOPS=518K BW=2028MiB/s
> > WRITE: IOPS=222K BW= 912MiB/s

Hi Ali

I can reproduce this with upstream kernel too.

RAID1
READ: bw=3699MiB/s (3879MB/s)
WRITE: bw=1586MiB/s (1663MB/s)

ram disk:
READ: bw=5720MiB/s (5997MB/s)
WRITE: bw=2451MiB/s (2570MB/s)

There is a performance problem, but not as severe as in your results;
the gap you see is much larger. I'm not sure of the reason. Any thoughts?
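For anyone trying to reproduce this, a minimal sketch of the test setup follows. The exact device names, ramdisk sizes, and fio parameters are assumptions on my part, not the precise commands used in this thread, and the whole thing needs root:

```shell
# Sketch: mirror two 1 GiB ram disks and hammer them with fio (libaio).
# Parameters here are illustrative guesses, not Ali's exact job file.
modprobe brd rd_nr=2 rd_size=1048576           # two 1 GiB /dev/ram* devices
mdadm --create /dev/md/test --level=1 --raid-devices=2 /dev/ram0 /dev/ram1
fio --name=test --filename=/dev/md/test --direct=1 --ioengine=libaio \
    --rw=randrw --bs=4k --iodepth=64 --numjobs="$(nproc)" \
    --runtime=30 --time_based --group_reporting
```

Running the same fio job against /dev/ram0 directly gives the no-RAID baseline for comparison.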


>
> And this is perf's output:

I'm not familiar with perf. What command did you run, so that I can see
the same output?
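For reference, a call-graph profile of this shape is typically produced along these lines (a guess at the invocation, not necessarily Ali's exact command):

```shell
# Sample all CPUs with call graphs while the fio job is running,
# then browse the expandable call tree (the "+"/"-" entries below).
perf record -g -a -- sleep 30
perf report -g
```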

Regards
Xiao
>
> +   98.73%     0.01%  fio      [kernel.kallsyms]       [k] entry_SYSCALL_64_after_hwframe
> +   98.63%     0.01%  fio      [kernel.kallsyms]       [k] do_syscall_64
> +   97.28%     0.01%  fio      [kernel.kallsyms]       [k] __x64_sys_io_submit
> -   97.09%     0.01%  fio      [kernel.kallsyms]       [k] io_submit_one
>    - 97.08% io_submit_one
>       - 53.58% aio_write
>          - 53.42% blkdev_write_iter
>             - 35.28% blk_finish_plug
>                - flush_plug_callbacks
>                   - 35.27% raid1_unplug
>                      - flush_bio_list
>                         - 17.88% submit_bio_noacct_nocheck
>                            - 17.88% __submit_bio
>                               - 17.61% raid1_end_write_request
>                                  - 17.47% raid_end_bio_io
>                                     - 17.41% __wake_up_common_lock
>                                        - 17.38% _raw_spin_lock_irqsave
>                                             native_queued_spin_lock_slowpath
>                         - 17.35% __wake_up_common_lock
>                            - 17.31% _raw_spin_lock_irqsave
>                                 native_queued_spin_lock_slowpath
>             + 18.07% __generic_file_write_iter
>       - 43.00% aio_read
>          - 42.64% blkdev_read_iter
>             - 42.37% __blkdev_direct_IO_async
>                - 41.40% submit_bio_noacct_nocheck
>                   - 41.34% __submit_bio
>                      - 40.68% raid1_end_read_request
>                         - 40.55% raid_end_bio_io
>                            - 40.35% __wake_up_common_lock
>                               - 40.28% _raw_spin_lock_irqsave
>                                    native_queued_spin_lock_slowpath
> +   95.19%     0.32%  fio      fio                     [.] thread_main
> +   95.08%     0.00%  fio      [unknown]               [.] 0xffffffffffffffff
> +   95.03%     0.00%  fio      fio                     [.] run_threads
> +   94.77%     0.00%  fio      fio                     [.] do_io (inlined)
> +   94.65%     0.16%  fio      fio                     [.] td_io_queue
> +   94.65%     0.11%  fio      libc-2.31.so            [.] syscall
> +   94.54%     0.07%  fio      fio                     [.] fio_libaio_commit
> +   94.53%     0.05%  fio      fio                     [.] td_io_commit
> +   94.50%     0.00%  fio      fio                     [.] io_u_submit (inlined)
> +   94.47%     0.04%  fio      libaio.so.1.0.1         [.] io_submit
> +   92.48%     0.02%  fio      [kernel.kallsyms]       [k] _raw_spin_lock_irqsave
> +   92.48%     0.00%  fio      [kernel.kallsyms]       [k] __wake_up_common_lock
> +   92.46%    92.32%  fio      [kernel.kallsyms]       [k] native_queued_spin_lock_slowpath
> +   76.85%     0.03%  fio      [kernel.kallsyms]       [k] submit_bio_noacct_nocheck
> +   76.76%     0.00%  fio      [kernel.kallsyms]       [k] __submit_bio
> +   60.25%     0.06%  fio      [kernel.kallsyms]       [k] __blkdev_direct_IO_async
> +   58.12%     0.11%  fio      [kernel.kallsyms]       [k] raid_end_bio_io
> ..
>
> Thanks,
> Ali
>




