Re: krbd splitting large IOs into smaller IOs

On Wed, Jun 10, 2015 at 2:47 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> Hi,
>
> I am using the kernel RBD client with kernel 4.03 (I have also tried some
> older kernels with the same effect), and IO is being split into smaller
> IOs, which is having a negative impact on performance.
>
> cat /sys/block/sdc/queue/max_hw_sectors_kb
> 4096
>
> cat /sys/block/rbd0/queue/max_sectors_kb
> 4096
>
> Using DD
> dd if=/dev/rbd0 of=/dev/null bs=4M
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s     wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> rbd0              0.00     0.00  201.50    0.00  25792.00      0.00   256.00     1.99   10.15   10.15    0.00   4.96 100.00
>
>
> Using FIO with 4M blocks
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s     wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> rbd0              0.00     0.00  232.00    0.00 118784.00      0.00  1024.00    11.29   48.58   48.58    0.00   4.31 100.00
>
> Any ideas why the IO sizes are limited to 128k (256 sectors) in dd's case
> and 512k in fio's case?
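
For reference, iostat reports avgrq-sz in 512-byte sectors, so the two
runs above work out like this:

# dd run:  256 sectors  * 512 bytes = 128 KiB per request
echo $((256 * 512 / 1024))    # -> 128
# fio run: 1024 sectors * 512 bytes = 512 KiB per request
echo $((1024 * 512 / 1024))   # -> 512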

The 128k vs 512k difference is probably buffered vs direct IO: add
iflag=direct to your dd invocation.
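
For example, something like this (just a sketch against the same device
as above; the count and the fio job parameters are arbitrary):

# O_DIRECT read, bypassing the page cache and its readahead splitting
dd if=/dev/rbd0 of=/dev/null bs=4M count=1024 iflag=direct

# roughly equivalent fio job (assuming libaio is available)
fio --name=seqread --filename=/dev/rbd0 --rw=read --bs=4M --direct=1 \
    --ioengine=libaio --iodepth=4 --numjobs=1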

As for the 512k: I'm pretty sure it's a regression from our switch to
blk-mq.  I tested it around 3.18-3.19 and saw steady 4M IOs.  I hope we
are just missing a knob - I'll take a look.
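
In the meantime, these are the standard block-layer limits worth
eyeballing on the rbd queue (a quick sketch for checking, not a
confirmed culprit):

# limits that can cap request size for rbd0
for f in max_sectors_kb max_hw_sectors_kb max_segments max_segment_size; do
    printf '%-20s ' "$f"; cat /sys/block/rbd0/queue/$f
done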

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com