Re: krbd splitting large IO's into smaller IO's

Hi,

I found something similar a while ago within a VM:
http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2014-November/045034.html
I don't know if the change suggested by Ilya ever got applied.

Cheers, Dan

On Wed, Jun 10, 2015 at 1:47 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> Hi,
>
> Using the kernel RBD client with kernel 4.03 (I have also tried some older
> kernels, with the same effect), IO is being split into smaller IOs, which
> is having a negative impact on performance.
>
> cat /sys/block/sdc/queue/max_hw_sectors_kb
> 4096
>
> cat /sys/block/rbd0/queue/max_sectors_kb
> 4096
>
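[Inline note: max_sectors_kb is not the only queue limit that can cap request size; the segment limits on the same queue can also stop requests from merging further. A quick check, assuming the device is still rbd0 (just a sketch, not a diagnosis):

cat /sys/block/rbd0/queue/max_segments
cat /sys/block/rbd0/queue/max_segment_size
cat /sys/block/rbd0/queue/read_ahead_kb   # readahead size also shapes buffered sequential reads
]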
> Using DD
> dd if=/dev/rbd0 of=/dev/null bs=4M
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> rbd0              0.00     0.00  201.50    0.00  25792.00     0.00   256.00     1.99   10.15   10.15    0.00   4.96 100.00
>
>
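[Inline note: one variant worth trying, not from the original post: reading with O_DIRECT takes the page cache and its readahead window out of the picture, so the request size seen by the device reflects the requested bs rather than the readahead size:

dd if=/dev/rbd0 of=/dev/null bs=4M iflag=direct
]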
> Using FIO with 4M blocks
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> rbd0              0.00     0.00  232.00    0.00 118784.00     0.00  1024.00    11.29   48.58   48.58    0.00   4.31 100.00
>
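[Inline note: the fio command line is not shown above; something along these lines would produce a 4M sequential read against the device (the exact options are an assumption, not the poster's invocation):

fio --name=seqread --filename=/dev/rbd0 --rw=read --bs=4M --direct=1 --ioengine=libaio --iodepth=16
]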
> Any ideas why IO sizes are limited to 128k (256 sectors) in dd's case and
> 512k in fio's case?
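
[Inline note: for the arithmetic behind those figures, iostat reports avgrq-sz in 512-byte sectors, so 256 sectors × 512 B = 128 KiB per request in the dd run, and 1024 sectors × 512 B = 512 KiB per request in the fio run.]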
>
>
>
>