Re: krbd splitting large IO's into smaller IO's

Hi Dan,

I found your post last night; it does indeed look like the default has been
set to 4096 for the kernel RBD client in the 4.0 kernel. I also checked a
machine running 3.16, and that had 512 as the default.
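
For reference, the sysfs knobs involved can be checked and, in principle,
raised at runtime. A minimal sketch, using rbd0 as an example device (the
soft limit can't be raised above max_hw_sectors_kb):

# Current per-request soft limit, in KB
cat /sys/block/rbd0/queue/max_sectors_kb

# Ceiling reported by the driver/block layer
cat /sys/block/rbd0/queue/max_hw_sectors_kb

# Tentatively raise the soft limit up to that ceiling (as root)
echo 4096 > /sys/block/rbd0/queue/max_sectors_kb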

However, in my case there seems to be something else affecting the maximum
block size.

This originally stemmed from me trying to use flashcache as a small
writeback cache in front of RBDs to improve sequential write latency. My
workload submits all IO as 64KB, so sequential write speed tops out at
around 15MB/s. The idea is that a small flashcache block device should be
able to take these small IOs and then spit them out as large 4MB blocks to
Ceph, dramatically increasing throughput. However, with this limitation I'm
not seeing the gains I expect.
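
For context, the setup I'm describing is roughly the following, using
flashcache's documented writeback ("-p back") mode; the SSD, cache and
mount names here are just placeholders, not my actual configuration:

# Create a writeback cache named rbd_cache: SSD in front, RBD behind
flashcache_create -p back rbd_cache /dev/sdb /dev/rbd0

# The combined device appears under device-mapper; the 64KB writes land on
# the SSD first and should be flushed out to the RBD in larger chunks
mkfs -t xfs /dev/mapper/rbd_cache
mount /dev/mapper/rbd_cache /mnt/cached-rbd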

Nick

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Dan van der Ster
> Sent: 10 June 2015 13:24
> To: Nick Fisk
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  krbd splitting large IO's into smaller IO's
> 
> Hi,
> 
> I found something similar a while ago within a VM.
> http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2014-
> November/045034.html
> I don't know if the change suggested by Ilya ever got applied.
> 
> Cheers, Dan
> 
> On Wed, Jun 10, 2015 at 1:47 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> > Hi,
> >
> > Using the kernel RBD client with kernel 4.0.3 (I have also tried some
> > older kernels with the same effect), IO is being split into smaller
> > IOs, which is having a negative impact on performance.
> >
> > cat /sys/block/sdc/queue/max_hw_sectors_kb
> > 4096
> >
> > cat /sys/block/rbd0/queue/max_sectors_kb
> > 4096
> >
> > Using DD
> > dd if=/dev/rbd0 of=/dev/null bs=4M
> >
> > Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> > rbd0              0.00     0.00  201.50    0.00 25792.00     0.00   256.00     1.99   10.15   10.15    0.00   4.96 100.00
> >
> >
> > Using FIO with 4M blocks
> > Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> > rbd0              0.00     0.00  232.00    0.00 118784.00     0.00  1024.00    11.29   48.58   48.58    0.00   4.31 100.00
> >
> > Any ideas why IO sizes are limited to 128k (avgrq-sz of 256 sectors) in
> > dd's case and 512k in fio's case?
> >
> >
> >
> >




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



