Re: optimize librbd for iops


On 11/12/2012 11:55 PM, Stefan Priebe wrote:
Am 13.11.2012 08:51, schrieb Josh Durgin:
On 11/12/2012 05:50 AM, Stefan Priebe - Profihost AG wrote:
Hello list,

are there any plans to optimize librbd for IOPS? Right now I'm able to
get 50,000 IOPS via iSCSI and 100,000 IOPS using multipathing with
iSCSI.

With librbd I'm stuck at around 18,000 IOPS. This scales with more
hosts but not with more disks in a VM, so it must be limited by the rbd
implementation in kvm / librbd.

It'd be interesting to see which layers are most limiting in this
case - qemu/kvm, librados, or librbd.

How does rados bench with 4k writes and then 4k reads with many
concurrent IOs do?
Right now I'm using qemu-kvm with librbd and fio inside the guest. How
does rados bench work?
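
For comparison, the kind of in-guest fio test being described might look
like the following job file. The device path, queue depth, and run length
are assumptions for illustration, not details from this thread:

```
; hypothetical 4k random-write job against the guest's rbd-backed disk
[rbd-4k-randwrite]
filename=/dev/vdb
ioengine=libaio
direct=1
rw=randwrite
bs=4k
iodepth=32
runtime=60
time_based
```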

rados bench uses librados aio, keeping several operations in flight.
IO size is the same as object size for it.

You can do a 4k write benchmark that doesn't delete the objects it
writes, with 32 IOs in flight for 300 seconds:

rados -p data bench 300 write -b 4096 -t 32 --no-cleanup

Then a read benchmark (only sequential is implemented, but with 4k
objects it's similar to random if you flush the OSDs' page caches before
running it):

rados -p data bench 300 seq -b 4096 -t 32
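
One way to do that cache flush on each OSD host is the standard Linux
drop_caches knob (requires root; the guard below is only there so the
sketch runs cleanly when unprivileged):

```shell
# Flush dirty data and drop the page cache so the read benchmark
# hits the disks rather than cached objects:
sync
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches
else
    echo "re-run as root to drop caches" >&2
fi
```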

You can divide the avg throughput by IO size to get IOPS.
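
A worked example of that conversion (the 70 MB/s figure is made up for
illustration, and it assumes rados bench reports MB as 2^20 bytes):

```shell
# Hypothetical: rados bench reports ~70 MB/s average bandwidth
# with 4096-byte objects.  IOPS = bytes per second / IO size:
echo $(( 70 * 1024 * 1024 / 4096 ))    # prints 17920
```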

Josh
--

