Re: rbd performance issue - can't find bottleneck

Hi,

On 06/18/2015 12:54 PM, Alexandre DERUMIER wrote:
> Hi,
>
> for read benchmark
>
> with fio, what is the iodepth ?
>
> my fio 4k randread results with
>
> iodepth=1  : bw=6795.1KB/s,  iops=1698
> iodepth=2  : bw=14608KB/s,   iops=3652
> iodepth=4  : bw=32686KB/s,   iops=8171
> iodepth=8  : bw=76175KB/s,   iops=19043
> iodepth=16 : bw=173651KB/s,  iops=43412
> iodepth=32 : bw=336719KB/s,  iops=84179

I'm trying multiple variants - from one job with iodepth=1 up to 16 jobs with iodepth=32, similar to what you do.
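For reference, one step of that sweep looks roughly like this on my side (/dev/rbd0 is a placeholder for the mapped rbd image; I vary --iodepth and --numjobs per run):

  fio --name=randread --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
      --rw=randread --bs=4k --iodepth=32 --numjobs=1 \
      --runtime=60 --time_based --group_reporting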

I'm less worried about the bandwidth now, since I found out about the Intel SSD 530 problem (the slow dsync writes).
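In case it helps anyone else: the problem shows up directly on the drive with a sync-write test (sdX is a placeholder, and this overwrites whatever is on it):

  dd if=/dev/zero of=/dev/sdX bs=4k count=10000 oflag=direct,dsync

A journal-suitable SSD sustains this fine; a drive with the 530's behaviour drops to a small fraction of its normal write speed.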

I'm worried about iops - when I test locally I get the expected ~40k iops on an SSD, but when I do it from a client I get 2-4k iops.
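Doing the math on that: with one outstanding IO, iops can't exceed 1 / round-trip latency, so e.g. ~0.5 ms of network + OSD latency per op caps a single stream at ~2000 iops, no matter how fast the SSD is locally (the 0.5 ms is just an assumed figure for illustration).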

> (This should be similar with rados bench -t (threads) option).
>
> This is normal because of network latencies + ceph latencies.
> Doing more parallelism increases iops.


Yes, I'm expecting that, but for now I can't get close to what I should see using an SSD as an OSD in Ceph.
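For the rados bench comparison you mention above, I assume the usual form is something like this (pool name and runtime are placeholders; the read test needs a prior write run kept with --no-cleanup):

  rados bench -p testpool 60 write -t 16 -b 4096 --no-cleanup
  rados bench -p testpool 60 seq -t 16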

> (doing a bench with "dd" = iodepth=1)


I'm only using dd to test sequential read/write speed.
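That is, something along these lines (the device is a placeholder; the write test destroys data on it):

  dd if=/dev/rbd0 of=/dev/null bs=4M count=1024 iflag=direct   # seq read
  dd if=/dev/zero of=/dev/rbd0 bs=4M count=1024 oflag=direct   # seq write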

> These results are with 1 client/rbd volume.


> now with more fio clients (numjobs=X)
>
> I can reach up to 300k iops with 8-10 clients.


I would love to see these results in my setup :)
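If I read that right, the equivalent on my side would be roughly the single-image command from above with more jobs, e.g.:

  fio --name=randread --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
      --rw=randread --bs=4k --iodepth=32 --numjobs=8 \
      --runtime=60 --time_based --group_reporting

though I suspect you mean separate clients/rbd volumes rather than jobs against one image - I'll try both.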

J




