Re: Mapped rbd is very slow

Here are results from a new Ceph cluster with the same software and config (ansible), but on the old hardware: 2 replicas, 1 host, 4 OSDs.
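
For reference, the kube pool and bench image used in the fio runs below can be set up roughly like this (a minimal sketch; the PG count and image size are placeholders, not necessarily what was used here):

ceph osd pool create kube 64        # PG count is a placeholder
ceph osd pool set kube size 2       # 2 replicas, as described above
rbd create kube/bench --size 10G    # image size is a placeholder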

RBD

fio -ioengine=rbd -name=test -bs=4k -iodepth=32 -rw=randread -runtime=60  -pool=kube -rbdname=bench
READ: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=7189MiB (7538MB), run=60001-60001msec
fio -ioengine=rbd -name=test -bs=4k -iodepth=32 -rw=randwrite -runtime=60  -pool=kube -rbdname=bench
WRITE: bw=42.0MiB/s (44.1MB/s), 42.0MiB/s-42.0MiB/s (44.1MB/s-44.1MB/s), io=2522MiB (2645MB), run=60004-60004msec

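The mapped-RBD runs below go through the kernel client; the /dev/rbd/kube/bench path comes from mapping the same image, roughly like this (a minimal sketch):

rbd map kube/bench            # prints the device node, e.g. /dev/rbd0
ls -l /dev/rbd/kube/bench     # udev also creates this pool/image symlink
rbd unmap kube/bench          # when done
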
MAPPED RBD
fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -rw=randread -direct=1 -runtime=60 -filename=/dev/rbd/kube/bench
READ: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=10.0GiB (10.7GB), run=55613-55613msec
fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -rw=randwrite -direct=1 -runtime=60 -filename=/dev/rbd/kube/bench
WRITE: bw=46.9MiB/s (49.2MB/s), 46.9MiB/s-46.9MiB/s (49.2MB/s-49.2MB/s), io=2814MiB (2950MB), run=60002-60002msec

Looks much better on the old one.

On the cluster with the new hardware: 2 replicas, 3 hosts, 6 OSDs.

fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -rw=randread -direct=1 -runtime=60 -filename=/dev/rbd/kube/bench
READ: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=1866MiB (1957MB), run=60010-60010msec
fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -rw=randwrite -direct=1 -runtime=60 -filename=/dev/rbd/kube/bench
WRITE: bw=10.5MiB/s (11.0MB/s), 10.5MiB/s-10.5MiB/s (11.0MB/s-11.0MB/s), io=631MiB (662MB), run=60021-60021msec

=> New hardware: 31.1 MiB/s READ / 10.5 MiB/s WRITE
=> Old hardware: 184 MiB/s READ / 46.9 MiB/s WRITE

No discussion? I suppose I will keep the old hardware. What do you think? :D

On Friday, 16 August 2019 at 21:17 +0200, Olivier AUDRY wrote:
RBD
fio -ioengine=rbd -name=test -bs=4k -iodepth=32 -rw=randread -runtime=60  -pool=kube -rbdname=bench
READ: bw=21.8MiB/s (22.8MB/s), 21.8MiB/s-21.8MiB/s (22.8MB/s-22.8MB/s), io=1308MiB (1371MB), run=60011-60011msec

fio -ioengine=rbd -name=test -bs=4k -iodepth=32 -rw=randwrite -runtime=60  -pool=kube -rbdname=bench
WRITE: bw=5968KiB/s (6111kB/s), 5968KiB/s-5968KiB/s (6111kB/s-6111kB/s), io=350MiB (367MB), run=60022-60022msec


mapped rbd:
fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -rw=randread -runtime=60 -filename=/dev/rbd/kube/bench
READ: bw=785KiB/s (804kB/s), 785KiB/s-785KiB/s (804kB/s-804kB/s), io=46.0MiB (48.3MB), run=60008-60008msec

old hardware raw disk:

fio -ioengine=libaio -name=test -bs=4k -iodepth=1 -direct=1 -fsync=1 -rw=randwrite -runtime=60 -filename=/dev/sda4
WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=6401MiB (6712MB), run=60001-60001msec

fio -ioengine=libaio -name=test -bs=4k -iodepth=1 -direct=1 -fsync=1 -rw=randread -runtime=60 -filename=/dev/sda4
READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=2208MiB (2315MB), run=60001-60001msec

new hardware raw disk:

fio -ioengine=libaio -name=test -bs=4k -iodepth=1 -direct=1 -fsync=1 -rw=randwrite -runtime=60 -filename=/dev/nvme1n1p4
WRITE: bw=12.1MiB/s (12.7MB/s), 12.1MiB/s-12.1MiB/s (12.7MB/s-12.7MB/s), io=728MiB (763MB), run=60001-60001msec

fio -ioengine=libaio -name=test -bs=4k -iodepth=1 -direct=1 -fsync=1 -rw=randread -runtime=60 -filename=/dev/nvme1n1p4
READ: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=2134MiB (2237MB), run=60001-60001msec

On Friday, 16 August 2019 at 21:34 +0300, vitalif@xxxxxxxxxx wrote:
And once more you're checking random I/O with 4 MB !!! block size.

Now recheck it with bs=4k.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
