Re: Mapped rbd is very slow

fio -ioengine=rbd -name=test -bs=4M -iodepth=32 -rw=randwrite -runtime=60  -pool=kube -rbdname=bench
WRITE: bw=89.6MiB/s (93.9MB/s), 89.6MiB/s-89.6MiB/s (93.9MB/s-93.9MB/s), io=5548MiB (5817MB), run=61935-61935msec


fio -ioengine=rbd -name=test -bs=4M -iodepth=32 -rw=randread -runtime=60  -pool=kube -rbdname=bench
READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=8404MiB (8812MB), run=60443-60443msec

This is great. I don't understand why I can't get the same performance from the mapped rbd device.

On Friday, 16 August 2019 at 21:04 +0300, vitalif@xxxxxxxxxx wrote:
- libaio randwrite
- libaio randread
- libaio randwrite on mapped rbd
- libaio randread on mapped rbd
- rbd read
- rbd write

Recheck RBD with RAND READ / RAND WRITE.

You're again comparing RANDOM and NON-RANDOM I/O.

Your SSDs aren't that bad; 3000 single-thread iops isn't the worst possible performance.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
