Re: Mapped rbd is very slow

Write and read with 2 hosts, 4 OSDs:

mkfs.ext4 /dev/rbd/kube/bench
mount /dev/rbd/kube/bench /mnt/
dd if=/dev/zero of=test bs=8192k count=1000 oflag=direct
8388608000 bytes (8.4 GB, 7.8 GiB) copied, 117.541 s, 71.4 MB/s
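As a quick arithmetic check, the quoted 71.4 MB/s follows directly from the byte count and elapsed time that dd reported:

```shell
# Recompute dd's throughput from its own output:
# 8388608000 bytes / 117.541 s, printed in MB/s.
awk 'BEGIN { printf "%.1f MB/s\n", 8388608000 / 117.541 / 1e6 }'
# -> 71.4 MB/s
```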

fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -rw=randwrite \
    -direct=1 -runtime=60 -filename=/dev/rbd/kube/bench
WRITE: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s),
io=2718MiB (2850MB), run=60003-60003msec

fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -rw=randread \
    -direct=1 -runtime=60 -filename=/dev/rbd/kube/bench
READ: bw=187MiB/s (197MB/s), 187MiB/s-187MiB/s (197MB/s-197MB/s),
io=10.0GiB (10.7GB), run=54636-54636msec
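One caveat worth noting: the randwrite run above targets /dev/rbd/kube/bench directly, which overwrites the ext4 filesystem that was just created and mounted. A variant that exercises the same image through a file on the mount instead would look like this (the -size value and the /mnt/fio-test path are assumptions, not from the original commands):

```shell
# Hypothetical safer variant: benchmark a file on the mounted
# filesystem rather than the raw device, so ext4 survives.
# -size=1G and the file path are assumed values; adjust to taste.
fio -ioengine=libaio -name=test -bs=4k -iodepth=32 -rw=randwrite \
    -direct=1 -runtime=60 -size=1G -filename=/mnt/fio-test
```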

pgbench before : 10 transactions per second
pgbench after : 355 transactions per second
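The pgbench invocation wasn't shown; a typical run that produces a TPS figure like the ones above looks like this (the database name, scale factor, client count, and duration are all assumptions):

```shell
# Hypothetical pgbench run -- the original post does not say which
# options were used. Initialize a scale-100 dataset, then run
# 10 clients for 60 seconds; pgbench reports transactions per second.
pgbench -i -s 100 bench
pgbench -c 10 -T 60 bench
```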

So yes, it's better. The SSDs are Intel SSDSC2BB48 0370.



On Saturday, 17 August 2019 at 01:55 +0300, vitalif@xxxxxxxxxx wrote:
> > on a new ceph cluster with the same software and config (ansible)
> > on the old hardware. 2 replica, 1 host, 4 osd.
> > 
> > => New hardware : 32.6MB/s READ / 10.5MiB/s WRITE
> > => Old hardware : 184MiB/s READ / 46.9MiB/s WRITE
> > 
> > No discussion ? I suppose I will keep the old hardware. What do you
> > think ? :D
> 
> In fact, I don't really believe in 184 MB/s random reads with Ceph
> with 4 OSDs; it's a very cool result if it's true.
> 
> Does the "new cluster on the old hardware" consist of only 1 host?
> Did you test reads before you actually wrote anything into the image,
> so it was empty and reads were fast because of that?
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx