Ceph Random Read Write Performance

Hi,

I have a question about Ceph performance.
I've built a Ceph cluster with 3 OSD hosts; each host's configuration is:
 - CPU: 1 x Intel Xeon E5-2620 v4 2.1GHz
 - Memory: 2 x 16GB RDIMM
 - Disk: 2 x 300GB 15K RPM SAS 12Gbps (RAID 1, for the OS)
         4 x 800GB Intel SSD DC S3610 SATA (non-RAID, one OSD per drive)
 - NIC: 1 x 10Gbps (bonded, carrying both the public and the replication network)

My ceph.conf: https://pastebin.com/r4pJ3P45
We use this cluster as the backend for OpenStack Cinder.

We benchmarked this cluster from 6 VMs using vdbench.
Our vdbench script:  https://pastebin.com/9sxhrjie
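
For cross-checking outside the guests, a roughly equivalent run can be made
directly against an RBD image with fio's rbd engine (a sketch only: the pool
name "volumes" and image name "bench" are placeholders, and fio must be built
with rbd support):

    fio --name=rbd-randwrite --ioengine=rbd --clientname=admin \
        --pool=volumes --rbdname=bench --direct=1 \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
        --runtime=60 --time_based --group_reporting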

After the tests, we got these results:
 - 100% random read: 100,000 IOPS
 - 100% random write: 20,000 IOPS
 - 75% random read / 25% random write: 80,000 IOPS

These results seem low to us: from the drive specifications we calculated that this cluster should deliver about 112,000 IOPS write and 1,000,000 IOPS read.
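
Our calculation, for reference (per-drive figures are the 4K random specs from
the Intel DC S3610 800GB data sheet, ~84,000 read / ~28,000 write IOPS, and we
assume 3x replication; this is a raw-drive ceiling that ignores journal and
CPU overhead):

    read:  12 drives x 84,000 IOPS            = ~1,008,000 IOPS
    write: 12 drives x 28,000 IOPS / 3 copies = ~112,000 IOPS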

We are running Ceph Jewel 10.2.5-1trusty on Ubuntu 14.04 with kernel 4.4.0-31-generic.


Could you help me solve this issue?

Thanks in advance

