Hi everyone!
I have been working with Ceph for a few weeks and am now benchmarking my small Ceph cluster.
I use Ansible to deploy the Ceph daemons as containers on the nodes.
Ceph version: Luminous
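For reference, the ceph-ansible inventory looks roughly like this (the host names are placeholders and this is only a sketch of my setup, not my actual files):

    # rough ceph-ansible inventory sketch; host names are placeholders
    # group_vars/all.yml also sets containerized_deployment: true
    # and ceph_stable_release: luminous
    [mons]
    ceph-node-1
    ceph-node-2

    [mgrs]
    ceph-node-1
    ceph-node-2

    [osds]
    ceph-node-1
    ceph-node-2

    [clients]
    client-node-1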
The benchmark environment comprises:
- 2 Ceph nodes (1 ceph-mon, 1 ceph-mgr, and 14 ceph-osd daemons per node)
- 1 client node
Benchmark pool:
- Placement Groups: 1024
- Replica count (size): 2
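The pool and test volumes were created with commands along these lines (the pool and volume names here are placeholders, and the volume size is just an example):

    # create a replicated pool with 1024 PGs and size 2
    ceph osd pool create benchpool 1024 1024 replicated
    ceph osd pool set benchpool size 2
    # tag the pool for RBD use (required since Luminous)
    ceph osd pool application enable benchpool rbd
    # pre-create an RBD volume for fio
    rbd create benchpool/bench-vol1 --size 100G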
Each node has two network interfaces:
- Ceph public network: 10 GbE
- Ceph cluster network: 10 GbE
Ceph node configuration:
- CPU: Intel(R) Xeon(R) Gold 5120 CPU @ 2.20GHz
- Physical cores: 28
- Logical CPUs (threads): 56
- Memory: 125 GB
- Ceph OSD data devices: 14 x 1.2 TB HDD (15,000 RPM)
- OS: Ubuntu 16.04
- Kernel: HWE 4.15.0-72-generic
Client node configuration:
- CPU: Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz
- Memory: 125 GB
- OS: Ubuntu 16.04
- Kernel: HWE 4.15.0-72-generic
Monitoring stack:
- ceph-exporter
- node-exporter
- Prometheus
- Grafana
Benchmark method: on the client node, I use fio with the rbd engine to run randread/randwrite/randrw benchmarks against pre-created RBD volumes.
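A typical job file looks roughly like this (the pool/volume names, block size, iodepth, and runtime are just an example; I vary the block size and rw mode per run), and I run it with "fio <jobfile>":

    # example fio job using the rbd ioengine; names and values are placeholders
    [global]
    ioengine=rbd
    clientname=admin
    pool=benchpool
    rbdname=bench-vol1
    rw=randwrite
    bs=4k
    iodepth=32
    numjobs=1
    runtime=300
    time_based=1
    group_reporting=1

    [randwrite-job]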
Regardless of block size, I observe read IOPS while benchmarking randwrite, and the read IOPS are even equal to or larger than the write IOPS. The read IOPS decrease along with the remaining free space of the RBD volumes, and drop to 0 when the free space of the volumes reaches 0.
And another question: I can obtain write IOPS of up to 7000. But as far as I know, the theoretical maximum IOPS of a Ceph cluster = (# OSDs) x (IOPS per OSD's HDD), which in my cluster is 28 x 200 = 5600. 7000 IOPS is larger than that value, which confuses me.
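Written out as a formula:

    \text{IOPS}_{\max} = N_{\text{OSD}} \times \text{IOPS}_{\text{HDD}} = 28 \times 200 = 5600

If this should additionally be divided by the replica count of 2 (I am not sure whether that is the right way to count the write amplification), the ceiling would be only 2800 and the gap would be even larger.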
Can someone help me explain these cases? Please ask me for any additional information you need. Thank you!