On 05/04/17 13:37, Fuxion Cloud wrote:
Hi,
Our Ceph version is 0.80.7. We use it with OpenStack as RBD block storage. The Ceph storage is configured with 3-way replication. I'm getting low IOPS (400) from an fio benchmark doing random read/write. Please advise how to improve it. Thanks.
I'll let others comment on whether 0.80.7 is too old and whether you should
obviously upgrade... I don't think anyone should be using anything
older than Hammer, which is the previous, nearly-EoL LTS release.
Here's the hardware info.
12 x storage nodes
- 2 x CPUs (12 cores)
- 64 GB RAM
- 10 x 4TB SAS 7.2krpm OSD
- 2 x 200GB SSD Journal
- 2 x 200GB SSD OS
5 OSDs per journal SSD sounds like too many.
Which model are the SSDs?
How large are the journals?
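If you're not sure, something like the following (assuming a standard ceph-disk/filestore layout; the device name is just an example) will show how the journals map to partitions and what the SSD models are:
ceph-disk list
ls -l /var/lib/ceph/osd/ceph-*/journal
smartctl -i /dev/sdk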
When you run your fio test, what is the command you run? Which type
of storage is it against? rbd-fuse? krbd? fio --ioengine=rbd?
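For reference, a 4k random read/write run directly against an RBD image with fio's rbd engine would look roughly like this (pool and image name are placeholders, adjust to yours):
fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimage \
    --rw=randrw --bs=4k --iodepth=32 --direct=1 --runtime=60 --time_based \
    --name=randrw-test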
When you run fio, what does iostat show you? Would you say the HDDs
are the bottleneck, or the SSDs?
iostat -xm 1 /dev/sd[a-z]
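Roughly speaking: if the journal SSDs sit near 100% util while the HDDs stay mostly idle, the journals are the limit; if the HDDs show high await and util, the spinners are.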
- 2 x 10Gb (bond - ceph network)
- 2 x 10Gb (bond - openstack network)
What kind of link do you have between racks?
What is the failure domain? rack or host?
What is the size (replication size) of the pool you are testing?
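You can check both with something like (replace <pool> with the pool your RBD images live in):
ceph osd pool get <pool> size
ceph osd crush rule dump
The chooseleaf step in the crush rule will tell you whether the failure domain is host or rack.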
Ceph status:
health HEALTH_OK
osdmap e116285: 120 osds: 120 up, 120 in
pgmap v70119491: 14384 pgs, 5 pools, 5384 GB data, 841 kobjects
16774 GB used, 397 TB / 413 TB avail
14384 active+clean
client io 11456 kB/s rd, 13389 kB/s wr, 420 op/s
Ceph osd tree:
# id weight type name up/down reweight
-1 414 root default
-14 207 rack rack1
-3 34.5 host node1
1 3.45 osd.1 up 1
4 3.45 osd.4 up 1
7 3.45 osd.7 up 1
10 3.45 osd.10 up 1
13 3.45 osd.13 up 1
16 3.45 osd.16 up 1
19 3.45 osd.19 up 1
22 3.45 osd.22 up 1
25 3.45 osd.25 up 1
28 3.45 osd.28 up 1
-4 34.5 host node2
5 3.45 osd.5 up 1
11 3.45 osd.11 up 1
14 3.45 osd.14 up 1
17 3.45 osd.17 up 1
20 3.45 osd.20 up 1
23 3.45 osd.23 up 1
26 3.45 osd.26 up 1
29 3.45 osd.29 up 1
38 3.45 osd.38 up 1
2 3.45 osd.2 up 1
-5 34.5 host node3
31 3.45 osd.31 up 1
48 3.45 osd.48 up 1
57 3.45 osd.57 up 1
66 3.45 osd.66 up 1
75 3.45 osd.75 up 1
84 3.45 osd.84 up 1
93 3.45 osd.93 up 1
102 3.45 osd.102 up 1
111 3.45 osd.111 up 1
39 3.45 osd.39 up 1
-7 34.5 host node4
35 3.45 osd.35 up 1
46 3.45 osd.46 up 1
55 3.45 osd.55 up 1
64 3.45 osd.64 up 1
72 3.45 osd.72 up 1
81 3.45 osd.81 up 1
90 3.45 osd.90 up 1
98 3.45 osd.98 up 1
107 3.45 osd.107 up 1
116 3.45 osd.116 up 1
-10 34.5 host node5
43 3.45 osd.43 up 1
54 3.45 osd.54 up 1
60 3.45 osd.60 up 1
67 3.45 osd.67 up 1
78 3.45 osd.78 up 1
87 3.45 osd.87 up 1
96 3.45 osd.96 up 1
104 3.45 osd.104 up 1
113 3.45 osd.113 up 1
8 3.45 osd.8 up 1
-13 34.5 host node6
32 3.45 osd.32 up 1
47 3.45 osd.47 up 1
56 3.45 osd.56 up 1
65 3.45 osd.65 up 1
74 3.45 osd.74 up 1
83 3.45 osd.83 up 1
92 3.45 osd.92 up 1
110 3.45 osd.110 up 1
119 3.45 osd.119 up 1
101 3.45 osd.101 up 1
-15 207 rack rack2
-2 34.5 host node7
0 3.45 osd.0 up 1
3 3.45 osd.3 up 1
6 3.45 osd.6 up 1
9 3.45 osd.9 up 1
12 3.45 osd.12 up 1
15 3.45 osd.15 up 1
18 3.45 osd.18 up 1
21 3.45 osd.21 up 1
24 3.45 osd.24 up 1
27 3.45 osd.27 up 1
-6 34.5 host node8
30 3.45 osd.30 up 1
40 3.45 osd.40 up 1
49 3.45 osd.49 up 1
58 3.45 osd.58 up 1
68 3.45 osd.68 up 1
77 3.45 osd.77 up 1
86 3.45 osd.86 up 1
95 3.45 osd.95 up 1
105 3.45 osd.105 up 1
114 3.45 osd.114 up 1
-8 34.5 host node9
33 3.45 osd.33 up 1
45 3.45 osd.45 up 1
52 3.45 osd.52 up 1
59 3.45 osd.59 up 1
73 3.45 osd.73 up 1
82 3.45 osd.82 up 1
91 3.45 osd.91 up 1
100 3.45 osd.100 up 1
108 3.45 osd.108 up 1
117 3.45 osd.117 up 1
-9 34.5 host node10
36 3.45 osd.36 up 1
42 3.45 osd.42 up 1
51 3.45 osd.51 up 1
61 3.45 osd.61 up 1
69 3.45 osd.69 up 1
76 3.45 osd.76 up 1
85 3.45 osd.85 up 1
94 3.45 osd.94 up 1
103 3.45 osd.103 up 1
112 3.45 osd.112 up 1
-11 34.5 host node11
50 3.45 osd.50 up 1
63 3.45 osd.63 up 1
71 3.45 osd.71 up 1
79 3.45 osd.79 up 1
89 3.45 osd.89 up 1
106 3.45 osd.106 up 1
115 3.45 osd.115 up 1
34 3.45 osd.34 up 1
120 3.45 osd.120 up 1
121 3.45 osd.121 up 1
-12 34.5 host node12
37 3.45 osd.37 up 1
44 3.45 osd.44 up 1
53 3.45 osd.53 up 1
62 3.45 osd.62 up 1
70 3.45 osd.70 up 1
80 3.45 osd.80 up 1
88 3.45 osd.88 up 1
99 3.45 osd.99 up 1
109 3.45 osd.109 up 1
118 3.45 osd.118 up 1
Thanks,
James
--
--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.maloney@xxxxxxxxxxxxxxxxxxxx
Internet: http://www.brockmann-consult.de
--------------------------------------------
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com