Is this result to be expected from CephFS when compared to a native SSD speed test?
| Setup | Test | Latency | IOPS | Bandwidth |
|---|---|---|---|---|
| CephFS, ssd, rep. 3 | 4k random read | 2.78 | 1781 | 7297 kB/s |
| | 4k random write | 1.42 | 700 | 2871 kB/s |
| | 4k sequential read | 0.29 | 3314 | 13.6 MB/s |
| | 4k sequential write | 0.04 | 889 | 3.64 MB/s |
| | 1024k random read | 4.3 | 231 | 243 MB/s |
| | 1024k random write | 0.08 | 132 | 139 MB/s |
| | 1024k sequential read | 4.23 | 235 | 247 MB/s |
| | 1024k sequential write | 6.99 | 142 | 150 MB/s |
| CephFS, ssd, rep. 1 | 4k random read | 0.54 | 1809 | 7412 kB/s |
| | 4k random write | 0.8 | 1238 | 5071 kB/s |
| | 4k sequential read | 0.29 | 3325 | 13.6 MB/s |
| | 4k sequential write | 0.56 | 1761 | 7.21 MB/s |
| | 1024k random read | 4.27 | 233 | 245 MB/s |
| | 1024k random write | 4.34 | 229 | 241 MB/s |
| | 1024k sequential read | 4.21 | 236 | 248 MB/s |
| | 1024k sequential write | 4.34 | 229 | 241 MB/s |
| Samsung MZK7KM480 480GB | 4k random read | 0.09 | 10.2k | 41600 kB/s |
| | 4k random write | 0.05 | 17.9k | 73200 kB/s |
| | 4k sequential read | 0.05 | 18k | 77.6 MB/s |
| | 4k sequential write | 0.05 | 18.3k | 75.1 MB/s |
| | 1024k random read | 2.06 | 482 | 506 MB/s |
| | 1024k random write | 2.16 | 460 | 483 MB/s |
| | 1024k sequential read | 1.98 | 502 | 527 MB/s |
| | 1024k sequential write | 2.13 | 466 | 489 MB/s |
(4 nodes, CentOS 7, Luminous)
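
For reference, a minimal fio invocation that would approximate the 4k random-read row above. The post does not include the actual job file, so the mount path, queue depth, file size, and runtime here are assumptions, not the parameters behind the table:

```
# Hypothetical fio run approximating the 4k random-read test above.
# /mnt/cephfs/fio-test, iodepth, size, and runtime are assumed values;
# adjust them to match your own setup.
fio --name=4k-randread \
    --directory=/mnt/cephfs/fio-test \
    --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k \
    --iodepth=1 --numjobs=1 \
    --size=1G --runtime=60 --time_based \
    --group_reporting
```

Swapping --rw for randwrite, read, or write, and --bs=4k for --bs=1024k, covers the other seven test cases in the table.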