As a matter of interest, while the test was running the network throughput peaked at 3.98 Gbit/s:
ens2f0  /  traffic statistics

                           rx         |         tx
--------------------------------------+------------------
  bytes                  2.59 GiB     |       4.63 GiB
--------------------------------------+------------------
          max           2.29 Gbit/s   |    3.98 Gbit/s
      average         905.58 Mbit/s   |    1.62 Gbit/s
          min            203 kbit/s   |     186 kbit/s
--------------------------------------+------------------
  packets                 1980792     |        3354372
--------------------------------------+------------------
          max            207630 p/s   |     342902 p/s
      average             82533 p/s   |     139765 p/s
          min                51 p/s   |         56 p/s
--------------------------------------+------------------
  time                 24 seconds
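For anyone who wants to watch the NIC the same way during a rados bench run: the summary above looks like vnstat's live mode, but the kernel's own counters give the same information without installing anything. A rough sketch, assuming the interface name ens2f0 from the output above:

# sample the byte counters twice, 5 seconds apart, and print the average rates
IFACE=ens2f0; INTERVAL=5
RX1=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
TX1=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
sleep $INTERVAL
RX2=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
TX2=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
echo "rx $(( (RX2 - RX1) * 8 / INTERVAL / 1000000 )) Mbit/s | tx $(( (TX2 - TX1) * 8 / INTERVAL / 1000000 )) Mbit/s"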
Some more stats:
root@virt2:~# rados bench -p Data 10 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       402       386   1543.69      1544  0.00182802   0.0395421
    2      16       773       757   1513.71      1484  0.00243911   0.0409455
Total time run:       2.340037
Total reads made:     877
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   1499.12
Average IOPS:         374
Stddev IOPS:          10
Max IOPS:             386
Min IOPS:             371
Average Latency(s):   0.0419036
Max latency(s):       0.176739
Min latency(s):       0.00161271
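A quick sanity check on those numbers (my arithmetic, not part of the bench output): rados bench reads whole 4 MiB objects, so bandwidth is roughly average IOPS times the object size. The seq run only lasted about 2.3 seconds, presumably because it ran out of benchmark objects to read back (seq reads what an earlier write --no-cleanup run left behind).

# 374 IOPS x 4194304-byte objects, expressed in MiB/s
echo $(( 374 * 4194304 / 1024 / 1024 ))    # prints 1496, in line with the reported 1499.12 MB/s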
root@virt2:~# rados bench -p Data 10 rand
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       376       360   1439.71      1440   0.0356502   0.0409024
    2      16       752       736   1471.74      1504   0.0163304   0.0419063
    3      16      1134      1118   1490.43      1528    0.059643   0.0417043
    4      16      1515      1499   1498.78      1524   0.0502131   0.0416087
    5      15      1880      1865   1491.79      1464    0.017407   0.0414158
    6      16      2254      2238   1491.79      1492   0.0657474   0.0420471
    7      15      2509      2494   1424.95      1024  0.00182097   0.0440063
    8      15      2873      2858   1428.81      1456   0.0302541   0.0439319
    9      15      3243      3228   1434.47      1480    0.108037   0.0438106
   10      16      3616      3600   1439.81      1488   0.0295953   0.0436184
Total time run:       10.058519
Total reads made:     3616
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   1437.99
Average IOPS:         359
Stddev IOPS:          37
Max IOPS:             382
Min IOPS:             256
Average Latency(s):   0.0438002
Max latency(s):       0.664223
Min latency(s):       0.00156885
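For comparison, the write run quoted below only managed about 328 MB/s, so reads are roughly 4-5x faster than writes on this cluster. One housekeeping note (not from the original mail): the objects created by rados bench write --no-cleanup stay in the pool until they are removed, e.g.:

root@virt2:~# rados -p Data cleanup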
On Mon, Nov 20, 2017 at 12:38 PM, Rudi Ahlers <rudiahlers@xxxxxxxxx> wrote:
Hi,

Can someone please help me, how do I improve performance on our Ceph cluster?

The hardware in use is as follows:

3x SuperMicro servers, each with the following configuration:
Dual 12-core Xeon 2.2GHz
128GB RAM
2x 400GB Intel DC SSD drives
4x 8TB Seagate 7200rpm 6Gbps SATA HDDs
1x SuperMicro DOM for Proxmox / Debian OS
4-port 10GbE NIC
Cisco 10GbE switch.

root@virt2:~# rados bench -p Data 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_virt2_39099
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        85        69   275.979       276    0.185576    0.204146
    2      16       171       155   309.966       344   0.0625409    0.193558
    3      16       243       227   302.633       288   0.0547129     0.19835
    4      16       330       314   313.965       348   0.0959492    0.199825
    5      16       413       397   317.565       332    0.124908    0.196191
    6      16       494       478   318.633       324      0.1556    0.197014
    7      15       591       576   329.109       392    0.136305    0.192192
    8      16       670       654   326.965       312   0.0703808    0.190643
    9      16       757       741   329.297       348    0.165211    0.192183
   10      16       828       812   324.764       284   0.0935803    0.194041
Total time run:         10.120215
Total writes made:      829
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     327.661
Stddev Bandwidth:       35.8664
Max bandwidth (MB/sec): 392
Min bandwidth (MB/sec): 276
Average IOPS:           81
Stddev IOPS:            8
Max IOPS:               98
Min IOPS:               69
Average Latency(s):     0.195191
Stddev Latency(s):      0.0830062
Max latency(s):         0.481448
Min latency(s):         0.0414858

root@virt2:~# hdparm -I /dev/sda

root@virt2:~# ceph osd tree
ID CLASS WEIGHT   TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       72.78290 root default
-3       29.11316     host virt1
 1   hdd  7.27829         osd.1      up  1.00000 1.00000
 2   hdd  7.27829         osd.2      up  1.00000 1.00000
 3   hdd  7.27829         osd.3      up  1.00000 1.00000
 4   hdd  7.27829         osd.4      up  1.00000 1.00000
-5       21.83487     host virt2
 5   hdd  7.27829         osd.5      up  1.00000 1.00000
 6   hdd  7.27829         osd.6      up  1.00000 1.00000
 7   hdd  7.27829         osd.7      up  1.00000 1.00000
-7       21.83487     host virt3
 8   hdd  7.27829         osd.8      up  1.00000 1.00000
 9   hdd  7.27829         osd.9      up  1.00000 1.00000
10   hdd  7.27829         osd.10     up  1.00000 1.00000
 0              0 osd.0            down        0 1.00000

root@virt2:~# ceph -s
  cluster:
    id:     278a2e9c-0578-428f-bd5b-3bb348923c27
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum virt1,virt2,virt3
    mgr: virt1(active)
    osd: 11 osds: 10 up, 10 in

  data:
    pools:   1 pools, 512 pgs
    objects: 6084 objects, 24105 MB
    usage:   92822 MB used, 74438 GB / 74529 GB avail
    pgs:     512 active+clean

root@virt2:~# ceph -w
  cluster:
    id:     278a2e9c-0578-428f-bd5b-3bb348923c27
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum virt1,virt2,virt3
    mgr: virt1(active)
    osd: 11 osds: 10 up, 10 in

  data:
    pools:   1 pools, 512 pgs
    objects: 6084 objects, 24105 MB
    usage:   92822 MB used, 74438 GB / 74529 GB avail
    pgs:     512 active+clean

2017-11-20 12:32:08.199450 mon.virt1 [INF] mon.1 10.10.10.82:6789/0

The SSD drives are used as journal drives:

root@virt3:~# ceph-disk list | grep /dev/sde | grep osd
 /dev/sdb1 ceph data, active, cluster ceph, osd.8, block /dev/sdb2, block.db /dev/sde1
root@virt3:~# ceph-disk list | grep /dev/sdf | grep osd
 /dev/sdc1 ceph data, active, cluster ceph, osd.9, block /dev/sdc2, block.db /dev/sdf1
 /dev/sdd1 ceph data, active, cluster ceph, osd.10, block /dev/sdd2, block.db /dev/sdf2

I see now /dev/sda doesn't have a journal, though it should have. Not sure why. This is the command I used to create it:

pveceph createosd /dev/sda -bluestore 1 -journal_dev /dev/sde
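Regarding the missing journal on /dev/sda at the end of the quoted message: a quick way to confirm whether that OSD really was created without a block.db is the same ceph-disk check already used for /dev/sde and /dev/sdf, pointed at /dev/sda instead. A sketch (it assumes osd.0, the one shown down in the tree above, is the OSD that was created on /dev/sda):

# on the host where "pveceph createosd /dev/sda ..." was run
ceph-disk list | grep /dev/sda
# if the /dev/sda1 line shows "block /dev/sda2" but no "block.db", the OSD was
# created without a separate DB partition; the usual fix is to destroy and
# recreate that OSD with the SSD passed as the DB device again (the exact
# pveceph/ceph commands depend on the installed version)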