root@virt2:~# iperf -c 10.10.10.81
------------------------------------------------------------
Client connecting to 10.10.10.81, TCP port 5001
TCP window size: 1.78 MByte (default)
------------------------------------------------------------
[ 3] local 10.10.10.82 port 57132 connected with 10.10.10.81 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 10.5 GBytes 9.02 Gbits/sec
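To cover the MTU question below as well, something along these lines would show it. This is only a sketch: it assumes ens2f0 is the 10 GbE interface named further down in the thread, and -P 4 simply adds parallel TCP streams to the single-stream test above:

ip link show ens2f0          # the current MTU is printed in the "mtu ..." field
iperf -c 10.10.10.81 -P 4    # raw bandwidth with four parallel TCP streams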
On Mon, Nov 20, 2017 at 1:22 PM, Sébastien VIGNERON <sebastien.vigneron@xxxxxxxxx> wrote:
Hi,

MTU size? Did you run an iperf test to see the raw bandwidth?

Cordialement / Best regards,
Sébastien VIGNERON
CRIANN,
Ingénieur / Engineer
Technopôle du Madrillet
745, avenue de l'Université
76800 Saint-Etienne du Rouvray - France
tél. +33 2 32 91 42 91
fax. +33 2 32 91 42 92
http://www.criann.fr
mailto:sebastien.vigneron@criann.fr
support: support@xxxxxxxxx

On 20 Nov 2017, at 11:58, Rudi Ahlers <rudiahlers@xxxxxxxxx> wrote:

As a matter of interest, when I ran the test, the network throughput reached 3.98 Gbit/s:

ens2f0  /  traffic statistics

                            rx         |       tx
  -------------------------------------+------------------
  bytes                     2.59 GiB   |      4.63 GiB
  -------------------------------------+------------------
  max                    2.29 Gbit/s   |   3.98 Gbit/s
  average              905.58 Mbit/s   |   1.62 Gbit/s
  min                     203 kbit/s   |    186 kbit/s
  -------------------------------------+------------------
  packets                    1980792   |       3354372
  -------------------------------------+------------------
  max                     207630 p/s   |    342902 p/s
  average                  82533 p/s   |    139765 p/s
  min                         51 p/s   |        56 p/s
  -------------------------------------+------------------
  time                    24 seconds

Some more stats:

root@virt2:~# rados bench -p Data 10 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       402       386   1543.69      1544  0.00182802   0.0395421
    2      16       773       757   1513.71      1484  0.00243911   0.0409455
Total time run:       2.340037
Total reads made:     877
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   1499.12
Average IOPS:         374
Stddev IOPS:          10
Max IOPS:             386
Min IOPS:             371
Average Latency(s):   0.0419036
Max latency(s):       0.176739
Min latency(s):       0.00161271

root@virt2:~# rados bench -p Data 10 rand
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       376       360   1439.71      1440   0.0356502   0.0409024
    2      16       752       736   1471.74      1504   0.0163304   0.0419063
    3      16      1134      1118   1490.43      1528    0.059643   0.0417043
    4      16      1515      1499   1498.78      1524   0.0502131   0.0416087
    5      15      1880      1865   1491.79      1464    0.017407   0.0414158
    6      16      2254      2238   1491.79      1492   0.0657474   0.0420471
    7      15      2509      2494   1424.95      1024  0.00182097   0.0440063
    8      15      2873      2858   1428.81      1456   0.0302541   0.0439319
    9      15      3243      3228   1434.47      1480    0.108037   0.0438106
   10      16      3616      3600   1439.81      1488   0.0295953   0.0436184
Total time run:       10.058519
Total reads made:     3616
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   1437.99
Average IOPS:         359
Stddev IOPS:          37
Max IOPS:             382
Min IOPS:             256
Average Latency(s):   0.0438002
Max latency(s):       0.664223
Min latency(s):       0.00156885

On Mon, Nov 20, 2017 at 12:38 PM, Rudi Ahlers <rudiahlers@xxxxxxxxx> wrote:

Hi,

Can someone please help me, how do I improve performance on our Ceph cluster?

The hardware in use is as follows:

3x SuperMicro servers with the following configuration:
12-core dual XEON 2.2 GHz
128 GB RAM
2x 400 GB Intel DC SSD drives
4x 8 TB Seagate 7200 rpm 6 Gbps SATA HDDs
1x SuperMicro DOM for the Proxmox / Debian OS
4-port 10 GbE NIC
Cisco 10 GbE switch

root@virt2:~# rados bench -p Data 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_virt2_39099
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        85        69   275.979       276    0.185576    0.204146
    2      16       171       155   309.966       344   0.0625409    0.193558
    3      16       243       227   302.633       288   0.0547129     0.19835
    4      16       330       314   313.965       348   0.0959492    0.199825
    5      16       413       397   317.565       332    0.124908    0.196191
    6      16       494       478   318.633       324      0.1556    0.197014
    7      15       591       576   329.109       392    0.136305    0.192192
    8      16       670       654   326.965       312   0.0703808    0.190643
    9      16       757       741   329.297       348    0.165211    0.192183
   10      16       828       812   324.764       284   0.0935803    0.194041
Total time run:         10.120215
Total writes made:      829
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     327.661
Stddev Bandwidth:       35.8664
Max bandwidth (MB/sec): 392
Min bandwidth (MB/sec): 276
Average IOPS:           81
Stddev IOPS:            8
Max IOPS:               98
Min IOPS:               69
Average Latency(s):     0.195191
Stddev Latency(s):      0.0830062
Max latency(s):         0.481448
Min latency(s):         0.0414858
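For context on the benchmark commands: the seq and rand runs can only read back objects that an earlier write run left behind, which is why the write benchmark above is invoked with --no-cleanup. A minimal sketch of the full cycle (the pool name Data comes from this thread; the 60-second duration and -t 16 are only illustrative, 16 being the default queue depth):

rados bench -p Data 60 write -t 16 --no-cleanup   # seed the pool with 4 MB benchmark objects and keep them
rados bench -p Data 60 seq -t 16                  # sequential read-back of those objects
rados bench -p Data 60 rand -t 16                 # random read-back
rados -p Data cleanup                             # remove the benchmark_data_* objects afterwards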
root@virt2:~# hdparm -I /dev/sda

root@virt2:~# ceph osd tree
ID CLASS WEIGHT   TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       72.78290 root default
-3       29.11316     host virt1
 1   hdd  7.27829         osd.1      up  1.00000 1.00000
 2   hdd  7.27829         osd.2      up  1.00000 1.00000
 3   hdd  7.27829         osd.3      up  1.00000 1.00000
 4   hdd  7.27829         osd.4      up  1.00000 1.00000
-5       21.83487     host virt2
 5   hdd  7.27829         osd.5      up  1.00000 1.00000
 6   hdd  7.27829         osd.6      up  1.00000 1.00000
 7   hdd  7.27829         osd.7      up  1.00000 1.00000
-7       21.83487     host virt3
 8   hdd  7.27829         osd.8      up  1.00000 1.00000
 9   hdd  7.27829         osd.9      up  1.00000 1.00000
10   hdd  7.27829         osd.10     up  1.00000 1.00000
 0        0             osd.0      down        0 1.00000

root@virt2:~# ceph -s
  cluster:
    id:     278a2e9c-0578-428f-bd5b-3bb348923c27
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum virt1,virt2,virt3
    mgr: virt1(active)
    osd: 11 osds: 10 up, 10 in

  data:
    pools:   1 pools, 512 pgs
    objects: 6084 objects, 24105 MB
    usage:   92822 MB used, 74438 GB / 74529 GB avail
    pgs:     512 active+clean

root@virt2:~# ceph -w
  cluster:
    id:     278a2e9c-0578-428f-bd5b-3bb348923c27
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum virt1,virt2,virt3
    mgr: virt1(active)
    osd: 11 osds: 10 up, 10 in

  data:
    pools:   1 pools, 512 pgs
    objects: 6084 objects, 24105 MB
    usage:   92822 MB used, 74438 GB / 74529 GB avail
    pgs:     512 active+clean

2017-11-20 12:32:08.199450 mon.virt1 [INF] mon.1 10.10.10.82:6789/0

The SSD drives are used as journal drives:

root@virt3:~# ceph-disk list | grep /dev/sde | grep osd
/dev/sdb1 ceph data, active, cluster ceph, osd.8, block /dev/sdb2, block.db /dev/sde1
root@virt3:~# ceph-disk list | grep /dev/sdf | grep osd
/dev/sdc1 ceph data, active, cluster ceph, osd.9, block /dev/sdc2, block.db /dev/sdf1
/dev/sdd1 ceph data, active, cluster ceph, osd.10, block /dev/sdd2, block.db /dev/sdf2

I see now /dev/sda doesn't have a journal, though it should have. Not sure why.
This is the command I used to create it:

pveceph createosd /dev/sda -bluestore 1 -journal_dev /dev/sde
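On the missing journal for /dev/sda: a quick way to see whether an OSD actually got a separate block.db is sketched below. It is only a sketch: it assumes the affected OSD is osd.0 (the one shown down in ceph osd tree) and uses the Luminous-era ceph-disk tooling from this thread; adjust the device and OSD id to the host in question.

ceph-disk list /dev/sda /dev/sde         # show the data / block / block.db roles of both devices
ls -l /var/lib/ceph/osd/ceph-0/block.db  # present as a symlink only when a separate DB device was
                                         # set up; if it is missing, RocksDB lives on the data disk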
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com