Hi Steven,
Hans
On Apr 19, 2018 13:11, "Steven Vacaroaia" <stef97@xxxxxxxxx> wrote:
Hi,

Any idea why 2 servers with one OSD each provide better performance than 3?
The servers are identical. Performance is affected irrespective of whether I
use an SSD for WAL/DB or not. Basically, I am getting lots of "cur MB/s"
readings of zero.

The network is separate 10 Gb for public and private; I tested it with iperf
and I am getting 9.3 Gb/s. I have tried replication of 2 and 3 with the same
results (much better for 2 servers than 3), and reinstalled Ceph multiple
times. ceph.conf is very simple, with no major customization (see below).

I am out of ideas - any hint will be TRULY appreciated.

Steven

auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 10.10.30.0/24
cluster_network = 192.168.0.0/24
osd_pool_default_size = 2
osd_pool_default_min_size = 1  # Allow writing 1 copy in a degraded state
osd_crush_chooseleaf_type = 1

[mon]
mon_allow_pool_delete = true
mon_osd_min_down_reporters = 1

[osd]
osd_mkfs_type = xfs
osd_mount_options_xfs = "rw,noatime,nodiratime,attr2,logbufs=8,logbsize=256k,largeio,inode64,swalloc,allocsize=4M"
osd_mkfs_options_xfs = "-f -i size=2048"
bluestore_block_db_size = 32212254720
bluestore_block_wal_size = 1073741824

rados bench -p rbd 120 write --no-cleanup && rados bench -p rbd 120 seq

hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 120 seconds or 0 objects
Object prefix: benchmark_data_osd01_383626
  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
    0       0         0         0         0         0            -           0
    1      16        57        41   163.991       164     0.197929    0.065543
    2      16        57        41    81.992         0            -    0.065543
    3      16        67        51   67.9936        20    0.0164632    0.249939
    4      16        67        51   50.9951         0            -    0.249939
    5      16        71        55   43.9958         8    0.0171439    0.319973
    6      16       181       165   109.989       440    0.0159057    0.563746
    7      16       182       166   94.8476         4     0.221421    0.561684
    8      16       182       166   82.9917         0            -    0.561684
    9      16       240       224   99.5458       116    0.0232989    0.638292
   10      16       264       248   99.1901        96    0.0222669    0.583336
   11      16       264       248   90.1729         0            -    0.583336
   12      16       285       269   89.6579        42    0.0165706    0.600606
   13      16       285       269   82.7611         0            -    0.600606
   14      16       310       294   83.9918        50    0.0254241    0.756351
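For anyone trying to narrow this down, a minimal diagnostic sketch to run while
the bench is showing the cur MB/s zeros; osd.0 and the 192.168.0.x address are
placeholders for your own cluster, not values from the thread:

    ceph -s                              # overall health; look for slow/blocked requests during the run
    ceph osd tree                        # confirm all three OSDs are up/in and mapped to the expected hosts
    ceph osd perf                        # per-OSD commit/apply latency; one slow OSD stalls the whole bench
    ceph daemon osd.0 dump_historic_ops  # recent slow ops (run on the host where that OSD lives)
    iperf3 -c 192.168.0.x                # re-test the cluster (replication) network, not just the public one

If the zeros coincide with high latency on one particular OSD, or go away when
that OSD is marked out, the likely culprit is that disk or its cluster-network
link rather than ceph.conf.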
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com