Re: ceph luminous 12.2.4 - 2 servers better than 3 ?

Try filestore instead of bluestore?
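For example (just a sketch, not a tested recipe - device paths are placeholders), one OSD could be redeployed with filestore via ceph-volume to compare:

ceph-volume lvm zap /dev/sdb    # after removing the old OSD from the cluster
ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/sdc1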

 

- Rado

 

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On Behalf Of Steven Vacaroaia
Sent: Thursday, April 19, 2018 8:11 AM
To: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: [ceph-users] ceph luminous 12.2.4 - 2 servers better than 3 ?

 

Hi,

 

Any idea why 2 servers with one OSD each would provide better performance than 3?

 

Servers are identical.

Performance is impacted irrespective of whether I use an SSD for WAL/DB or not.
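For reference, a bluestore OSD with its WAL/DB on a separate SSD is created along these lines with ceph-volume (device paths below are placeholders):

ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1 --block.wal /dev/sdc2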

Basically, I am getting lots of "cur MB/s" readings of zero in rados bench.

 

Network is separate 10 GbE for public and private (cluster) traffic.

I tested it with iperf and I am getting 9.3 Gb/s.
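For reference, the check between two OSD hosts looks something like this (hostnames are placeholders; iperf3 shown):

iperf3 -s                    # on osd02
iperf3 -c osd02 -P 4 -t 30   # on osd01, 4 parallel streams for 30 seconds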

 

I have tried replication factors of 2 and 3 with the same results (much better with 2 servers than with 3).
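For reference, the replication factor is switched per pool along these lines (the test pool here is rbd, as in the rados bench command below):

ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2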

 

I have reinstalled Ceph multiple times.

ceph.conf is very simple - no major customization (see below).

I am out of ideas - any hint will be TRULY appreciated.

 

Steven 

 

 

 

auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public_network = 10.10.30.0/24
cluster_network = 192.168.0.0/24

osd_pool_default_size = 2
osd_pool_default_min_size = 1 # Allow writing 1 copy in a degraded state
osd_crush_chooseleaf_type = 1

[mon]
mon_allow_pool_delete = true
mon_osd_min_down_reporters = 1

[osd]
osd_mkfs_type = xfs
osd_mount_options_xfs = "rw,noatime,nodiratime,attr2,logbufs=8,logbsize=256k,largeio,inode64,swalloc,allocsize=4M"
osd_mkfs_options_xfs = "-f -i size=2048"
bluestore_block_db_size = 32212254720 # 30 GiB
bluestore_block_wal_size = 1073741824 # 1 GiB

 

rados bench -p rbd 120 write --no-cleanup && rados bench -p rbd 120 seq

hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 120 seconds or 0 objects
Object prefix: benchmark_data_osd01_383626
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        57        41   163.991       164    0.197929    0.065543
    2      16        57        41    81.992         0           -    0.065543
    3      16        67        51   67.9936        20   0.0164632    0.249939
    4      16        67        51   50.9951         0           -    0.249939
    5      16        71        55   43.9958         8   0.0171439    0.319973
    6      16       181       165   109.989       440   0.0159057    0.563746
    7      16       182       166   94.8476         4    0.221421    0.561684
    8      16       182       166   82.9917         0           -    0.561684
    9      16       240       224   99.5458       116   0.0232989    0.638292
   10      16       264       248   99.1901        96   0.0222669    0.583336
   11      16       264       248   90.1729         0           -    0.583336
   12      16       285       269   89.6579        42   0.0165706    0.600606
   13      16       285       269   82.7611         0           -    0.600606
   14      16       310       294   83.9918        50   0.0254241    0.756351

 

 

