Hi All,
We have a 5-node cluster with EC 4+1 and have been running BlueStore since Kraken last year. On each node's public interface we see around 960 Mbps, so 960 Mbps × 5 nodes ≈ 4.8 Gbps - this matches the actual client traffic.
The ceph status output, however, shows 1032 MB/s rd, i.e. roughly 8.3 Gbps:
cn6.chn6us1c1.cdn ~# ceph status
  cluster:
    id:     abda22db-3658-4d33-9681-e3ff10690f88
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum cn6,cn7,cn8,cn9,cn10
    mgr: cn6(active), standbys: cn7, cn9, cn10, cn8
    osd: 340 osds: 340 up, 340 in

  data:
    pools:   1 pools, 8192 pgs
    objects: 270M objects, 426 TB
    usage:   581 TB used, 655 TB / 1237 TB avail
    pgs:     8160 active+clean
             32   active+clean+scrubbing

  io:
    client: 1032 MB/s rd, 168 MB/s wr, 1908 op/s rd, 1594 op/s wr
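To make the mismatch concrete, here is the arithmetic as a small Python sketch (the per-node 960 Mbps is what we read off the NIC graphs; whether the "MB/s" in ceph status means decimal MB or binary MiB only changes the read figure slightly, so decimal is assumed here):

    # Sanity check of the bandwidth figures - plain arithmetic, no Ceph API.
    nic_mbps_per_node = 960           # observed on each node's public interface
    nodes = 5
    nic_total_gbps = nic_mbps_per_node * nodes / 1000.0
    print(f"NIC counters: {nic_total_gbps:.1f} Gbps")         # ~4.8 Gbps

    ceph_read_mb_s = 1032             # "client: 1032 MB/s rd" from ceph status
    ceph_read_gbps = ceph_read_mb_s * 8 / 1000.0              # assumes decimal MB
    print(f"ceph status:  {ceph_read_gbps:.2f} Gbps")         # ~8.26 Gbps
    print(f"ratio:        {ceph_read_gbps / nic_total_gbps:.2f}x")  # ~1.7x

So ceph status reports roughly 1.7x the read bandwidth that the network interfaces actually carry.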
For writes we don't see this issue: the write bandwidth reported by ceph status matches the client traffic on the network interfaces.
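For reference, this is roughly how we sample per-node NIC throughput to compare against the "client:" line in ceph status (a minimal sketch; "eno1" is a placeholder for the node's actual public interface):

    # Per-node throughput from the public NIC over a 10 s window.
    import time

    IFACE = "eno1"  # placeholder - substitute the node's public interface

    def rx_tx_bytes(iface):
        # /proc/net/dev: after "iface:", field 0 is rx bytes, field 8 is tx bytes
        with open("/proc/net/dev") as f:
            for line in f:
                name, _, rest = line.partition(":")
                if name.strip() == iface:
                    fields = rest.split()
                    return int(fields[0]), int(fields[8])
        raise ValueError(f"interface {iface} not found")

    rx1, tx1 = rx_tx_bytes(IFACE)
    time.sleep(10)
    rx2, tx2 = rx_tx_bytes(IFACE)
    print(f"rx: {(rx2 - rx1) * 8 / 10 / 1e6:.0f} Mbps")
    print(f"tx: {(tx2 - tx1) * 8 / 10 / 1e6:.0f} Mbps")

Summing the tx figure across all 5 nodes is how we arrive at the ~4.8 Gbps above.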
Is this expected behavior in Luminous with ceph-volume lvm, or a bug?
Or is the read bandwidth calculation in ceph status wrong?
Please provide your feedback.
Thanks,
Muthu