Hi,

When I reduce the CRUSH weight of two OSDs to zero in a cluster with 575 OSDs, with all flags set to prevent rebalancing, there is an insane spike in client IO and bandwidth for a few seconds; then, once the flags are removed, there are too many slow requests every few seconds. Does anyone know why this happens? Is it a bug? We are using Ceph community edition 12.2.11 (Luminous) across the cluster, with some end clients still on Hammer.
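Roughly the commands involved (a sketch; assuming the two OSDs were osd.356 and osd.364, the ones implicated in the slow requests in the status output below):

    # set flags so the cluster does not mark OSDs out or start moving data
    ceph osd set nodown
    ceph osd set noout
    ceph osd set nobackfill
    ceph osd set norebalance
    ceph osd set norecover
    ceph osd set noscrub
    ceph osd set nodeep-scrub

    # take the two OSDs' CRUSH weight to zero
    ceph osd crush reweight osd.356 0
    ceph osd crush reweight osd.364 0

    # later, remove the data-movement flags so backfill/recovery can proceed
    ceph osd unset nobackfill
    ceph osd unset norebalance
    ceph osd unset norecover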
Ceph Output:
============
  cluster:
    id:     h1579737-a2n9-49cd-c6fc-8da952488120
    health: HEALTH_WARN
            nodown,noout,nobackfill,norebalance,norecover,noscrub,nodeep-scrub flag(s) set
            2043035/104435943 objects misplaced (1.956%)
            Reduced data availability: 53 pgs inactive, 46 pgs peering
            140 slow requests are blocked > 32 sec. Implicated osds 356,364

  services:
    mon: 3 daemons, quorum por1d300,por1d301,por1d302
    mgr: por1d300(active), standbys: por1d301, por1d302
    osd: 575 osds: 574 up, 574 in; 1069 remapped pgs
         flags nodown,noout,nobackfill,norebalance,norecover,noscrub,nodeep-scrub
    rgw: 1 daemon active

  data:
    pools:   11 pools, 21888 pgs
    objects: 34.81M objects, 158TiB
    usage:   475TiB used, 568TiB / 1.02PiB avail
    pgs:     1.754% pgs not active
             2043035/104435943 objects misplaced (1.956%)
             20798 active+clean
             604   active+remapped+backfill_wait
             234   activating+remapped
             101   active+remapped+backfilling
             82    remapped+peering
             52    peering
             16    activating
             1     active+recovering

  io:
    client:   5.85TiB/s rd, 4.96TiB/s wr, 150.50Mop/s rd, 123.73Mop/s wr
    recovery: 673GiB/s, 7.32kkeys/s, 148.97kobjects/s
Ceph Features Output:
===================
{
    "mon": {
        "group": {
            "features": "0x3ffddff8eea",
            "release": "luminous",
            "num": 3
        }
    },
    "osd": {
        "group": {
            "features": "0x3ffddff8eea",
            "release": "luminous",
            "num": 574
        }
    },
    "client": {
        "group": {
            "features": "0x106b84a8",
            "release": "hammer",
            "num": 2
        },
        "group": {
            "features": "0x81dff8eea",
            "release": "hammer",
            "num": 1181
        },
        "group": {
            "features": "0x3ffddff8eea",
            "release": "luminous",
            "num": 2376
        },
        "group": {
            "features": "0x3ffddff8eea",
            "release": "luminous",
            "num": 2789
        }
    }
}
Thanks,
Pardhiv