On Mon, Jul 10, 2017 at 12:57 AM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
I need a little help with fixing some errors I am having.
After upgrading from Kraken I'm getting incorrect values reported for
placement groups etc. At first I thought it was because I had changed
the public cluster IP address range and modified the monmap directly,
but even after deleting and re-adding a monitor this ceph daemon dump
is still incorrect.
ceph daemon mon.a perf dump cluster
{
    "cluster": {
        "num_mon": 3,
        "num_mon_quorum": 3,
        "num_osd": 6,
        "num_osd_up": 6,
        "num_osd_in": 6,
        "osd_epoch": 3842,
        "osd_bytes": 0,
        "osd_bytes_used": 0,
        "osd_bytes_avail": 0,
        "num_pool": 0,
        "num_pg": 0,
        "num_pg_active_clean": 0,
        "num_pg_active": 0,
        "num_pg_peering": 0,
        "num_object": 0,
        "num_object_degraded": 0,
        "num_object_misplaced": 0,
        "num_object_unfound": 0,
        "num_bytes": 0,
        "num_mds_up": 1,
        "num_mds_in": 1,
        "num_mds_failed": 0,
        "mds_epoch": 816
    }
}
Huh, I didn't know that existed.
So, yep, most of those values aren't updated any more. From a grep, you can still trust:
num_mon
num_mon_quorum
num_osd
num_osd_up
num_osd_in
osd_epoch
num_mds_up
num_mds_in
num_mds_failed
mds_epoch
We might be able to keep updating the others when we get reports from the manager, but it'd be simpler to just rip them out — I don't think the admin socket is really the right place to get cluster summary data like this. Sage, any thoughts?
-Greg
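If you still want to read this counter set in the meantime, here is a minimal sketch that keeps only the fields listed above. It assumes it runs on the mon host with access to the admin socket; the helper name and the hard-coded mon id "a" are just for illustration.

#!/usr/bin/env python
# Minimal sketch: read the mon's "cluster" perf counters over the admin
# socket (the same "ceph daemon mon.a perf dump cluster" call shown above)
# and keep only the counters that are still maintained per the list above.
# Assumes this runs on the mon host; "mon.a" is just an example id.
import json
import subprocess

# Counters that can still be trusted after the mgr took over PG stats.
TRUSTED = {
    "num_mon", "num_mon_quorum",
    "num_osd", "num_osd_up", "num_osd_in", "osd_epoch",
    "num_mds_up", "num_mds_in", "num_mds_failed", "mds_epoch",
}

def trusted_cluster_counters(mon_id="a"):
    out = subprocess.check_output(
        ["ceph", "daemon", "mon.%s" % mon_id, "perf", "dump", "cluster"])
    cluster = json.loads(out)["cluster"]
    return {k: v for k, v in cluster.items() if k in TRUSTED}

if __name__ == "__main__":
    print(json.dumps(trusted_cluster_counters(), indent=4))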
2017-07-10 09:51:54.219167 7f5cb7338700 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
  cluster:
    id:     0f1701f5-453a-4a3b-928d-f652a2bbbcb0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c
    mgr: c(active), standbys: a, b
    mds: 1/1/1 up {0=c=up:active}, 1 up:standby
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   4 pools, 328 pgs
    objects: 5224k objects, 889 GB
    usage:   2474 GB used, 28264 GB / 30739 GB avail
    pgs:     327 active+clean
             1   active+clean+scrubbing+deep
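Since the status output above does show the right pool/PG/object totals, a sketch like the following pulls them from the status JSON instead of the admin socket. The exact key names under "pgmap" vary a bit between releases, so treat them as assumptions and check `ceph status --format json` on your version first.

#!/usr/bin/env python
# Sketch: fetch the cluster summary (pools, PGs, objects, usage) from
# "ceph status --format json" rather than the mon admin socket, since the
# mgr owns these stats from Luminous onward. The key names under "pgmap"
# are assumptions and may differ slightly between releases.
import json
import subprocess

def cluster_summary():
    out = subprocess.check_output(["ceph", "status", "--format", "json"])
    pgmap = json.loads(out).get("pgmap", {})
    return {
        "num_pools":   pgmap.get("num_pools"),
        "num_pgs":     pgmap.get("num_pgs"),
        "num_objects": pgmap.get("num_objects"),
        "bytes_used":  pgmap.get("bytes_used"),
        "bytes_avail": pgmap.get("bytes_avail"),
        "bytes_total": pgmap.get("bytes_total"),
    }

if __name__ == "__main__":
    print(json.dumps(cluster_summary(), indent=4))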
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com