Hello Gesiel,

Welcome to Ceph! In the future, you may want to address the ceph-users list (`ceph-users@xxxxxxxxxxxxxx`) for this sort of issue.

On 11/08/2018 11:18 AM, Gesiel Galvão Bernardes wrote:
> Hi everyone,
>
> I am a beginner with Ceph. I increased pg_num on a pool, and after the
> cluster rebalanced I increased pgp_num (a confession: I had not read the
> complete documentation about this operation :-( ). After this my cluster
> broke and everything stopped. The cluster is not rebalancing, and my
> impression is that it is all stopped.
>
> Below is my "ceph -s". Can anyone help me?

You have two OSDs down. Depending on how your data is mapped, your PGs may be waiting for those to come back up before they finish being cleaned up.

-Joao

> +++++++
>   cluster:
>     id:     ab5dcb0c-480d-419c-bcb8-013cbcce5c4d
>     health: HEALTH_WARN
>             14402/995493 objects misplaced (1.447%)
>             Reduced data availability: 348 pgs inactive, 313 pgs peering
>
>   services:
>     mon: 3 daemons, quorum cmonitor,thanos,cmonitor2
>     mgr: thanos(active), standbys: cmonitor
>     osd: 19 osds: 17 up, 17 in; 221 remapped pgs
>
>   data:
>     pools:   1 pools, 1024 pgs
>     objects: 331.8 k objects, 1.3 TiB
>     usage:   3.8 TiB used, 7.4 TiB / 11 TiB avail
>     pgs:     1.660% pgs unknown
>              32.324% pgs not active
>              14402/995493 objects misplaced (1.447%)
>              676 active+clean
>              186 remapped+peering
>              127 peering
>              18  activating+remapped
>              17  unknown
>
> Regards,
> Gesiel
>
> _______________________________________________
> Ceph-community mailing list
> Ceph-community@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-community-ceph.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
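[Aside on PG sizing: the common guideline in the Ceph docs is to target roughly 100 PGs per OSD, divided by the pool's replica count, rounded up to a power of two, and to grow pg_num/pgp_num in small steps rather than all at once. A minimal sketch of that sizing rule, assuming 3x replication (the replica count is not stated in this thread, and the function name is made up for illustration):]

```python
def target_pg_count(num_osds, pgs_per_osd=100, pool_size=3):
    """Rough PG target per the common Ceph guideline:
    (OSDs * ~100) / replica count, rounded up to a power of two."""
    raw = num_osds * pgs_per_osd / pool_size
    power = 1
    while power < raw:
        power *= 2
    return power

# With the 19 OSDs from the "ceph -s" above and assumed 3x replication:
# (19 * 100) / 3 ≈ 633, next power of two is 1024 -- which happens to
# match the 1024 PGs shown in the status output.
print(target_pg_count(19))  # → 1024
```

[This is only the rough sizing heuristic, not a recovery procedure; the immediate issue in this thread is the two down OSDs blocking peering.]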