On Thu, Nov 8, 2018 at 10:00, Joao Eduardo Luis <joao@xxxxxxx> wrote:
Hello Gesiel,
Welcome to Ceph!
In the future, you may want to address the ceph-users list
(`ceph-users@xxxxxxxxxxxxxx`) for this sort of issue.
Thank you, I will.
On 11/08/2018 11:18 AM, Gesiel Galvão Bernardes wrote:
> Hi everyone,
>
> I am a beginner with Ceph. I increased pg_num on a pool, and after
> the cluster rebalanced I increased pgp_num (a confession: I had not
> read the complete documentation on this operation :-( ). After this
> my cluster broke and everything stopped. The cluster does not
> rebalance, and my impression is that everything is stuck.
>
> Below is my "ceph -s". Can anyone help me?
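For context, the increase you describe is normally a pair of commands
along these lines (the pool name here is a placeholder, and 1024 matches
the pg count your status reports):

    # raise the placement group count for the pool
    ceph osd pool set <pool-name> pg_num 1024
    # the cluster does not start rebalancing across OSDs until
    # pgp_num is raised to match
    ceph osd pool set <pool-name> pgp_num 1024

Raising pgp_num is what actually triggers data movement, which is why
the cluster only started shuffling data at that point.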
You have two osds down. Depending on how your data is mapped, your pgs
may be waiting for those to come back up before they finish being
cleaned up.
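To confirm which OSDs are down and what the stuck pgs are waiting on,
output along these lines is useful:

    ceph osd tree                  # shows each OSD's up/down status in the CRUSH tree
    ceph health detail             # expands HEALTH_WARN into the specific stuck pgs
    ceph pg dump_stuck inactive    # lists the pgs that are stuck inactive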
After removing the down OSDs, the cluster tried to rebalance, but it is "frozen" again, in this status:
  cluster:
    id:     ab5dcb0c-480d-419c-bcb8-013cbcce5c4d
    health: HEALTH_WARN
            12840/988707 objects misplaced (1.299%)
            Reduced data availability: 358 pgs inactive, 325 pgs peering

  services:
    mon: 3 daemons, quorum cmonitor,thanos,cmonitor2
    mgr: thanos(active), standbys: cmonitor
    osd: 17 osds: 17 up, 17 in; 221 remapped pgs

  data:
    pools:   1 pools, 1024 pgs
    objects: 329.6 k objects, 1.3 TiB
    usage:   3.8 TiB used, 7.4 TiB / 11 TiB avail
    pgs:     1.660% pgs unknown
             33.301% pgs not active
             12840/988707 objects misplaced (1.299%)
             666 active+clean
             188 remapped+peering
             137 peering
             17  unknown
             16  activating+remapped
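Should I query one of the stuck pgs directly? For example (the pg id
below is just a placeholder):

    # report the detailed peering state of a single pg
    ceph pg <pgid> query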
Any other ideas?
Gesiel