Here it is:
cluster ac7bc476-3a02-453d-8e5c-606ab6f022ca
 health HEALTH_WARN
        4 pgs incomplete
        4 pgs stuck inactive
        4 pgs stuck unclean
        1 requests are blocked > 32 sec
 monmap e8: 3 mons at {0=10.1.0.12:6789/0,1=10.1.0.14:6789/0,2=10.1.0.17:6789/0}
        election epoch 840, quorum 0,1,2 0,1,2
 osdmap e2405: 3 osds: 3 up, 3 in
  pgmap v5904430: 288 pgs, 4 pools, 391 GB data, 100 kobjects
        1090 GB used, 4481 GB / 5571 GB avail
             284 active+clean
               4 incomplete
  client io 4008 B/s rd, 446 kB/s wr, 23 op/s
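In case it helps to dig further, the stuck PGs can usually be identified and inspected with something along these lines (just a sketch; the PG id below is a placeholder):

    # show which PGs are incomplete and where the blocked request is
    ceph health detail
    # list the stuck PGs together with the OSDs they map to
    ceph pg dump_stuck inactive
    # ask one of the incomplete PGs for its full peering history
    ceph pg <pgid> query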
2016-03-02 9:31 GMT+01:00 Shinobu Kinjo <skinjo@xxxxxxxxxx>:
Is "ceph -s" still showing you same output?
> cluster ac7bc476-3a02-453d-8e5c-606ab6f022ca
> health HEALTH_WARN
> 4 pgs incomplete
> 4 pgs stuck inactive
> 4 pgs stuck unclean
> monmap e8: 3 mons at
> {0=10.1.0.12:6789/0,1=10.1.0.14:6789/0,2=10.1.0.17:6789/0}
> election epoch 832, quorum 0,1,2 0,1,2
> osdmap e2400: 3 osds: 3 up, 3 in
> pgmap v5883297: 288 pgs, 4 pools, 391 GB data, 100 kobjects
> 1090 GB used, 4481 GB / 5571 GB avail
> 284 active+clean
> 4 incomplete
Cheers,
S
----- Original Message -----
From: "Mario Giammarco" <mgiammarco@xxxxxxxxx>
To: "Lionel Bouton" <lionel-subscription@xxxxxxxxxxx>
Cc: "Shinobu Kinjo" <skinjo@xxxxxxxxxx>, ceph-users@xxxxxxxxxxxxxx
Sent: Wednesday, March 2, 2016 4:27:15 PM
Subject: Re: Help: pool not responding
I tried setting min_size=1, but unfortunately nothing has changed.
Thanks for the idea.
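For reference, the change was made with the usual pool settings commands, roughly like this (the pool name is a placeholder for the affected pool):

    # check the current replication settings on the pool
    ceph osd pool get <pool> size
    ceph osd pool get <pool> min_size
    # temporarily lower min_size so the incomplete PGs can peer
    ceph osd pool set <pool> min_size 1
    # (set it back to its original value once the PGs recover)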
2016-02-29 22:56 GMT+01:00 Lionel Bouton <lionel-subscription@xxxxxxxxxxx>:
> On 29/02/2016 at 22:50, Shinobu Kinjo wrote:
>
> the fact that they are optimized for benchmarks and certainly not
> Ceph OSD usage patterns (with or without internal journal).
>
> Are you assuming that SSHD is causing the issue?
> If you could elaborate on this more, it would be helpful.
>
>
> Probably not (unless they turn out to be extremely unreliable under Ceph
> OSD usage patterns, which would be surprising to me).
>
> For incomplete PGs the documentation seems good enough for what should be
> done:
> http://docs.ceph.com/docs/master/rados/operations/pg-states/
>
> The relevant text:
>
> *Incomplete* Ceph detects that a placement group is missing information
> about writes that may have occurred, or does not have any healthy copies.
> If you see this state, try to start any failed OSDs that may contain the
> needed information or temporarily adjust min_size to allow recovery.
>
> We don't have the full history, but the most probable cause of these
> incomplete PGs is that min_size is set to 2 or 3 and at some point the 4
> incomplete PGs didn't have as many replicas as the min_size value. So if
> setting min_size to 2 isn't enough, setting it to 1 should unfreeze them.
>
> Lionel
>