health HEALTH_WARN too few pgs per osd (16 < min 20)

On Wednesday, May 7, 2014 at 20:28, *sm1Ly wrote:
> 
> [sm1ly at salt1 ceph]$ sudo ceph -s
>     cluster 0b2c9c20-985a-4a39-af8e-ef2325234744
>      health HEALTH_WARN 19 pgs degraded; 192 pgs stuck unclean; recovery 21/42 objects degraded (50.000%); too few pgs per osd (16 < min 20)
> 

You might need to adjust the default number of PGs per pool and recreate your pools.
http://ceph.com/docs/master/rados/operations/placement-groups/
http://ceph.com/docs/master/rados/operations/pools/#createpool
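A rough sketch of what that looks like, assuming the three pools in your pgmap are the old defaults (data, metadata, rbd) and replica size 2; substitute your real pool names and sizes. A common rule of thumb is (number of OSDs * 100) / replica count, rounded to a power of two and split across your pools. For 12 OSDs that suggests something on the order of 128-256 PGs per pool rather than the current 64:

    # Defaults for pools created from now on (ceph.conf, [global] section):
    osd pool default pg num = 128
    osd pool default pgp num = 128

    # Or raise an existing pool in place; pg_num can only be increased,
    # never decreased ("rbd" here is an assumed pool name):
    $ ceph osd pool set rbd pg_num 128
    $ ceph osd pool set rbd pgp_num 128

With 3 pools at 128 PGs each you get 384 PGs on 12 OSDs, i.e. 32 per OSD, comfortably above the warning threshold of 20.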

>      monmap e1: 3 mons at {mon1=10.60.0.110:6789/0,mon2=10.60.0.111:6789/0,mon3=10.60.0.112:6789/0}, election epoch 6, quorum 0,1,2 mon1,mon2,mon3
>      mdsmap e6: 1/1/1 up {0=mds1=up:active}, 2 up:standby
>      osdmap e61: 12 osds: 12 up, 12 in
>       pgmap v103: 192 pgs, 3 pools, 9470 bytes data, 21 objects
> 


