active+undersized+degraded

Hi,

 

After managing to configure the OSD server, I created a pool "data" and removed the pool "rbd",

and now the cluster is stuck in active+undersized+degraded.

 

$ ceph status

    cluster 046b0180-dc3f-4846-924f-41d9729d48c8

     health HEALTH_WARN

            64 pgs degraded

            64 pgs stuck unclean

            64 pgs undersized

            too few PGs per OSD (21 < min 30)

     monmap e1: 3 mons at {alder=10.6.250.249:6789/0,ash=10.6.250.248:6789/0,aspen=10.6.250.247:6789/0}

            election epoch 6, quorum 0,1,2 aspen,ash,alder

     osdmap e53: 3 osds: 3 up, 3 in

            flags sortbitwise

      pgmap v95: 64 pgs, 1 pools, 0 bytes data, 0 objects

            107 MB used, 22335 GB / 22335 GB avail

                  64 active+undersized+degraded

 

$ ceph osd dump | grep 'replicated size'

pool 2 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 52 flags hashpspool stripe_width 0

 

Should I increase the number of PGs and PGPs?
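(For context, if raising the PG count does turn out to be the right response to the "too few PGs per OSD (21 < min 30)" warning, the usual sequence is to raise pg_num first and then pgp_num to match. The target of 128 below is only illustrative; the right value depends on OSD count and replica size.)

```shell
# Sketch only: raise pg_num first, then pgp_num to match.
# 128 is an illustrative target, not a recommendation for this cluster.
ceph osd pool set data pg_num 128
ceph osd pool set data pgp_num 128
```

Note that increasing pg_num splits existing PGs, and increasing pgp_num then triggers the data rebalancing, so some backfill activity is expected afterwards.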

 

--

Dan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
