And the osd tree:

$ ceph osd tree
ID WEIGHT   TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 21.81180 root default
-2 21.81180     host rimu
 0  7.27060         osd.0       up  1.00000          1.00000
 1  7.27060         osd.1       up  1.00000          1.00000
 2  7.27060         osd.2       up  1.00000          1.00000
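One thing the tree shows, assuming pool 'data' uses the default replicated ruleset (crush_ruleset 0): all three OSDs sit under a single host (rimu), and the default rule places each replica on a different host, so with size 2 there is no second host to hold a replica and every PG stays undersized/degraded. A minimal sketch of relaxing the failure domain to osd for a single-host cluster (file names are arbitrary):

$ ceph osd getcrushmap -o crushmap.bin        # export the current CRUSH map
$ crushtool -d crushmap.bin -o crushmap.txt   # decompile it to editable text
# in crushmap.txt, change the replicated rule's placement step from
#   step chooseleaf firstn 0 type host
# to
#   step chooseleaf firstn 0 type osd
$ crushtool -c crushmap.txt -o crushmap.new   # recompile the edited map
$ ceph osd setcrushmap -i crushmap.new        # inject it into the cluster

Once the rule may choose two different OSDs on the same host, the PGs should be able to go active+clean.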
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Dan Nica

Hi,

After managing to configure the osd server I created a pool “data” and removed pool “rbd”, and now the cluster is stuck in active+undersized+degraded:

$ ceph status
    cluster 046b0180-dc3f-4846-924f-41d9729d48c8
     health HEALTH_WARN
            64 pgs degraded
            64 pgs stuck unclean
            64 pgs undersized
            too few PGs per OSD (21 < min 30)
     monmap e1: 3 mons at {alder=10.6.250.249:6789/0,ash=10.6.250.248:6789/0,aspen=10.6.250.247:6789/0}
            election epoch 6, quorum 0,1,2 aspen,ash,alder
     osdmap e53: 3 osds: 3 up, 3 in
            flags sortbitwise
      pgmap v95: 64 pgs, 1 pools, 0 bytes data, 0 objects
            107 MB used, 22335 GB / 22335 GB avail
                  64 active+undersized+degraded

$ ceph osd dump | grep 'replicated size'
pool 2 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 52 flags hashpspool stripe_width 0

Should I increase the number of pgs and pgps?

--
Dan
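On the pg_num question: the usual rule of thumb is roughly 100 PGs per OSD, divided by the pool's replica size and rounded to a power of two. For the 3-OSD, size-2 pool shown, that works out to (3 x 100) / 2 = 150, so 128 or 256; 128 is an assumed target here, not a value from the thread. A sketch of the commands, assuming the pool name stays data:

$ ceph osd pool set data pg_num 128    # raise the placement group count
$ ceph osd pool set data pgp_num 128   # keep pgp_num in step with pg_num

Note that pg_num can only be increased, never decreased, and raising it will clear the "too few PGs per OSD" warning but will not by itself clear the undersized/degraded state if the second replica still cannot be placed.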