Hi,
I have created a cluster, and when I run ceph status it reports the wrong number of OSDs.
    cluster 6571de66-75e1-4da7-b1ed-15a8bfed0944
     health HEALTH_WARN
            2112 pgs stuck inactive
            2112 pgs stuck unclean
     monmap e1: 1 mons at {0=10.38.32.245:16789/0}
            election epoch 1, quorum 0 0
     osdmap e6: 2 osds: 0 up, 0 in
            flags sortbitwise
      pgmap v7: 2112 pgs, 3 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                2112 creating
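In case it helps, these are the standard ceph CLI calls I understand should give more detail on why the OSDs are reported down (please correct me if they are not right for this version):

    # per-OSD state as the monitors see it (up/down, in/out, weights)
    ceph osd dump | grep ^osd

    # detailed breakdown of the HEALTH_WARN and the stuck pgs
    ceph health detail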
I created only one OSD, but ceph osd tree shows two OSDs, and both are down.
    ID WEIGHT TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1      0 root default
    -3      0     rack unknownrack
    -2      0         host Test
     0      0             osd.0      down        0          1.00000
     1      0             osd.1      down        0          1.00000
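If osd.1 is just a stale entry left over from an earlier attempt, my understanding is that the usual cleanup is something like the sketch below (standard ceph CLI; treating osd.1 as unwanted is my assumption). I also notice that every CRUSH weight in the tree is 0, which as far as I know means no PGs can map to these OSDs even once they come up:

    # drop the stray OSD from the CRUSH map, auth database and osdmap
    ceph osd crush remove osd.1
    ceph auth del osd.1
    ceph osd rm 1

    # give the real OSD a non-zero CRUSH weight (1.0 is an arbitrary
    # placeholder; it is normally sized to the disk capacity in TB)
    ceph osd crush reweight osd.0 1.0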
On the OSD node I can see that the OSD daemon is running:
    root  3153     1  0 04:27 pts/0    00:00:00 /opt/ceph/bin/ceph-mon -i 0 --pid-file /ceph-test/var/run/ceph/mon.0.pid
    root  4696     1  0 04:42 ?        00:00:00 /opt/ceph/bin/ceph-osd -i 0 --pid-file /ceph-test/var/run/ceph/osd.0.pid
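To check whether the running osd.0 can actually reach the monitor, I understand the daemon's admin socket can be queried on the OSD node (the socket and log paths below are guesses based on my non-standard /ceph-test prefix):

    # ask the daemon directly for its state (booting, active, waiting for mon, ...)
    ceph daemon osd.0 status

    # or point at the admin socket explicitly, given the non-default run directory
    ceph --admin-daemon /ceph-test/var/run/ceph/ceph-osd.0.asok status

    # tail the OSD log for errors (path assumed from the same prefix)
    tail -n 50 /ceph-test/var/log/ceph/ceph-osd.0.log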
Could anyone please give me some pointers on where the issue might be?
Regards,
Muneendra.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com