ceph status showing wrong OSD count

Hi,

I have created a cluster, and when I run ceph status it shows the wrong number of OSDs.

 

cluster 6571de66-75e1-4da7-b1ed-15a8bfed0944
     health HEALTH_WARN
            2112 pgs stuck inactive
            2112 pgs stuck unclean
     monmap e1: 1 mons at {0=10.38.32.245:16789/0}
            election epoch 1, quorum 0 0
     osdmap e6: 2 osds: 0 up, 0 in
            flags sortbitwise
      pgmap v7: 2112 pgs, 3 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                2112 creating
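
To double-check what the monitors actually have registered, I can also dump the osdmap (standard ceph CLI; I would add -c pointing at this cluster's ceph.conf if it is not in the default location):

    # Summary of how many OSDs exist / are up / are in:
    ceph osd stat
    # One line per registered OSD id, with its up/down and in/out state:
    ceph osd dump | grep '^osd'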

 

I have created only one OSD, but ceph osd tree also shows two OSDs, and both are down.

ID WEIGHT TYPE NAME              UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1      0 root default
-3      0     rack unknownrack
-2      0         host Test
 0      0 osd.0                     down        0          1.00000
 1      0 osd.1                     down        0          1.00000
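
My guess is that osd.1 is a stray entry, e.g. from an extra "ceph osd create" that ran during setup. If that turns out to be the case, is the usual removal sequence below the right way to clean it up? This is only a sketch, I have not run it yet:

    # Remove the stray id from the CRUSH map, the auth database and the osdmap:
    ceph osd crush remove osd.1
    ceph auth del osd.1
    ceph osd rm 1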


On the OSD node, I can see that the OSD daemon is running.

 

root        3153       1  0 04:27 pts/0    00:00:00 /opt/ceph/bin/ceph-mon -i 0 --pid-file /ceph-test/var/run/ceph/mon.0.pid
root        4696       1  0 04:42 ?        00:00:00 /opt/ceph/bin/ceph-osd -i 0 --pid-file /ceph-test/var/run/ceph/osd.0.pid
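
Since the ceph-osd process is up but never gets marked up/in, I assume the next step is to ask the daemon itself and to look at its log. The socket and log paths below are only my guess, based on the /ceph-test prefix used for the pid files above:

    # Query the running osd.0 through its admin socket (default socket name is
    # <cluster>-osd.<id>.asok; the /ceph-test prefix is an assumption):
    ceph --admin-daemon /ceph-test/var/run/ceph/ceph-osd.0.asok status
    # Look for errors about reaching the monitor or binding to an address:
    tail -n 50 /ceph-test/var/log/ceph/ceph-osd.0.log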

 

Could anyone please give me some pointers on where the issue might be?


Regards,

Muneendra.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
