Since your OSDs are either not running or can't communicate with the monitors, there should be some indication from those steps.
-Greg
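(For the archive: the kind of checks in question probably looks something like the following, assuming Ubuntu 14.04 with the stock upstart jobs, the default cluster name "ceph", and an admin keyring present on the OSD hosts. Adjust the id and paths for osd.1 on ceph3.)

# On ceph2: is the osd.0 daemon actually running?
sudo status ceph-osd id=0            # upstart: "start/running" vs "stop/waiting"

# If it is stopped, start it and read the log for the reason it exits
sudo start ceph-osd id=0
sudo tail -n 50 /var/log/ceph/ceph-osd.0.log

# Can this host reach the monitor at all? Any ceph command run from the OSD
# node has to talk to the mon at 192.168.101.41:6789 to succeed
ceph -s
nc -zv 192.168.101.41 6789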
On Sun, Nov 2, 2014 at 6:44 AM Shiv Raj Singh <virk.shiv@xxxxxxxxx> wrote:
Hi All

I am new to Ceph and I have been trying to configure a 3-node Ceph cluster with 1 monitor and 2 OSD nodes. I have reinstalled and recreated the cluster three times and I am stuck against the wall. My monitor is working as desired (I guess), but the status of the OSDs is down. I am following this link http://docs.ceph.com/docs/v0.80.5/install/manual-deployment/ for configuring the OSDs. The reason I am not using ceph-deploy is that I want to understand the technology.

Can someone please help me understand what I'm doing wrong !! :-) !!

Some useful diagnostic information:

ceph2:~$ ceph osd tree
# id    weight  type name       up/down reweight
-1      2       root default
-3      1               host ceph2
0       1                       osd.0   down    0
-2      1               host ceph3
1       1                       osd.1   down    0

ceph health detail
HEALTH_WARN 64 pgs stuck inactive; 64 pgs stuck unclean
pg 0.22 is stuck inactive since forever, current state creating, last acting []
pg 0.21 is stuck inactive since forever, current state creating, last acting []
pg 0.20 is stuck inactive since forever, current state creating, last acting []

ceph -s
    cluster a04ee359-82f8-44c4-89b5-60811bef3f19
     health HEALTH_WARN 64 pgs stuck inactive; 64 pgs stuck unclean
     monmap e1: 1 mons at {ceph1=192.168.101.41:6789/0}, election epoch 1, quorum 0 ceph1
     osdmap e9: 2 osds: 0 up, 0 in
      pgmap v10: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 64 creating

My configuration is as below:

sudo nano /etc/ceph/ceph.conf

[global]
fsid = a04ee359-82f8-44c4-89b5-60811bef3f19
mon initial members = ceph1
mon host = 192.168.101.41
public network = 192.168.101.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx

[osd]
osd journal size = 1024
filestore xattr use omap = true
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1

[mon.ceph1]
host = ceph1
mon addr = 192.168.101.41:6789

[osd.0]
host = ceph2
#devs = {path-to-device}

[osd.1]
host = ceph3
#devs = {path-to-device}

OSD mount locations:

On ceph2
/dev/sdb1       5.0G  1.1G  4.0G  21% /var/lib/ceph/osd/ceph-0

On ceph3
/dev/sdb1       5.0G  1.1G  4.0G  21% /var/lib/ceph/osd/ceph-1

My Linux OS:

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 14.04 LTS
Release:        14.04
Codename:       trusty

Regards
Shiv
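(For later readers: the crush tree above already shows host ceph2/ceph3 and osd.0/osd.1, so the piece most likely missing is keying and starting the daemons, i.e. the tail end of the manual-deployment long form. A rough sketch for osd.0 on ceph2 follows; repeat for osd.1 on ceph3. The exact mon caps string and init commands vary a little between releases, so treat this as a sketch rather than the authoritative sequence.)

# osd.0 on ceph2 (data directory already mounted at /var/lib/ceph/osd/ceph-0)
sudo ceph-osd -i 0 --mkfs --mkkey
sudo ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' \
    -i /var/lib/ceph/osd/ceph-0/keyring

# start the daemon: upstart job on Ubuntu 14.04 ...
sudo start ceph-osd id=0
# ... or via the sysvinit script, which uses the [osd.0] host = ceph2 entry in ceph.conf
sudo /etc/init.d/ceph start osd.0

# then recheck
ceph osd tree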
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com