Giant release: OSDs down

Hi All

I am new to Ceph and have been trying to configure a 3-node cluster with 1 monitor and 2 OSD nodes. I have reinstalled and recreated the cluster three times and I am stuck against a wall. My monitor is working as desired (I think), but the OSDs show as down. I am following http://docs.ceph.com/docs/v0.80.5/install/manual-deployment/ to configure the OSDs; the reason I am not using ceph-deploy is that I want to understand the technology.
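
For reference, these are roughly the OSD creation steps I have been following from that guide (shown here for osd.0 on ceph2; the {uuid} below is just a placeholder, not my actual value):

uuidgen                                      # generate a UUID for the new OSD
ceph osd create {uuid}                       # returns the osd number (0 for the first OSD)
sudo mkdir -p /var/lib/ceph/osd/ceph-0       # the data partition is mounted here (see below)
sudo ceph-osd -i 0 --mkfs --mkkey --osd-uuid {uuid}
sudo ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring
ceph osd crush add-bucket ceph2 host
ceph osd crush move ceph2 root=default
ceph osd crush add osd.0 1.0 host=ceph2

(and the same again for osd.1 on ceph3)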

Can someone please help me understand what I'm doing wrong? :-)

Some useful diagnostic information:
ceph2:~$ ceph osd tree
# id    weight  type name       up/down reweight
-1      2       root default
-3      1               host ceph2
0       1                       osd.0   down    0
-2      1               host ceph3
1       1                       osd.1   down    0
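
I assume the ceph-osd daemons themselves have to be running before the OSDs can be marked up. Is this the right way to start and check them on Ubuntu 14.04 (Upstart), or am I missing a step here?

sudo start ceph-osd id=0          # on ceph2 (id=1 on ceph3)
ps aux | grep ceph-osd            # to see whether the daemon process is actually running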

ceph health detail
HEALTH_WARN 64 pgs stuck inactive; 64 pgs stuck unclean
pg 0.22 is stuck inactive since forever, current state creating, last acting []
pg 0.21 is stuck inactive since forever, current state creating, last acting []
pg 0.20 is stuck inactive since forever, current state creating, last acting []


ceph -s
    cluster a04ee359-82f8-44c4-89b5-60811bef3f19
     health HEALTH_WARN 64 pgs stuck inactive; 64 pgs stuck unclean
     monmap e1: 1 mons at {ceph1=192.168.101.41:6789/0}, election epoch 1, quorum 0 ceph1
     osdmap e9: 2 osds: 0 up, 0 in
      pgmap v10: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
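
Since ceph -s reports the OSDs as 0 up / 0 in, I am guessing the daemons either never start or cannot reach the monitor. I was planning to look at the OSD logs next (assuming the default log location):

tail -n 50 /var/log/ceph/ceph-osd.0.log   # on ceph2
tail -n 50 /var/log/ceph/ceph-osd.1.log   # on ceph3

If there is something more useful to look at, please let me know.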


My configuration is below:

sudo nano /etc/ceph/ceph.conf

[global]

        fsid = a04ee359-82f8-44c4-89b5-60811bef3f19
        mon initial members = ceph1
        mon host = 192.168.101.41
        public network = 192.168.101.0/24

        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx



[osd]
        osd journal size = 1024
        filestore xattr use omap = true

        osd pool default size = 2
        osd pool default min size = 1
        osd pool default pg num = 333
        osd pool default pgp num = 333
        osd crush chooseleaf type = 1

[mon.ceph1]
        host = ceph1
        mon addr = 192.168.101.41:6789


[osd.0]
        host = ceph2
        #devs = {path-to-device}

[osd.1]
        host = ceph3
        #devs = {path-to-device}

..........
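
If it helps, I can also paste the output of these from the monitor; I assume they should show osd.0 and osd.1 registered along with their keys:

ceph osd dump      # should list osd.0 and osd.1
ceph auth list     # should include osd.0 and osd.1 entries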

OSD mount locations:

On ceph2
/dev/sdb1                              5.0G  1.1G  4.0G  21% /var/lib/ceph/osd/ceph-0

On ceph3
/dev/sdb1                              5.0G  1.1G  4.0G  21% /var/lib/ceph/osd/ceph-1
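
Is there a quick sanity check I should run on the OSD data directories themselves? I assume something like this (the listed contents are what I would expect after the --mkfs step, not an actual capture):

ls /var/lib/ceph/osd/ceph-0               # expecting ceph_fsid, current/, fsid, journal, keyring, whoami, ...
cat /var/lib/ceph/osd/ceph-0/whoami       # should print 0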

My Linux OS:

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 14.04 LTS
Release:        14.04
Codename:       trusty

Regards 

Shiv 


