Re: unable to start monitor

Srikanth,

See if this helps:

sudo initctl list | grep ceph   (should list all the Ceph daemons)

sudo start ceph-mon-all   (to start all the Ceph monitors)
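
If only the monitor on this host needs starting, the upstart job can also be addressed by id (a sketch, assuming the monitor was deployed under upstart and its id is monitor1):

sudo status ceph-mon id=monitor1   # check whether the job is known/running
sudo start ceph-mon id=monitor1    # start just this monitor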

Thanks
-Krishna





On May 7, 2015, at 1:35 PM, Srikanth Madugundi <srikanth.madugundi@xxxxxxxxx> wrote:

Hi,

I am setting up a local instance of a Ceph cluster with the latest source from GitHub. The build succeeded and the installation was successful, but I could not start the monitor.

The "ceph start" command returns immediately and does not output anything.

$ sudo /etc/init.d/ceph start mon.monitor1

$
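
Running the init script with its verbose flag may show which daemon sections it matches (a sketch; this assumes the stock init-ceph sysvinit script, which accepts -v/--verbose):

sudo /etc/init.d/ceph -v start mon.monitor1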

$ ls -l /var/lib/ceph/mon/ceph-monitor1/

total 8

-rw-r--r-- 1 root root    0 May  7 20:27 done

-rw-r--r-- 1 root root   77 May  7 19:12 keyring

drwxr-xr-x 2 root root 4096 May  7 19:12 store.db

-rw-r--r-- 1 root root    0 May  7 20:26 sysvinit

-rw-r--r-- 1 root root    0 May  7 20:09 upstart



The log file does not seem to have any details either:


$ cat /var/log/ceph/ceph-mon.monitor1.log 


2015-05-07 19:12:13.356389 7f3f06bdb880 -1 did not load config file, using default settings.
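
The "did not load config file, using default settings" line suggests the daemon is not reading /etc/ceph/ceph.conf. One way to see what is happening is to run the monitor in the foreground (a sketch; ceph-mon's -d flag runs it in debug mode in the foreground, logging to stderr):

sudo ceph-mon -i monitor1 -c /etc/ceph/ceph.conf -d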


$ cat /etc/ceph/ceph.conf 

[global]

mon host = 15.43.33.21

fsid = 92f859df-8b27-466a-8d44-01af2b7ea7e6

mon initial members = monitor1


# Enable authentication

auth cluster required = cephx

auth service required = cephx

auth client required = cephx    


# POOL / PG / CRUSH

osd pool default size = 3  # Write an object 3 times

osd pool default min size = 1 # Allow writing one copy in a degraded state


# Ensure you have a realistic number of placement groups. We recommend

# approximately 200 per OSD, i.e. the total number of OSDs multiplied by 200

# and divided by the number of replicas (osd pool default size).

# !! BE CAREFUL !!

# You probably should never rely on the default numbers when creating a pool!

osd pool default pg num = 32

osd pool default pgp num = 32
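
# (Illustrative arithmetic, not part of the original config: with e.g. 30 OSDs,
# 3 replicas and ~200 PGs per OSD, 30 * 200 / 3 = 2000, which rounds up to the
# next power of two, 2048 PGs per pool.)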


#log file = /home/y/logs/ceph/$cluster-$type.$id.log


# Logging

debug paxos = 0

debug throttle = 0


keyring = /etc/ceph/ceph.client.admin.keyring

#run dir = /home/y/var/run/ceph


[mon]

debug mon = 10

debug ms = 1

# We found that when disk usage reaches 94% no more files can be written

# (no free space), so we lower the full ratio and start data migration

# before the disk becomes full

mon osd full ratio = 0.9

#mon data =

mon osd down out interval = 172800 # 2 * 24 * 60 * 60 seconds

# Ceph monitors need to be told how many down reports must be seen from different

# OSDs before an OSD can be marked down; this should be greater than the number of

# OSDs per OSD host

mon osd min down reporters = 12

#keyring = /home/y/conf/ceph/ceph.mon.keyring



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

