Re: IO error: lock /var/lib/ceph/mon/ceph-ceph-node1/store.db/LOCK: Resource temporarily unavailable

Hello Jan

I have faced similar errors and they are really annoying. Here is what worked for me.

 1. Your ceph-node1 is now a monitor node, but it cannot form quorum.
 2. Check the monitor logs on ceph-node1 (the monitor data lives under /var/lib/ceph/mon; the logs are under /var/log/ceph by default). They will give you more insight into what is failing; see the quick check after this list.
 3. You might need to manually add the monitor to the cluster.
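
For example, a rough check (assuming the default log location and the data path shown in your output; the "LOCK: Resource temporarily unavailable" error usually means another ceph-mon process is already holding the store):

#### check the monitor log and look for a process already holding the store lock
tail -n 50 /var/log/ceph/ceph-mon.ceph-node1.log
ps -ef | grep ceph-mon
fuser -v /var/lib/ceph/mon/ceph-ceph-node1/store.db/LOCK    # prints the PID holding the LOCK file, if any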


  Below is my solution (change the hostnames and IP addresses to match your environment):



#### log in on the monitor node and run commands like the following
ceph mon add ceph-mon2 192.168.1.33:6789                  # add the new monitor to the cluster's monmap
ceph-mon -i ceph-mon2 --public-addr 192.168.1.33:6789     # start the monitor daemon on its public address
service ceph status                                       # confirm the monitor is listed as running
service ceph restart
ps -ef | grep ceph                                        # double-check the ceph processes
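
Once the monitor daemon is up, you can verify that it joined the monmap and the quorum, for example:

ceph mon stat          # prints the monmap epoch, the monitors, and the current quorum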

If the ceph service is not shown as running by the service ceph status command, but ps -ef | grep ceph shows a monitor process, kill that process manually and restart the ceph services; the monitor should then start working:

# service ceph status
# ps -ef | grep ceph              # note the PID of the stale ceph-mon process
# kill -9 6554                    # replace 6554 with the PID from the ps output
# service ceph restart
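
After the restart you can also query the monitor directly over its admin socket to confirm it is really up (the same mon_status call that ceph-deploy uses; adjust the monitor name in the socket path):

# ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon2.asok mon_status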

### Finally you should see that your ceph cluster is healthy, with all the OSDs and monitors up and running

[root@ceph-mon1 ~]# ceph status
  cluster 0ff473d9-0670-42a3-89ff-81bbfb2e676a
   health HEALTH_OK
   monmap e3: 3 mons at {ceph-mon1=192.168.1.38:6789/0,ceph-mon2=192.168.1.33:6789/0,ceph-mon3=192.168.1.31:6789/0}, election epoch 10, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
   osdmap e97: 8 osds: 8 up, 8 in
    pgmap v246: 192 pgs: 192 active+clean; 0 bytes data, 2352 MB used, 3509 GB / 3512 GB avail
   mdsmap e1: 0/0/1 up
[root@ceph-mon1 ~]#


 All the best :-)

Many Thanks
Karan Singh


----- Original Message -----
From: "Jan Kalcic" <jandot@xxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Thursday, 5 December, 2013 11:16:32 AM
Subject:  IO error: lock /var/lib/ceph/mon/ceph-ceph-node1/store.db/LOCK: Resource temporarily unavailable

Hi all,

approaching Ceph today for the first time, so I apologize for the basic
questions; I promise I will do all my homework :-)

Following the "Storage Cluster Quick Start" documentation, I soon got
stuck with the issue below while creating the first mon:

ceph-admin # ceph-deploy mon create ceph-node1
[ceph_deploy.cli][INFO  ] Invoked (1.3.3): /usr/bin/ceph-deploy mon 
create ceph-node1
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-node1
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-node1 ...
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.mon][INFO  ] distro info: SUSE Linux Enterprise Server 11 
x86_64
[ceph-node1][DEBUG ] determining if provided host has same hostname in 
remote
[ceph-node1][DEBUG ] get remote short hostname
[ceph-node1][DEBUG ] deploying mon to ceph-node1
[ceph-node1][DEBUG ] get remote short hostname
[ceph-node1][DEBUG ] remote hostname: ceph-node1
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node1][DEBUG ] create the mon path if it does not exist
[ceph-node1][DEBUG ] checking for done path: 
/var/lib/ceph/mon/ceph-ceph-node1/done
[ceph-node1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-node1][DEBUG ] create the init path if it does not exist
[ceph-node1][DEBUG ] locating the `service` executable...
[ceph-node1][INFO  ] Running command: sudo rcceph -c /etc/ceph/ceph.conf 
start mon.ceph-node1
[ceph-node1][DEBUG ] === mon.ceph-node1 ===
[ceph-node1][DEBUG ] Starting Ceph mon.ceph-node1 on ceph-node1...
[ceph-node1][DEBUG ] failed: 'ulimit -n 32768;  /usr/bin/ceph-mon -i 
ceph-node1 --pid-file /var/run/ceph/mon.ceph-node1.pid -c 
/etc/ceph/ceph.conf '
[ceph-node1][DEBUG ] Starting ceph-create-keys on ceph-node1...
[ceph-node1][WARNIN] No data was received after 7 seconds, disconnecting...
[ceph-node1][INFO  ] Running command: sudo ceph --cluster=ceph 
--admin-daemon /var/run/ceph/ceph-mon.ceph-node1.asok mon_status
[ceph-node1][ERROR ] admin_socket: exception getting command 
descriptions: [Errno 2] No such file or directory
[ceph-node1][WARNIN] monitor: mon.ceph-node1, might not be running yet
[ceph-node1][INFO  ] Running command: sudo ceph --cluster=ceph 
--admin-daemon /var/run/ceph/ceph-mon.ceph-node1.asok mon_status
[ceph-node1][ERROR ] admin_socket: exception getting command 
descriptions: [Errno 2] No such file or directory
[ceph-node1][WARNIN] monitor ceph-node1 does not exist in monmap
[ceph-node1][WARNIN] neither `public_addr` nor `public_network` keys are 
defined for monitors
[ceph-node1][WARNIN] monitors may not be able to form quorum

It seems ceph-mon does not start successfully; in fact:

ceph-node1 # sudo /usr/bin/ceph-mon -i ceph-node1 --pid-file 
/var/run/ceph/mon.ceph-node1.pid -c /etc/ceph/ceph.conf -d
2013-12-05 10:06:27.429602 7fe06baf9780  0 ceph version 0.72.1 
(4d923861868f6a15dcb33fef7f50f674997322de), process ceph-mon, pid 35662
IO error: lock /var/lib/ceph/mon/ceph-ceph-node1/store.db/LOCK: Resource 
temporarily unavailable
IO error: lock /var/lib/ceph/mon/ceph-ceph-node1/store.db/LOCK: Resource 
temporarily unavailable
2013-12-05 10:06:27.436295 7fe06baf9780 -1 failed to create new leveldb 
store
ceph@ceph-node1:/var/run/ceph> sudo /usr/bin/ceph-mon -i ceph-node1 
--pid-file /var/run/ceph/mon.ceph-node1.pid -c /etc/ceph/ceph.conf -d
IO error: lock /var/lib/ceph/mon/ceph-ceph-node1/store.db/LOCK: Resource 
temporarily unavailable
IO error: lock /var/lib/ceph/mon/ceph-ceph-node1/store.db/LOCK: Resource 
temporarily unavailable
2013-12-05 10:06:43.961756 7fd71d27c780  0 ceph version 0.72.1 
(4d923861868f6a15dcb33fef7f50f674997322de), process ceph-mon, pid 36216
2013-12-05 10:06:43.964079 7fd71d27c780 -1 failed to create new leveldb 
store

Thanks,
Jan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



