Error initializing cluster client: Error

On 7 July 2014, at 22:07, Gregory Farnum <greg at inktank.com> wrote:

> Do you have a ceph.conf file that the "ceph" tool can access in a
> known location? Try specifying it manually with the "-c ceph.conf"

Genius! -c helped!
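
For the archives, the working invocation is simply the usual commands with an explicit config path ("/ceph.conf" is where the file is mounted in my containers, see below; adjust the path to your setup):

    ceph -c /ceph.conf -w        # watch cluster events, explicit config path
    ceph -c /ceph.conf status    # one-shot cluster status, same explicit path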

I have installed all Ceph components (monitors and OSDs) in separate Docker containers; a single ceph.conf is mounted into each container as /ceph.conf, and the "ceph" tool sometimes can read it and sometimes cannot... Maybe it's a Docker issue rather than Ceph itself.
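
The mount itself is an ordinary read-only bind mount, roughly like this (the image name, container name, and host path here are placeholders, not my exact setup):

    docker run -d --name ceph-mon-1 --net=host \
        -v /srv/ceph/ceph.conf:/ceph.conf:ro \
        my-ceph-mon-image

The :ro suffix keeps a container from modifying the shared config.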

Many thanks,
Pavel.



> argument. You can also add "--debug-ms 1 --debug-monc 10" and see if
> it outputs more useful error logs.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
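
(In concrete form, Greg's suggestion combined with the explicit config path from above would be:

    ceph -c /ceph.conf --debug-ms 1 --debug-monc 10 -w

--debug-ms and --debug-monc raise the log verbosity of the messenger and monitor-client subsystems, respectively.)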
> 
> 
> On Sat, Jul 5, 2014 at 2:23 AM, Pavel V. Kaygorodov <pasha at inasan.ru> wrote:
>> Hi!
>> 
>> I still have the same problem with "Error initializing cluster client: Error" on all monitor nodes:
>> 
>> root@bastet-mon2:~# ceph -w
>> Error initializing cluster client: Error
>> 
>> root@bastet-mon2:~# ceph --admin-daemon /var/run/ceph/ceph-mon.2.asok mon_status
>> { "name": "2",
>> "rank": 1,
>> "state": "peon",
>> "election_epoch": 1566,
>> "quorum": [
>>       0,
>>       1,
>>       2],
>> "outside_quorum": [],
>> "extra_probe_peers": [],
>> "sync_provider": [],
>> "monmap": { "epoch": 3,
>>     "fsid": "fffeafa2-a664-48a7-979a-517e3ffa0da1",
>>     "modified": "2014-06-19 18:16:01.074917",
>>     "created": "2014-06-19 18:14:43.350501",
>>     "mons": [
>>           { "rank": 0,
>>             "name": "1",
>>             "addr": "10.92.8.80:6789\/0"},
>>           { "rank": 1,
>>             "name": "2",
>>             "addr": "10.92.8.81:6789\/0"},
>>           { "rank": 2,
>>             "name": "3",
>>             "addr": "10.92.8.82:6789\/0"}]}}
>> 
>> root@bastet-mon2:~# ceph --admin-daemon /var/run/ceph/ceph-mon.2.asok quorum_status
>> { "election_epoch": 1566,
>> "quorum": [
>>       0,
>>       1,
>>       2],
>> "quorum_names": [
>>       "1",
>>       "2",
>>       "3"],
>> "quorum_leader_name": "1",
>> "monmap": { "epoch": 3,
>>     "fsid": "fffeafa2-a664-48a7-979a-517e3ffa0da1",
>>     "modified": "2014-06-19 18:16:01.074917",
>>     "created": "2014-06-19 18:14:43.350501",
>>     "mons": [
>>           { "rank": 0,
>>             "name": "1",
>>             "addr": "10.92.8.80:6789\/0"},
>>           { "rank": 1,
>>             "name": "2",
>>             "addr": "10.92.8.81:6789\/0"},
>>           { "rank": 2,
>>             "name": "3",
>>             "addr": "10.92.8.82:6789\/0"}]}}
>> 
>> root@bastet-mon2:~# ceph --admin-daemon /var/run/ceph/ceph-mon.2.asok version
>> {"version":"0.80.1"}
>> 
>> /////////////////////////////////
>> 
>> The situation is the same on all 3 monitor nodes, but the cluster is alive and all clients work fine.
>> Any ideas how to fix this?
>> 
>> Pavel.
>> 

