And more interestingly, 0.77-601-ge92d602 from 25 Feb does *not* have
this issue - so it looks like something in the 0.77 development stream
between 25 Feb and 18 Mar is triggering this.
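(If it comes to narrowing that window down further, a git bisect between
those two builds should point at the offending commit - just a rough
sketch of the approach, assuming the 18 Mar build's commit can be
identified from its -g<sha> suffix:

$ cd ceph
$ git bisect start
$ git bisect good e92d602                   # the 25 Feb build that works
$ git bisect bad <sha of the 18 Mar build>  # placeholder - I don't have that sha to hand
$ # then rebuild and retest at each step, marking each commit good/bad
$ # until it converges on the first bad one

- not something I've run yet, just the idea.)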
On 25/03/14 16:52, Mark Kirkwood wrote:
Further - checking with 0.77 from 18th Mar shows the same problem, but
0.73 from 12 Dec 2013 does not have this issue. So it looks like it is
not a 0.78-specific problem, but it is some sort of problem all the same!
Regards
Mark
On 25/03/14 15:09, Mark Kirkwood wrote:
Fresh clone and rebuild results in the same error.
$ /usr/bin/ceph --version
ceph version 0.78-325-ge5a4f5e (e5a4f5ed005c9349be94b19ef33d6fe08271c798)
$ /usr/bin/ceph-mon --version
ceph version 0.78-325-ge5a4f5e (e5a4f5ed005c9349be94b19ef33d6fe08271c798)
On 25/03/14 14:16, Mark Kirkwood wrote:
Yeah, that is my feeling too - however both ceph and ceph-mon claim
to be the same version...and the dates on the various binaries are
all today. But I'll rebuild anyway to double check!
Thanks
Mark
On 25/03/14 13:57, Gregory Farnum wrote:
That is pretty strange, but I think probably there's somehow a
mismatch between the installed versions. Can you check with the
--version flag on both binaries?
On Monday, March 24, 2014, Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx> wrote:
Hi,
I'm redeploying my development cluster after building 0.78 from source
on Ubuntu 14.04. The version is ceph version 0.78-325-ge5a4f5e.
So proceeding as usual:
$ ceph-deploy new vedavec
$ ceph-deploy mon create vedavec
The monitor comes up:
[vedavec][DEBUG ] ********************************************************************************
[vedavec][DEBUG ] status for monitor: mon.vedavec
[vedavec][DEBUG ] {
[vedavec][DEBUG ]   "election_epoch": 2,
[vedavec][DEBUG ]   "extra_probe_peers": [],
[vedavec][DEBUG ]   "monmap": {
[vedavec][DEBUG ]     "created": "0.000000",
[vedavec][DEBUG ]     "epoch": 1,
[vedavec][DEBUG ]     "fsid": "aa31c65c-bc94-4e19-940e-2e124c52ed2e",
[vedavec][DEBUG ]     "modified": "0.000000",
[vedavec][DEBUG ]     "mons": [
[vedavec][DEBUG ]       {
[vedavec][DEBUG ]         "addr": "192.168.2.63:6789/0",
[vedavec][DEBUG ]         "name": "vedavec",
[vedavec][DEBUG ]         "rank": 0
[vedavec][DEBUG ]       }
[vedavec][DEBUG ]     ]
[vedavec][DEBUG ]   },
[vedavec][DEBUG ]   "name": "vedavec",
[vedavec][DEBUG ]   "outside_quorum": [],
[vedavec][DEBUG ]   "quorum": [
[vedavec][DEBUG ]     0
[vedavec][DEBUG ]   ],
[vedavec][DEBUG ]   "rank": 0,
[vedavec][DEBUG ]   "state": "leader",
[vedavec][DEBUG ]   "sync_provider": []
[vedavec][DEBUG ] }
[vedavec][DEBUG ] ********************************************************************************
[vedavec][INFO ] monitor: mon.vedavec is running
[vedavec][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vedavec.asok mon_status
but I see that ceph-create-keys is hanging:
root 7820 0.1 0.0 138596 12372 ? Ssl 13:10 0:00 /usr/bin/ceph-mon --cluster=ceph -i vedavec -f
root 7822 0.0 0.0 34040 7508 ? Ss 13:10 0:00 python /usr/sbin/ceph-create-keys --cluster=ceph -i vedavec
root 7839 0.2 0.0 426604 16264 ? Sl 13:10 0:00 python /usr/bin/ceph --cluster=ceph --name=mon. --keyring=/var/lib/ceph/mon/ceph-vedavec/keyring auth get-or-create client.admin mon allow * osd allow * mds allow
In fact it's the auth get-or-create process that's the issue:
$ sudo ceph --name=mon. --keyring=/var/lib/ceph/mon/ceph-vedavec/keyring auth get
2014-03-25 13:22:31.591282 7fc048840700 0 -- 192.168.2.63:0/1010637 >> 192.168.2.63:6789/0 pipe(0x7fc04c0202d0 sd=3 :43428 s=1 pgs=0 cs=0 l=1 c=0x7fc04c020530).connect protocol feature mismatch, my fffffffff < peer 1fffffffff missing 1000000000
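(For what it's worth, the difference between those two masks comes out to
a single bit - a quick check, nothing ceph-specific, just the bitmask
arithmetic on the values the messenger printed:

$ printf '0x%x\n' $(( 0x1fffffffff & ~0xfffffffff ))
0x1000000000

i.e. the monitor is advertising one feature bit - bit 36 - that the client
side doesn't claim; which feature that corresponds to should be visible in
src/include/ceph_features.h for the build in question.)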
So some sort of feature mismatch seems to be happening between the ceph
client and ceph-mon - which seems a bit strange, as they are both from
the same 0.78 checkout. Other instances of this type of error that I've
seen are usually to do with kernel (rbd etc.) clients lacking the new
features, but I am a bit puzzled about this case - any thoughts?
Regards
Mark
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com