Thank you.

After sending that post, I completely removed the monitor and rebuilt it with ceph-deploy.
In the logs now:
2017-01-07 21:12:38.113534 7fa9613fd700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
2017-01-07 21:12:38.113546 7fa9613fd700 0 -- 10.10.10.138:6789/0 >> 10.10.10.103:6789/0 pipe(0x55feb2e90000 sd=12 :50266 s=1 pgs=0 cs=0 l=0 c=0x55feb2ca0a80).failed verifying authorize reply
2017-01-07 21:12:38.114529 7fa95787b700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
2017-01-07 21:12:38.114567 7fa95787b700 0 -- 10.10.10.138:6789/0 >> 10.10.10.252:6789/0 pipe(0x55feb2e91400 sd=11 :38690 s=1 pgs=0 cs=0 l=0 c=0x55feb2ca0c00).failed verifying authorize reply
2017-01-07 21:12:40.114522 7fa9613fd700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
2017-01-07 21:12:40.114542 7fa9613fd700 0 -- 10.10.10.138:6789/0 >> 10.10.10.103:6789/0 pipe(0x55feb2e90000 sd=11 :50278 s=1 pgs=0 cs=0 l=0 c=0x55feb2ca0a80).failed verifying authorize reply
2017-01-07 21:12:40.115706 7fa95787b700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
2017-01-07 21:12:40.115721 7fa95787b700 0 -- 10.10.10.138:6789/0 >> 10.10.10.252:6789/0 pipe(0x55feb2e91400 sd=12 :38702 s=1 pgs=0 cs=0 l=0 c=0x55feb2ca0c00).failed verifying authorize reply
2017-01-07 21:12:41.621916 7fa956f79700 0 cephx: verify_authorizer could not decrypt ticket info: error: NSS AES final round failed: -8190
2017-01-07 21:12:41.621929 7fa956f79700 0 mon.alex-desktop@1(probing) e0 ms_verify_authorizer bad authorizer from mon 10.10.10.103:6789/0
2017-01-07 21:12:41.621944 7fa956f79700 0 -- 10.10.10.138:6789/0 >> 10.10.10.103:6789/0 pipe(0x55feb2fb5400 sd=21 :6789 s=0 pgs=0 cs=0 l=0 c=0x55feb2ca1500).accept: got bad authorizer
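These "verify_reply couldn't decrypt" / "bad authorizer" errors usually mean the new monitor's mon. secret does not match the key the existing quorum uses. A minimal check sketch, assuming the default ceph-deploy paths; the /tmp/keyring-alpha copy is hypothetical and would first have to be fetched from a healthy monitor:

```shell
# Pull the base64 secret out of a [mon.] keyring file.
extract_key() {
    awk -F' = ' '$1 ~ /key/ {print $2; exit}' "$1"
}

new_mon=/var/lib/ceph/mon/ceph-alex-desktop/keyring
# Hypothetical copy from a healthy monitor, e.g.:
#   scp alpha:/var/lib/ceph/mon/ceph-alpha/keyring /tmp/keyring-alpha
good_mon=/tmp/keyring-alpha

if [ -r "$new_mon" ] && [ -r "$good_mon" ]; then
    if [ "$(extract_key "$new_mon")" = "$(extract_key "$good_mon")" ]; then
        echo "mon. keys match"
    else
        echo "mon. keys differ -- this would explain the cephx decrypt errors"
    fi
fi
```

If the keys differ, re-injecting the correct keyring (or letting ceph-deploy regenerate the mon data directory) should clear the decrypt failures.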
$ sudo ceph -s
    cluster f5aba719-4856-4ae2-a5d4-f9ff0f614b60
     health HEALTH_WARN
            512 pgs degraded
            348 pgs stale
            512 pgs stuck unclean
            512 pgs undersized
            6 requests are blocked > 32 sec
            recovery 25013/50026 objects degraded (50.000%)
            mds cluster is degraded
            1 mons down, quorum 0,2 alpha,toshiba-laptop
     monmap e17: 3 mons at {alex-desktop=10.10.10.138:6789/0,alpha=10.10.10.103:6789/0,toshiba-laptop=10.10.10.252:6789/0}
            election epoch 806, quorum 0,2 alpha,toshiba-laptop
      fsmap e201858: 1/1/1 up {0=1=up:replay}
     osdmap e200229: 3 osds: 2 up, 2 in; 85 remapped pgs
            flags sortbitwise
      pgmap v4088774: 512 pgs, 4 pools, 50883 MB data, 25013 objects
            59662 MB used, 476 GB / 563 GB avail
            25013/50026 objects degraded (50.000%)
                 348 stale+active+undersized+degraded
                 164 active+undersized+degraded
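The status above shows alex-desktop present in the monmap but absent from the quorum (quorum 0,2 is alpha and toshiba-laptop). `ceph quorum_status --format json-pretty` or `ceph mon stat` on a working monitor reports the same thing directly; as a small illustration, a helper to diff the two name lists (monitor names taken from the monmap above):

```shell
# List monitors that appear in the monmap but not in the quorum.
# Both arguments are space-separated name lists, as reported by
# `ceph quorum_status` (monmap names vs. quorum_names).
missing_from_quorum() {
    monmap_names="$1"
    quorum_names="$2"
    for m in $monmap_names; do
        case " $quorum_names " in
            *" $m "*) ;;              # present in quorum, nothing to report
            *) echo "$m" ;;           # in monmap but outside quorum
        esac
    done
}

missing_from_quorum "alex-desktop alpha toshiba-laptop" "alpha toshiba-laptop"
# -> alex-desktop
```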
root@alex-desktop:/var/lib/ceph/mon/ceph-alex-desktop# ls -ls
total 8
0 -rw-r--r-- 1 ceph ceph 0 Jan 7 21:11 done
4 -rw------- 1 ceph ceph 77 Jan 7 21:05 keyring
4 drwxr-xr-x 2 ceph ceph 4096 Jan 7 21:10 store.db
0 -rw-r--r-- 1 ceph ceph 0 Jan 7 21:05 systemd
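Besides a mismatched keyring, cephx tickets are time-sensitive, so clock skew of more than a few seconds between monitors can produce the same decrypt failures even with NTP nominally running. A rough cross-check sketch, assuming ssh access to the other monitor hosts (hostnames from the monmap):

```shell
# Absolute difference between two epoch timestamps.
abs_diff() { echo $(( $1 > $2 ? $1 - $2 : $2 - $1 )); }

# Compare each remote monitor's clock against this host's.
for host in alpha toshiba-laptop; do
    if remote=$(ssh -o ConnectTimeout=5 "$host" date +%s 2>/dev/null); then
        echo "$host skew: $(abs_diff "$remote" "$(date +%s)")s"
    else
        echo "$host: unreachable"
    fi
done
```

Anything beyond the mon_clock_drift_allowed default (0.05s for quorum health, though cephx itself tolerates more) is worth fixing before digging further into keys.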
Very odd; I have never seen this issue on any of the other monitor deployments.
On Sat, Jan 7, 2017 at 8:54 PM, Shinobu Kinjo <skinjo@xxxxxxxxxx> wrote:
Using ``ceph-deploy`` will save your life:
# https://github.com/ceph/ceph/blob/master/doc/start/quick-ceph-deploy.rst
* Please look at: Adding Monitors

If you are using centos or similar, the latest package is available here:

Regards,

On Sun, Jan 8, 2017 at 9:53 AM, Alex Evonosky <alex.evonosky@xxxxxxxxx> wrote:
Thank you for the reply! I followed this article:
Under the section: ADDING A MONITOR (MANUAL)

On Sat, Jan 7, 2017 at 6:36 PM, Shinobu Kinjo <skinjo@xxxxxxxxxx> wrote:
How did you add a third MON?
Regards,
On Sun, Jan 8, 2017 at 7:01 AM, Alex Evonosky <alex.evonosky@xxxxxxxxx> wrote:
> Anyone see this before?
>
>
> 2017-01-07 16:55:11.406047 7f095b379700 0 cephx: verify_reply couldn't
> decrypt with error: error decoding block for decryption
> 2017-01-07 16:55:11.406053 7f095b379700 0 -- 10.10.10.138:6789/0 >>
> 10.10.10.252:6789/0 pipe(0x55cf8d028000 sd=11 :47548 s=1 pgs=0 cs=0 l=0
> c=0x55cf8ce28f00).failed verifying authorize reply
>
>
>
> Two monitors are up just fine, just trying to add a third and a quorum
> cannot be met. NTP is running and no iptables running at all on internal
> cluster.
>
>
> Thank you.
> -Alex
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>