Thank you again for all the help. I changed the mon_initial_members
in /etc/ceph/es-c1.conf to only include the current (bootstrapped)
monitor and the command succeeds.
That would have been my next question; I noticed you had three monitors
listed in the conf file but only one mentioned in mon_host. The custom
file names were more important to resolve first, though.
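For reference, the relevant lines in the [global] section would then look
roughly like this (just a sketch based on your conf, assuming cnode-01 is
the monitor you bootstrapped):

mon_initial_members = cnode-01
mon_host = 192.168.122.39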
As for the custom cluster name. Is that something people would
recommend or not?
It depends: do you have plans for a multi-site setup? In that case a
custom name can help you distinguish which cluster you're currently
working on and avoid confusion. But I'd say it's easier to stick with the
default if there's no requirement to do otherwise. Also, from my memory
of the threads on this list over the last couple of years, not many
people have mentioned using custom cluster names, but that's just my
impression, I might be completely wrong. :-)
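Just to illustrate what a custom name means in day-to-day use: the config
and keyring file names are derived from the cluster name, and every CLI
call needs the --cluster flag, e.g.:

/etc/ceph/es-c1.conf
/etc/ceph/es-c1.client.admin.keyring
sudo ceph --cluster es-c1 -s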
Zitat von "Tuffli, Chuck" <chuck.tuffli@xxxxxxx>:
Thank you again for all the help. I changed the mon_initial_members
in /etc/ceph/es-c1.conf to only include the current (bootstrapped)
monitor and the command succeeds.
As for the custom cluster name. Is that something people would
recommend or not?
________________________________
From: Tuffli, Chuck <chuck.tuffli@xxxxxxx>
Sent: Thursday, May 13, 2021 9:50 AM
To: Eugen Block <eblock@xxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re: monitor connection error
OK, that change makes the error message go away, but the ceph
command then seemingly hangs:
[centos@cnode-01 ~]$ time sudo ceph --cluster es-c1 --status
^CCluster connection aborted
real 6m33.555s
user 0m0.140s
sys 0m0.041s
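In hindsight, a connect timeout and a direct query of the monitor's admin
socket would probably have saved the wait; a rough sketch, assuming the
default socket path for a cluster named es-c1:

sudo ceph --cluster es-c1 --connect-timeout 10 -s
sudo ceph --admin-daemon /var/run/ceph/es-c1-mon.cnode-01.asok mon_status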
________________________________
From: Eugen Block <eblock@xxxxxx>
Sent: Thursday, May 13, 2021 9:15 AM
To: Tuffli, Chuck <chuck.tuffli@xxxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re: Re: monitor connection error
Sorry, hit the send button too early. Could you try to rename the keyring
file like this: es-c1.client.admin.keyring
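For example (just a sketch, using the paths from your earlier output):

sudo mv /etc/ceph/ceph.client.admin.keyring /etc/ceph/es-c1.client.admin.keyring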
Quoting Eugen Block <eblock@xxxxxx>:
I think you need to rename the admin keyring file according to the
cluster name, too.
Zitat von "Tuffli, Chuck" <chuck.tuffli@xxxxxxx>:
Thank you for trying to reproduce my issue. I think I did that step:
[centos@cnode-01 ~]$ ls -l /etc/ceph/ceph.client.admin.keyring
-rw-------. 1 root root 151 May 10 23:20 /etc/ceph/ceph.client.admin.keyring
[centos@cnode-01 ~]$ sudo cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQC6v5lgi0JBAhAAJ9Duj11SufIKidydIgO82Q==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
[centos@cnode-01 ~]$ sudo grep -r AQC6v5 /etc/ceph /tmp
/etc/ceph/ceph.client.admin.keyring: key = AQC6v5lgi0JBAhAAJ9Duj11SufIKidydIgO82Q==
/tmp/ceph.mon.keyring: key = AQC6v5lgi0JBAhAAJ9Duj11SufIKidydIgO82Q==
[centos@cnode-01 ~]$
________________________________
From: Eugen Block <eblock@xxxxxx>
Sent: Thursday, May 13, 2021 12:37 AM
To: Tuffli, Chuck <chuck.tuffli@xxxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re: Re: monitor connection error
I could reproduce your issue; it seems you're missing a keyring file
in /etc/ceph. Did you miss step 9 from the manual deployment guide?
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
  --gen-key -n client.admin \
  --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
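And if I remember the guide correctly, the admin keyring then also gets
imported into the monitor keyring, roughly:

sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring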
Zitat von "Tuffli, Chuck" <chuck.tuffli@xxxxxxx>:
-----Original Message-----
From: Eugen Block [mailto:eblock@xxxxxx]
Sent: Tuesday, May 11, 2021 11:39 PM
To: ceph-users@xxxxxxx
Subject: Re: monitor connection error
Hi,
What is this error trying to tell me? TIA
It tells you that the cluster is not reachable for the client; this can
have various causes.
Can you show the output of your conf file?
cat /etc/ceph/es-c1.conf
[centos@cnode-01 ~]$ cat /etc/ceph/es-c1.conf
[global]
fsid = 3c5da069-2a03-4a5a-8396-53776286c858
mon_initial_members = cnode-01,cnode-02,cnode-03
mon_host = 192.168.122.39
public_network = 192.168.122.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_journal_size = 1024
osd_pool_default_size = 3
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 333
osd_pool_default_pgp_num = 333
osd_crush_chooseleaf_type = 1
[centos@cnode-01 ~]$
Is the monitor service up and running? I take it you don't use cephadm
yet, so it's not a containerized environment?
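You could check with systemd, and the monitor's journal might show
something useful, for example (assuming the mon id matches the hostname):

sudo systemctl status ceph-mon@cnode-01
sudo journalctl -u ceph-mon@cnode-01 -n 50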
Correct, this is bare metal and not a containerized environment. And
I believe it is running:
[centos@cnode-01 ~]$ sudo systemctl --all | grep ceph
ceph-crash.service          loaded active running  Ceph crash dump collector
ceph-mon@cnode-01.service   loaded active running  Ceph cluster monitor daemon
system-ceph\x2dmon.slice    loaded active active   system-ceph\x2dmon.slice
ceph-mon.target             loaded active active   ceph target allowing to start/stop all ceph-mon@.service instances at once
ceph.target                 loaded active active   ceph target allowing to start/stop all ceph*@.service instances at once
[centos@cnode-01 ~]$
Regards,
Eugen
Zitat von "Tuffli, Chuck" <chuck.tuffli@xxxxxxx>:
Hi
I'm new to ceph and have been following the Manual Deployment document
[1]. The process seems to work correctly until step 18 ("Verify that
the monitor is running"):
[centos@cnode-01 ~]$ uname -a
Linux cnode-01 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50
UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[centos@cnode-01 ~]$ ceph -v
ceph version 15.2.11 (e3523634d9c2227df9af89a4eac33d16738c49cb)
octopus (stable)
[centos@cnode-01 ~]$ sudo ceph --cluster es-c1 -s
[errno 2] RADOS object not found (error connecting to the cluster)
[centos@cnode-01 ~]$
What is this error trying to tell me? TIA
[1] https://docs.ceph.com/en/latest/install/manual-deployment/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx