Re: monitor connection error

> -----Original Message-----
> From: Eugen Block [mailto:eblock@xxxxxx]
> Sent: Tuesday, May 11, 2021 11:39 PM
> To: ceph-users@xxxxxxx
> Subject:  Re: monitor connection error
> 
> Hi,
> 
> > What is this error trying to tell me? TIA
> 
> it tells you that the client cannot reach the cluster; this can have
> various causes.
> 
> Can you show the output of your conf file?
> 
> cat /etc/ceph/es-c1.conf

[centos@cnode-01 ~]$ cat /etc/ceph/es-c1.conf
[global]
fsid = 3c5da069-2a03-4a5a-8396-53776286c858
mon_initial_members = cnode-01,cnode-02,cnode-03
mon_host = 192.168.122.39
public_network = 192.168.122.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_journal_size = 1024
osd_pool_default_size = 3
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 333
osd_pool_default_pgp_num = 333
osd_crush_chooseleaf_type = 1
[centos@cnode-01 ~]$
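One thing worth checking: with a non-default cluster name, "[errno 2] RADOS object not found" is often just the client failing to find the admin keyring for that name. The --cluster flag only changes which file names the ceph tool derives. A minimal sketch of that naming convention (the paths are the standard Ceph defaults, not something confirmed in this thread):

```shell
# Sketch: how the --cluster flag maps to config/keyring paths.
# Standard Ceph naming convention; whether these files exist on this
# host is an assumption to verify.
cluster=es-c1
conf="/etc/ceph/${cluster}.conf"
keyring="/etc/ceph/${cluster}.client.admin.keyring"

echo "config:  $conf"
echo "keyring: $keyring"
# If the keyring file is missing or unreadable by the calling user,
# 'ceph --cluster es-c1 -s' commonly fails with:
#   [errno 2] RADOS object not found (error connecting to the cluster)
```

So it may be worth confirming that /etc/ceph/es-c1.client.admin.keyring exists and is readable by root.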

> Is the monitor service up and running? I take it you don't use cephadm yet,
> so it's not a containerized environment?

Correct, this is bare metal and not a containerized environment. And I believe it is running:
[centos@cnode-01 ~]$ sudo systemctl --all | grep ceph
  ceph-crash.service                                                                       loaded    active   running   Ceph crash dump collector
  ceph-mon@cnode-01.service                                                                loaded    active   running   Ceph cluster monitor daemon
  system-ceph\x2dmon.slice                                                                 loaded    active   active    system-ceph\x2dmon.slice
  ceph-mon.target                                                                          loaded    active   active    ceph target allowing to start/stop all ceph-mon@.service instances at once
  ceph.target                                                                              loaded    active   active    ceph target allowing to start/stop all ceph*@.service instances at once
[centos@cnode-01 ~]$
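Since the mon daemon shows as active, another quick check is whether it is actually answering on the wire from the client's point of view. A sketch using bash's /dev/tcp (the IP is the mon_host from the conf above; the ports are the Ceph defaults, which I'm assuming this cluster uses):

```shell
# Probe the monitor's default ports from the client.
# 3300 = msgr v2, 6789 = legacy msgr v1 (Ceph default ports).
host=192.168.122.39
reachable=no
for port in 3300 6789; do
  if timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "port ${port} open on ${host}"
    reachable=yes
  fi
done
echo "reachable=${reachable}"
```

If neither port answers, that points at the monitor not binding where the client expects (or a firewall), rather than an auth problem.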

> Regards,
> Eugen
> 
> 
> Zitat von "Tuffli, Chuck" <chuck.tuffli@xxxxxxx>:
> 
> > Hi
> >
> > I'm new to ceph and have been following the Manual Deployment document
> > [1]. The process seems to work correctly until step 18 ("Verify that
> > the monitor is running"):
> >
> > [centos@cnode-01 ~]$ uname -a
> > Linux cnode-01 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50
> > UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
> > [centos@cnode-01 ~]$ ceph -v
> > ceph version 15.2.11 (e3523634d9c2227df9af89a4eac33d16738c49cb)
> > octopus (stable)
> > [centos@cnode-01 ~]$ sudo ceph --cluster es-c1 -s
> > [errno 2] RADOS object not found (error connecting to the cluster)
> > [centos@cnode-01 ~]$
> >
> > What is this error trying to tell me? TIA
> >
> > [1] (link to the Ceph manual-deployment documentation; URL mangled by
> > the archive's link rewriter)
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> 
> 


