Re: Ceph monitor ip address issue

Hi,

Many thanks for your feedback. I've redeployed my cluster and now it's working. One last beginner question:

The default replication size has been 3 for a while now. If I set min_size to 1, does that mean that in a 3-node cluster two nodes (no matter which ones) could crash and I would still have a working cluster?
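
For reference, this is roughly how I would set and check those values on a pool (assuming a pool named rbd - please correct me if I have this wrong):

ceph osd pool set rbd size 3       # keep 3 copies of each object
ceph osd pool set rbd min_size 1   # keep serving I/O while at least 1 copy is up
ceph osd pool get rbd size
ceph osd pool get rbd min_size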

Regards - Willi

On 09/08/2015 10:23 AM, Joao Eduardo Luis wrote:
On 09/08/2015 08:13 AM, Willi Fehler wrote:
Hi Chris,

I tried to reconfigure my cluster but my MONs are still using the wrong
network. The new ceph.conf was pushed to all nodes and ceph was restarted.
If your monitors are already deployed, you will need to move them to the
new network manually. Once deployed, the monitors no longer consult
ceph.conf for their addresses but use the monmap instead - only
clients look at ceph.conf to figure out where the monitors are.

You will need to follow the procedure to add/rm monitors [1].

HTH.

   -Joao

[1] http://ceph.com/docs/master/rados/operations/add-or-rm-mons/
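
As a very rough sketch, the monmap-editing route (the docs also describe this
under changing a monitor's IP address) looks something like the following.
I'm only plugging in Willi's hostnames and public addresses as an example -
follow [1] and the docs, not this verbatim:

ceph mon getmap -o /tmp/monmap            # fetch the monmap the cluster actually uses
monmaptool --print /tmp/monmap            # shows which addresses the mons are registered with
monmaptool --rm linsrv001 /tmp/monmap     # drop the old 10.10.10.x entries
monmaptool --rm linsrv002 /tmp/monmap
monmaptool --rm linsrv003 /tmp/monmap
monmaptool --add linsrv001 192.168.0.5:6789 /tmp/monmap   # re-add on the public network
monmaptool --add linsrv002 192.168.0.6:6789 /tmp/monmap
monmaptool --add linsrv003 192.168.0.7:6789 /tmp/monmap
# stop all monitors, inject the edited map on every mon node, then start them again
ceph-mon -i linsrv001 --inject-monmap /tmp/monmap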



[root@linsrv001 ~]# netstat -tulpen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name
tcp        0      0 10.10.10.1:6789         0.0.0.0:*               LISTEN      0          19969      1793/ceph-mon

[root@linsrv001 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.5    linsrv001
192.168.0.6    linsrv002
192.168.0.7    linsrv003

[root@linsrv001 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
mon_initial_members = linsrv001, linsrv002, linsrv003
mon_host = 192.168.0.5,192.168.0.6,192.168.0.7
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
public_network = 192.168.0.0/24
cluster_network = 10.10.10.0/24

[osd]
osd recovery max active = 1
osd max backfills = 1
filestore max sync interval = 30
filestore min sync interval = 29
filestore flusher = false
filestore queue max ops = 10000
filestore op threads = 2
osd op threads = 2

[client]
rbd cache = true
rbd cache writethrough until flush = true

Regards - Willi

On 09/08/2015 08:53 AM, Willi Fehler wrote:
Hi Chris,

thank you for your support. I will try to reconfigure my settings.

Regards - Willi

On 09/08/2015 08:43 AM, Chris Taylor wrote:
Willi,

Looking at your conf file a second time, it looks like you have the
MONs on the same boxes as the OSDs. Is this correct? In my cluster
the MONs are on separate boxes.

I'm making an assumption about your public_network, but try changing your
     mon_host = 10.10.10.1,10.10.10.2,10.10.10.3
to
     mon_host = 192.168.0.1,192.168.0.2,192.168.0.3

You might also need to update your hosts file to reflect the correct
names and IP addresses.
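
For example, something along these lines (using the same guessed addresses
as above - substitute whatever your nodes really have on the public network):

192.168.0.1    linsrv001
192.168.0.2    linsrv002
192.168.0.3    linsrv003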



My ceph.conf:

[global]
fsid = d960d672-e035-413d-ba39-8341f4131760
mon_initial_members = ceph-mon1, ceph-mon2, ceph-mon3
mon_host = 10.20.0.11,10.20.0.12,10.20.0.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
public_network = 10.20.0.0/24
cluster_network = 10.21.0.0/24

[osd]
osd recovery max active = 1
osd max backfills = 1
filestore max sync interval = 30
filestore min sync interval = 29
filestore flusher = false
filestore queue max ops = 10000
filestore op threads = 2
osd op threads = 2

[client]
rbd cache = true
rbd cache writethrough until flush = true




On 09/07/2015 10:20 PM, Willi Fehler wrote:
Hi Chris,

could you please send me your ceph.conf? I tried to set "mon addr",
but it looks like it was ignored the whole time.

Regards - Willi


On 09/07/2015 08:47 PM, Chris Taylor wrote:
My monitors are only connected to the public network, not the
cluster network. Only the OSDs are connected to the cluster network.

Take a look at the diagram here:
http://ceph.com/docs/master/rados/configuration/network-config-ref/

-Chris

On 09/07/2015 03:15 AM, Willi Fehler wrote:
Hi,

any ideas?

Many thanks,
Willi

On 09/07/2015 08:59 AM, Willi Fehler wrote:
Hello,

I'm trying to set up my first Ceph cluster on Hammer.

[root@linsrv002 ~]# ceph -v
ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)

[root@linsrv002 ~]# ceph -s
     cluster 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
      health HEALTH_OK
      monmap e1: 3 mons at {linsrv001=10.10.10.1:6789/0,linsrv002=10.10.10.2:6789/0,linsrv003=10.10.10.3:6789/0}
             election epoch 256, quorum 0,1,2 linsrv001,linsrv002,linsrv003
      mdsmap e60: 1/1/1 up {0=linsrv001=up:active}, 2 up:standby
      osdmap e622: 9 osds: 9 up, 9 in
       pgmap v1216: 384 pgs, 3 pools, 2048 MB data, 532 objects
             6571 MB used, 398 GB / 404 GB avail
                  384 active+clean

My issue is that I have two networks, a public network (192.168.0.0/24)
and a cluster network (10.10.10.0/24), and my monitors should listen on
192.168.0.0/24. Later I want to use CephFS over the public network.

[root@linsrv002 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
mon_initial_members = linsrv001, linsrv002, linsrv003
mon_host = 10.10.10.1,10.10.10.2,10.10.10.3
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
mon_clock_drift_allowed = 1
public_network = 192.168.0.0/24
cluster_network = 10.10.10.0/24

[root@linsrv002 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.1    linsrv001
10.10.10.2    linsrv002
10.10.10.3    linsrv003

I've deployed my first cluster with ceph-deploy. What should I do to
have the monitors listen on port 6789 on the public network?
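
For what it's worth, this is how I have been checking which addresses the
monitors are registered with:

ceph mon dump    # prints the monmap, including each monitor's address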

Regards - Willi



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


