Re: Cluster network and public network

Hi MJ,

this should work. Note that when using cloned devices all traffic will go through the same VLAN. In that case, I believe you can simply remove the cluster network definition and use just one IP; there is no point in having a second IP on the same VLAN. You will probably have to set "noout,nodown" for the flip-over, which will probably require a restart of each OSD. I think, however, that a disappearing back network has no real consequences, as the heartbeats always go over both networks. There might be stuck replication traffic for a while, but even this can be avoided with "osd pause".
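
A rough sketch of that flip-over, assuming the "cluster network" option lives in ceph.conf and the flags are set once from an admin node (standard Ceph CLI; adapt to however you manage your config):

    ceph osd set noout
    ceph osd set nodown
    ceph osd set pause        # optional: quiesce client I/O during the switch
    # remove the "cluster network" line from ceph.conf on all nodes
    # (or, with a centralized config: ceph config rm global cluster_network)
    systemctl restart ceph-osd.target   # on each OSD host
    ceph osd unset pause
    ceph osd unset nodown
    ceph osd unset noout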

Our configuration with 2 VLANs is this:

public network: ceph0.81: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000

cluster network: ceph0.82: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000

ceph0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 9000

em1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 9000
em2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 9000
p1p1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 9000
p1p2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 9000
p2p1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 9000
p2p2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 9000
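
In ceph.conf this maps to something like the following (the subnets here are made up for illustration; use whatever ranges sit on your .81 and .82 VLANs):

    [global]
        public network  = 192.168.81.0/24
        cluster network = 192.168.82.0/24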

If you already have 2 VLANs with different IDs, then this flip-over is trivial. I did it without a service outage.

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: mj <lists@xxxxxxxxxxxxx>
Sent: 12 May 2020 13:12:47
To: ceph-users@xxxxxxx
Subject:  Re: Cluster network and public network

Hi,

On 11/05/2020 08:50, Wido den Hollander wrote:
> Great to hear! I'm still behind this idea and all the clusters I design
> have a single (or LACP) network going to the host.
>
> One IP address per node where all traffic goes over. That's Ceph, SSH,
> (SNMP) Monitoring, etc.
>
> Wido

We have an 'old-style' cluster with a separate LAN/cluster network. We would
like to move over to the 'new-style'.

Is it as easy as: define a 2x10G LACP bond0, add both NICs to the bond0
config, and configure it like:

> auto bond0
> iface bond0 inet static
>     address 192.168.0.5
>     netmask 255.255.255.0

and add our cluster IP as a second IP, like

> auto bond0:1
> iface bond0:1 inet static
>     address 192.168.10.160
>     netmask 255.255.255.0
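
(plus, presumably, the usual ifupdown/ifenslave bonding options in the bond0 stanza; the interface names below are just placeholders:)

> bond-slaves eno1 eno2
> bond-mode 802.3ad
> bond-miimon 100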

On all nodes, reboot, and everything will work?

Or are there ceph specifics to consider?

Thanks,
MJ
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
